Addressing Anxiety: Practitioners’ Examination of Mindfulness in Constructivist Supervision

Jennifer Scaturo Watkinson, Gayle Cicero, Elizabeth Burton

 

It is widely documented that practicum students experience anxiety as a natural part of their counselor development. Within constructivist supervision, mindfulness exercises are used to help counselors-in-training (CITs) work with their anxiety by having them focus on their internal experiences. To inform and strengthen our practice, we engaged in a practitioner inquiry study to understand how practicum students experienced mindfulness as a central part of supervision. We analyzed 25 sandtray reflections and compared them to transcripts from two focus groups to uncover three major themes related to the student experience: (a) openness to the process, (b) reflection and self-care, and (c) attention to the doing. One key lesson learned was the importance of balancing mindfulness exercises to highlight the internal experiences related to anxiety while providing adequate opportunities for CITs to share stories and hear from peers during group supervision. 

Keywords: supervision, mindfulness, counselors-in-training, anxiety, practitioner inquiry

 

It is widely documented that counselors-in-training (CITs) experience anxiety as part of the developmental process (Auxier et al., 2003; Kuo et al., 2016; Moss et al., 2014). Reasons for anxiety include CITs’ doubts about their ability to perform competently within their professional role (Moss et al., 2014) coupled with perfectionism (Kuo et al., 2016). Additionally, Auxier et al. (2003) noted that CITs’ anxiety also stems from the pressure associated with external evaluation provided by supervisors. Wagner and Hill (2015) added that CITs’ need for external validation from their supervisors, coupled with the belief that there is only one right way to counsel clients, also generates anxiety. This need for external validation creates an overreliance on a supervisor’s judgment that could render a CIT helpless (Wagner & Hill, 2015). Although a moderate amount of anxiety may increase a person’s focus and positively impact productivity, too much anxiety impedes learning and growth (Kuo et al., 2016). Hence, there is a need for supervisors to address anxiety early in a CIT’s development to foster self-reliance and professional growth (Ellis et al., 2015; Mehr et al., 2015).

The two lead authors of this article, Jennifer Scaturo Watkinson and Gayle Cicero, are counselor educators who supervised school counseling practicum students and subscribed to a constructivist approach to supervision. While discussing supervision pedagogy, we shared our observations of how anxious our practicum students were about being evaluated and our belief that their anxiety often limited their professional growth and development as counselors. Within constructivist supervision, mindfulness exercises are used to help CITs work with their anxiety by having them focus on their internal experiences of discomfort (Guiffrida, 2015). Thus, we utilized mindfulness as a central approach to helping our students work with the anxiety associated with the counselor developmental process.

To assist in our planning, we reviewed the supervision literature and found that discussions on mindfulness were largely conceptual (Guiffrida, 2015; Johnson et al., 2020; Schauss et al., 2017; Sturm et al., 2012) or outcome-based (Bohecker et al., 2016; Campbell & Christopher, 2012; Carson & Langer, 2006; Daniel et al., 2015; Dong et al., 2017), with limited focus on supervision pedagogy to guide supervisors on how to integrate mindfulness into their practicum seminars, particularly from the perspective of the practitioner. Further, Barrio Minton et al. (2014) and Brackette (2014) confirmed that there was a scarcity of counselor education literature that focused on teaching pedagogy and argued that more research in this area was needed to improve counselor preparation. To add to the current literature on supervision pedagogy and inform our practice, we engaged in a practitioner inquiry study (Cochran-Smith & Lytle, 2009) and formed a professional learning community to investigate how utilizing mindfulness within our supervision could help school counseling practicum students work with their anxiety.

Literature Review

Constructivist Supervision
     Supervisors who utilize constructivist principles help CITs make meaning of their experiences by examining how their approaches benefit their clients (Guiffrida, 2015). Constructivism is built upon the belief that knowledge is not derived from absolute realities but rather is localized to specific contexts and personal experiences. McAuliffe (2011) argued that knowledge is “continually being created through conversations” and is not given to the learner through a one-sided expert account. Constructivists believe that learning is “reflexive and includes a tolerance for ambiguity” (McAuliffe, 2011, p. 4). Constructivist supervisors prioritize CITs’ experiences, encouraging them to examine the intent behind their approach and reach their own conclusions. Hence, constructivist supervisors help supervisees deconstruct experiences that have multiple “right” approaches to client care while normalizing the anxiety associated with professional growth. Within a constructivist supervision framework, moderate amounts of anxiety are not viewed as problematic but rather are seen as a catalyst for change (Guiffrida, 2015) and part of the learning process (McAuliffe, 2011). Guiffrida (2015) asserted that the aim of supervision in the early stages of counselor development is not to remove feelings of anxiety but rather to help the CIT acknowledge and live with the anxiety. Utilizing mindfulness, supervisors acknowledge CITs’ internal experiences and guide them through intentional mindfulness practices to generate personal and professional reflection and meaning making.

Within constructivist supervision, mindfulness is a central approach to helping CITs work with their anxiety (Guiffrida, 2015). Kabat-Zinn (2016) defined mindfulness as “paying attention in a sustained and particular way: on purpose, in the present moment and nonjudgmentally” (p. 1). Constructivist supervisors facilitate learning experiences that promote introspection and intentionally direct CITs to examine their internal experience, without judgment, during times of disequilibrium. Rather than helping a CIT rid themselves of anxiety, the constructivist supervisor acknowledges that anxiety is a normal response to the uncertainty of doing something for the first time (Guiffrida, 2015). Mindfulness provides a platform for a supervisor to normalize anxiety within the supervisory relationship (Sturm et al., 2012). Hence, supervisors can utilize mindfulness to prioritize CITs’ internal experiences (e.g., doubt, uncertainty, fear) and foster self-reliance.

Mindfulness as an Approach
     Mindfulness practices are linked to the personal and professional growth of CITs (Bohecker et al., 2016; Campbell & Christopher, 2012). Campbell and Christopher (2012) compared counseling students who participated in a mindfulness-based stress reduction (MBSR) program to a control group and found that the MBSR participants reported significant decreases in stress, negative affect, rumination, and state and trait anxiety, along with significant increases in positive affect and self-compassion. Additionally, Christopher and Maris (2010) reported that supervisees who were exposed to mindfulness were “more open, aware, self-accepting, and less defensive in supervision” (p. 123). Similarly, Bohecker et al. (2016) discovered that CITs who participated in a mindfulness experiential small group saw the benefits of attending to their emotions (e.g., internal experiences) and acknowledged that mindfulness increased self-awareness and promoted objectivity when attending to their thoughts. This objectivity allowed them to be in the present, which positively affected their behavioral responses (Bohecker et al., 2016).

CITs also benefited from having mindfulness incorporated into their practicum and internship seminar classes. Dong et al. (2017) examined CITs’ responses to mindfulness-based activities and discussions during internship seminar. Results suggested that CITs who engaged in mindfulness practices were more focused on the moment and responded to stressors with acceptance and nonjudgment. As a result, CITs were more likely to be “okay with not being okay” when faced with challenging situations (Dong et al., 2017, p. 311). Additionally, Dong and colleagues noted that participants were able to validate themselves when they made mistakes and were more accepting of their rough edges. Similarly, Carson and Langer (2006) found that CITs who received mindfulness as part of their supervision were better able to examine the thoughts that contributed to their anxiety and were more open to accepting their mistakes as learning opportunities. As a result, CITs focused less on self-criticism and were less vulnerable when they made mistakes (Carson & Langer, 2006). These studies highlight how CITs benefited from integrating mindfulness into group supervision, yet there is limited research on how counselor educators might structure their practicum seminars to include mindfulness as an integrated approach to supervision.

Purpose of the Present Study
     The purpose of this practitioner inquiry was to inform Watkinson and Cicero’s practice as supervisors of practicum school counseling students within a CACREP-accredited program. We utilized mindfulness as a central approach to group supervision during practicum seminar and wanted to understand how intentional mindfulness exercises that prioritized the CITs’ internal experiences (e.g., uncertainty, doubt, fear) were perceived by our students. By understanding the student experience, we could make informed decisions about how we might improve upon the way we integrate mindfulness into future seminar meetings. Specifically, we were guided by this research question: How are CITs experiencing mindfulness as part of group supervision provided during practicum seminar?

Method

We engaged in a practitioner inquiry study (Cochran-Smith & Lytle, 2009) to examine the application of mindfulness within the context of our practice. Cochran-Smith and Lytle (2009) argued that the examination of one’s practice privileges practitioner knowledge and adds to the overall discourse on teaching pedagogy, as “deep and significant changes in practice can only be brought about by those closest to the day-to-day work of teaching and learning” (p. 6). Although not intended to generalize knowledge, practitioner inquiry positions the researcher as a participant to uncover tensions and challenges that come from applying theory to practice while enhancing the knowledge of the practitioner doing the investigation (Cochran-Smith & Lytle, 2009). Thus, we intended to reflect upon how we integrated mindfulness into supervision by understanding the experiences of our practicum students.

Participants
     We gained approval from our university’s IRB to conduct the study and invited all 33 CITs enrolled in our practicum sections to participate. Twenty-five (76%) CITs agreed to participate. Of the 25 participants, 24 identified as female (96%) and one identified as male (4%). Sixteen students (64%) self-identified as White/Caucasian, five (20%) as African American, three (12%) as Hispanic, and one (4%) as other. Eighty-four percent of participants were full-time students and 16% identified as part-time. Students were told they could withdraw their participation at any time. All practicum students completed their field experience in public schools.

To prevent students from feeling obligated to join the study, Watkinson and Cicero did not learn which students had agreed to participate until the end of the semester, when grades were submitted. To protect participant identity until after the semester, we took the following steps: 1) the third author, Elizabeth Burton, was the only one who knew the identity of the participants; 2) Burton recruited participants, stored data (erasing identifying information), and communicated with the participants; 3) the data source labeled sandtray reflections consisted of activities that all CITs completed as part of a required seminar experience; 4) focus groups were held after the semester concluded and grades were submitted; and 5) during data collection, Watkinson and Cicero never discussed the study with any of the CITs enrolled in practicum.

Seminar Context
     The practicum course is the first field experience for CITs enrolled in the school counseling master’s program. Per the 2016 CACREP Standards, practicum is a 100-hour field experience in which 40% of the hours (i.e., 40 hours) must be in direct service. In addition to meeting those direct hours by working with several individual clients, practicum students are required to design and run a small counseling group and deliver several classroom lessons within schools. Further, CACREP-accredited programs must provide practicum students with an average of 1.5 hours of group supervision per week throughout the semester. Thus, our practicum seminars were designed to provide CITs with the required group supervision.

All practicum seminar sessions met in person except for one, which was held synchronously through Zoom, a web conferencing platform. There were three sections of practicum, two taught by Cicero and one taught by Watkinson. Watkinson and Cicero drew upon constructivist supervision principles and mindfulness core concepts (e.g., self-compassion, present moment, and nonjudgment) to guide the planning of the practicum seminars. We maintained similar course structures, objectives, and learning outcomes, utilizing similar room arrangements, mindfulness exercises, and structured learning experiences. Mindfulness exercises were central to the practicum seminar and focused on the practicum students’ internal experiences. The 15 weekly practicum seminars were 90 minutes in length, and student-to-faculty ratios were 9:1 for two of the practicum sections and 6:1 for the third. The room arrangement consisted of a circle of chairs for students to use during the opening and closing of the seminar, along with a designated workspace where students could sit at tables to take notes or complete reflective class experiences. Soft meditation music played as students entered the room and was turned off to signal the beginning of class.

Watkinson and Cicero engaged in weekly collaborative planning meetings throughout the 15-week semester to plan their seminar meetings and share insights related to student learning. The instructional design was experiential and incorporated mindfulness exercises during the opening of the seminar to bring attention to the “here and now,” breath, nonjudgment, and self-compassion. Cicero, who had prior training in mindfulness, selected the exercises based upon that training and taught Watkinson how to implement them during their weekly meetings. Many of the opening mindfulness exercises can be found through internet searches.

Structure of Seminar Meetings
     The structure and room arrangement for each practicum seminar were consistent across the three sections. Fourteen of the 15 seminar meetings began with the CITs participating in a 5-minute mindfulness opening that transitioned into structured learning experiences and ended with a sharing circle. Seminar Meeting 11 was entirely dedicated to mindfulness, engaging practicum students in several mindfulness activities for the purpose of drawing their attention to breath and reflection.

Mindfulness Openings
     The 5-minute mindfulness openings were scripted and consisted of either a guided meditation (e.g., Calm Still Lake, A River Runs Through It), intentional breathing exercises (e.g., Balloon Breath, Meditative Chimes), or chair yoga (e.g., Mountain Pose, Warrior 2). Each mindfulness opening concluded with reflective questions to increase awareness of the present moment (e.g., What was this experience like for you?). The meditation exercises were varied to introduce CITs to different approaches they might want to try outside of seminar for personal use or in their own practice with K–12 students.

Structured Learning Experiences
     After the mindfulness opening, CITs participated in structured learning experiences that focused on counselor development, case conceptualization, group counseling leadership, evidence-based planning, or classroom curriculum development and instruction. Guided by constructivist supervision principles, two of the structured learning experiences implemented were metaphorical case drawing (Guiffrida, 2015) and sandtray (Guiffrida, 2015; Saltis et al., 2019).

     Metaphorical Case Drawing. Guiffrida’s (2015) metaphorical case drawing was used to assist CITs in the development of their case conceptualization skills. In Guiffrida’s work, a metaphorical case drawing has three steps. First, CITs reflect upon six items that highlight their internal experiences and perspectives specific to an individual counseling session with one of their clients: 1) identification of the client’s primary concern, 2) description of the client and CIT interaction, 3) the CIT’s intention for the session, 4) the CIT’s description of how they viewed their performance as a counselor during the session, 5) a general assessment of how the session went, and 6) a statement on what the CIT thought the client gained from the session. Second, CITs use images and/or metaphors to respond to three of the six items to create a case drawing. Lastly, utilizing their case drawings, CITs share their cases with the supervisor and other supervisees. Through the presentation of each case, the CIT interprets their work while the supervisor and other supervisees listen and ask questions, offering alternative perspectives to facilitate deeper insight.

     Sandtray. Although sandtray is typically used in supervision to help CITs develop their case conceptualization skills (Anekstein et al., 2014; Guiffrida, 2015; Guiffrida et al., 2007), we modified our use of sandtray to focus CITs on their developmental journey as counselors. Like the metaphorical case drawing, the sandtray facilitates an internal examination in which CITs interpret their own experience (Guiffrida et al., 2007). The sandtray was used in Seminar Meetings 6 and 13 to document how CITs were encountering practicum at two different times in the semester. The written reflections that followed the sandtray were used as a data source for this study and are therefore described in further detail.

Prior to creating an image in the sandtray, CITs were asked to journal about their experience as a practicum student. The prompt was left open so that CITs would have the freedom to focus on the most salient part of their experience. Next, CITs were partnered to create a sandtray image, and each pair was given a large box that contained sand and a small baggie filled with a variety of miniature objects. CITs had 5 minutes to create an image in response to this prompt: Create an image that represents your practicum experience thus far. At the conclusion of the 5 minutes, CITs shared their stories with their partners. After everyone had created a sandtray image and shared, CITs wrote a reflection in response to this prompt: Drawing from the sandtray exercise and sharing, describe your experience in practicum thus far. Identify and describe the thoughts and feelings you have as you begin your work with students. These written reflections were submitted to the professor at the conclusion of the seminar meeting.

At Seminar Meeting 13, following the same procedure as in Seminar Meeting 6, CITs engaged in the sandtray activity again to create a new image in response to a new prompt: Create an image that describes your overall experience in practicum. After creating their image and sharing it with a partner, students reflected and responded in writing to a final prompt: Drawing from the sandtray exercise, describe your experience in practicum. Identify and describe your thoughts and feelings now that practicum has come to an end. What have you learned about yourself? Written reflections were completed during the seminar meeting and submitted to the professor when class ended.

Sharing Circle
     After the structured learning experience, each seminar concluded with a 5- to 10-minute sharing circle during which students summarized new insights and identified actions to implement at their practicum sites. The sharing circle was guided by two questions: What are some key takeaways from today’s seminar? and How might we use what we have learned today within our own practice?

Structure of Mindfulness Seminar Meeting
     Seminar Meeting 11 was fully dedicated to the practice of mindfulness and did not follow the seminar format and structure described above. During this one 90-minute class, CITs identified an intention, created a mindfulness jar, journaled, and walked a labyrinth. Johnson et al. (2020) argued that CITs who receive mindfulness as part of their supervision should start or maintain a mindfulness practice of their own. Yet nothing in the research identifies specific mindfulness exercises as essential to that practice, only that CITs should be exposed to mindfulness as part of the classroom experience (Johnson et al., 2020). Thus, our intent for this seminar meeting was to engage CITs in mindfulness exercises that would encourage meditation and reflection. For this class we requested a large room to accommodate a small circle arrangement of 10 chairs and three stations: walking a labyrinth, creating a mindfulness jar, and journaling. During this seminar meeting, CITs were instructed to visit the three stations at their own pace and to self-select the order in which they participated. Class opened with a mindfulness exercise that focused on breath and ended with a sharing circle to debrief. One closing question posed by the professors during the sharing circle was: What insights would you like to share about your experience in seminar today?

     Labyrinth. CITs were given a brief description of a labyrinth along with written instructions on how to set an intention and walk the labyrinth. We created a floor labyrinth for use during the seminar. CITs set their intention prior to walking the labyrinth. Some examples of intentions were to be open to the process or to demonstrate self-compassion. Once inside the labyrinth, CITs would follow the path and could walk the labyrinth as many times as they desired.

     Creating Mindfulness Jars. CITs created a mindfulness jar from an empty 8-ounce bottle, fine glitter, clear hand soap, confetti, and water. Directions on how to create a mindfulness jar were provided at the station. CITs were encouraged to use the mindfulness jar during the 90-minute seminar as a focal point to guide their breath during reflection time.

     Journaling. CITs were provided paper, pens, markers, and crayons for journaling at the beginning of the seminar. CITs were given minimal direction on what to journal beyond selecting a quiet place in the room in which to write and reflect upon their experience during the session. Journals were private, and CITs were not asked to share what they wrote with the professors or other CITs.

Data Sources and Collection
     We used three data sources to understand CITs’ experience with mindfulness as part of supervision: supervisor observations, sandtray reflections from Seminar Meetings 6 and 13, and focus group transcripts. Watkinson and Cicero captured supervisor observations in their meeting minutes, which also included specific plans for each seminar session along with assumptions and observations about CIT learning. The written sandtray reflections, completed and collected during the seminar meetings through the multi-step process described above, captured CITs’ overall experience in practicum at two different points in the semester. Data from the sandtray reflections collected at the end of the semester (Seminar Meeting 13) were analyzed to examine how CITs reflected on their overall practicum experience at the completion of the semester.

All 25 participating CITs were invited to participate in a focus group. Of the 25, nine (36%) attended; two focus groups were held to accommodate their schedules. Each focus group took place at the end of the academic semester after grades were issued and was held virtually on Zoom, recorded, and transcribed. Focus groups lasted 60 minutes, were co-led by Watkinson and Cicero, and served as a type of member checking. Guiding questions/prompts were: Describe your experience in practicum this semester, Describe your feelings throughout the semester, and What was it like for you to engage in mindfulness as part of your development as a counselor?

Trustworthiness
     Watkinson and Cicero are both counselor educators at a university located within the Mid-Atlantic region of the United States. Watkinson is a Caucasian middle-aged female with 14 years of experience as a school counselor and over 10 years of experience as a counselor educator. Cicero is a Caucasian middle-aged female with 30 years of experience in a large public school district as a teacher, school counselor, and district-level administrator of school counseling and student service programs, as well as 3 years of experience as a counselor educator. Watkinson and Cicero are licensed professional counselors, board-approved certified supervisors, and certified school counselors. Burton was a first-semester school counseling student and served as Watkinson’s graduate assistant. She is a Caucasian female with no prior experience in schools or as a counselor. At the time of data analysis, she had finished her first year of coursework and offered an additional perspective on how the data could be interpreted.

Watkinson and Cicero held certain biases and assumptions about how mindfulness might be experienced by CITs in their practicum sections. We assumed that mindfulness was beneficial to counselor development, yet beyond our assumption that mindfulness could help CITs work with their anxiety, we had no preconceived ideas about the type of benefit it would have on their professional growth. Additionally, we had observed that CITs, particularly at the practicum level, were anxious and worried about their performance, and we believed that supervision was needed to attend to that anxiety. Lastly, we shared a strong desire to better understand our own practice and were therefore open to, and expected, feedback that would strengthen that practice.

Trustworthiness was addressed in a variety of ways. In practitioner research, validation is obtained through a form of peer review, where practitioner researchers collaborate to discuss and reflect upon their experiences through peer feedback (Anderson & Herr, 1999; Cochran-Smith & Lytle, 2009). Thus, Watkinson and Cicero met weekly during the 15-week semester to share observations and obtain feedback related to their own practice. Further, during these meetings we engaged in critical dialogue to disrupt previously held assumptions and biases. For example, we challenged each other to share evidence to support the interpretations we made about how students were experiencing the course, asking the question, How do you know? Observations that included peer feedback were recorded in our meeting minutes.

Second, we engaged in prolonged observation of participants as we worked alongside CITs, acting in the role of both inside and outside observers during the 15-week semester. As Creswell (2013) asserted, validation of findings comes from prolonged engagement and persistent observation of participants. Third, we triangulated data, comparing Seminar Meeting 13 sandtray reflection data across the three practicum sections to the focus group transcripts (Merriam, 2009). Fourth, the focus groups served as a type of member checking (Merriam, 2009), allowing us to validate and refine our analysis of the final sandtray reflections against the perceptions shared by students in the focus groups.

Data Analysis
     We formed a research team and regularly met to debate and discuss the data during the analysis process. Data from the sandtray reflections taken during Seminar Meeting 13 were organized into a table for analysis so that we could track individual responses and practicum sections. Drawing from Creswell’s (2013) process for analyzing data, we each familiarized ourselves with the data by independently engaging in multiple readings of the final sandtray reflections and focus group transcripts, including memoing to capture initial impressions and key concepts. After familiarizing ourselves with the data, we met as a research team to share initial insights and bracket assumptions. Next, we reviewed each line of the final sandtray reflection data independently to identify initial codes. As a research team, we shared our codes, discussed discrepancies, and reviewed units of data until consensus was reached and a codebook was created. Next, codes from the final sandtray reflections were compared to the focus group transcripts and refined. Lastly, we looked for patterns in the data and organized them into themes.

Findings

To examine our supervision practice, we sought to understand how CITs experienced mindfulness as a supervision approach. Prioritizing mindfulness within our practicum seminar meetings focused our students on the examination and understanding of their internal experiences and meeting uncertainty with nonjudgment and self-compassion. After analyzing the data, three major themes emerged: openness to the process of becoming, reflection and self-care, and attention to the doing.

Openness to the Process of Becoming
     Although CITs acknowledged the challenges associated with their experience, they also expressed an openness to becoming a counselor who generated personal insight, self-compassion, and wisdom. As one participant stated, “It’s natural to feel uncertain when learning new concepts. However, uncertainty should not consume you and cause your thoughts to become negative. Give yourself permission to grow.” Another wrote, “The biggest growth I’ve seen in myself is self-awareness. Regardless of my weaknesses and shortcomings, I am good enough!! The greatest gift I can give to students is to be myself.”

CITs felt hopeful and purposeful in their development as counselors and expressed excitement about their professional growth. As one participant remarked, “In the beginning everything seemed new and scary, but when I look at the end, I see so much growth. I will continue to grow and expand. I look forward to my career.” Another wrote:

At the beginning of practicum, I felt awkward and unsure of myself. I felt self-conscious. At the end of practicum, I can feel the growth I’ve made. I no longer feel awkward or self-conscious. I know who I am and what kind of counselor I am.

     Acknowledging the challenges of their professional journey, CITs highlighted the emotional discomfort they felt at the start of practicum. One student stated: “Anxiety from the beginning—feeling of anxiety and not knowing what to expect.” Another mentioned in her reflection, “I definitely had feelings of inadequacy. I just didn’t think that I was doing what I needed to do.” Some students expressed this discomfort as cyclical:

Understanding everything that was going to be happening and everything that was expected and what it all entails, I definitely started to get more anxious and got comfortable and then getting [anxious] again. So, kind of like back and forth a lot.

Students compared this back-and-forth feeling to that of a rollercoaster: “I feel like some weeks I’d be on fire, like, yeah, I did really good . . . there would be other days where it’s like my timing is off and I’m uncomfortable in the classrooms . . . it was definitely a rollercoaster feeling.”

Another student agreed, sharing that they “would definitely second the rollercoaster. The beginning was very overwhelming for sure . . . that rollercoaster of like the expectation of learning . . . feeling like you’re doing really bad and then learning what is good.”

There was also a sense of wisdom in how the participants described what they gained from this experience of becoming. One participant mentioned “feeling depressed and anxious. . . . Fast forward 2 months and I had grown so much. I can’t believe in only 60 days my attitude toward practicum changed so dramatically. . . . change and growth take time, but it does happen.” Another CIT stated:

In my first reflection, there seemed to be a lot of low points, but I was hopeful things would get better. In my second reflection, I realized that the things I have done have made an impact and the highs and lows both got me to this point.

     CITs recognized the highs and lows they experienced and, within that recognition, focused on a greater purpose. As one wrote,

I started out being very unaware and doubtful of myself. I was overwhelmed and wasn’t seeing the beauty in the process of learning who I am as a counselor. I began to see the small and big impacts that I had with my students in 15 weeks. I saw the power that comes with being a counselor and am more mindful of the impact I have and will make.

Another reflected:

The biggest growth I’ve seen in myself is self-awareness. Awareness of my strengths and weaknesses so that I can be mindful of how to be the best I can be for all students. So that I can strive to have a positive impact on others.

Another mentioned:

At this point in the journey, I finally met my passion. I always wanted to have an impact not because I taught a great lesson, but because I helped a student and showed I cared. I grew by knowing how to use my tools to make a difference while finding my style of counseling in the process. The growth hasn’t stopped and needs refinement, but I want each day to be better for myself and the students.

     Additionally, CITs perceived feedback to be essential to their growth process. One CIT reflected that they “learned to be open to change . . . accepting feedback and letting it help me make positive changes throughout this journey. There is always a need for continued growth and development.” Another remarked:

I’ve realized that in order for me to learn and grow I have to be more open [to feedback]. Being closed off means that I am only working with what I know, which is not helpful to me personally, but also what we tell students not to do. Being open has forced me to become a more active participant in my learning and take more risks . . . it will all be worth it in the end.

Another practicum student focused on gratitude:

Feedback and supervision helped to change my perspective and boost my confidence. Things about myself that I thought had nothing to do with being a counselor were highlighted and the areas for improvement were spoken of and tended to with genuine care. I’m grateful to have had the experience of becoming so reflective. I’m grateful for the lows and the moments where I felt as though I was at a standstill. I’m grateful for falling so hard that my only option was to reach out and ask for help. I’m grateful for the hurdles . . . and I’m grateful for the ever-flowing river. I’m grateful for the art and the science of counseling. I’m grateful for who I’m becoming in the process of becoming. I’m grateful for grace and for the realization of how necessary it is. I’m grateful for family and adopted big sisters in the program. I’m grateful to have had the chance to say “I don’t know” and keep learning.

     The theme of openness to the journey was also highlighted in the acknowledgement of not being in control. There was an openness to embracing the unknown and the chaos associated with not having everything figured out, as one CIT concluded:

In the beginning, I was working really hard to try to figure everything out. I saw obstacles everywhere. As I moved on, I started to focus on counseling in a way that didn’t put pressure on me to do all of the right things. I started to grasp the essence of counseling and what makes the profession unique.

Another noted:

One major insight is that it was a chaotic journey. It’s not straightforward, and I don’t always know the path I’ll take, but I am continuously growing and learning about myself as a person and as a school counselor. . . . I am enjoying the unknown. I like what I am doing, and I like moving forward, even if I am unsure at times.

Reflection and Self-Care
     CITs reported that the seminar was very reflective, which gave them a sense of calm and a new appreciation for self-care. As one student commented, “I did, like everyone else, find [the seminar class] to be calming, enjoyable, and reflective.” Reflection generated by the mindfulness exercises gave CITs an opportunity to get to know themselves:

It was definitely a positive experience for sure. I would agree it was very calming and super reflective. I felt like I understood myself as a counselor and also just like as a person on my own personal journey. Even aside from that I felt like I learned a lot.

Further, CITs expressed the importance of reflection and giving themselves the space to be in the present moment as a means of self-care:

I am so wrapped up in everything that is going on in my life and getting everything done. And school takes a lot of everything I’ve got . . . to be reminded and practice [mindfulness] on a regular basis . . . but doing it each week in class, helped me to do it at home. So that was giving me that practice and repetition and it really made a huge difference.

Another mentioned, “There’s just so many things going on in your life . . . to be reflective and just calm my inner self and learn how to breathe . . . this was a life skill class for me,” and a different student elaborated, “I was so grateful for it because I realized how much self-reflection I have to do . . . that I need to keep doing it and making it a priority.”

Attention to the Doing
     Although students valued the priority that we placed upon mindfulness to better understand their internal experiences, some wished that we had provided more time for them to share stories about their practicum sites. As one CIT stated, “I would have liked to have had time each week for all of us to share what was going on and to learn from each other’s situations and to support each other in those situations.” Additionally, CITs desired to know more about what was happening at other practicum sites because they believed they were missing out on experiences. As one CIT explained, “I didn’t have a role model so it was nice to hear everyone else’s role models . . . so I could just learn from pieces I wasn’t getting [at my site].” Another CIT agreed: “I think it definitely would have helped to hear more about other people’s sites just because I wasn’t really getting a ton out of my site. Or I did get things, but differently.” Another mentioned, “I wanted to hear other people’s experiences because I felt like everyone was at such different schools and different levels . . . we’re all experiencing different things.”

Discussion

We sought to understand how practicum students experienced mindfulness exercises within supervision to improve our own practice. To help practicum students work with their anxiety, mindfulness exercises were heavily integrated into the course structure to engage all CITs in weekly reflective exercises that directed their attention toward their internal experiences. Practicum students were invited to acknowledge their anxiety and respond to it with nonjudgment and self-compassion. Mindfulness core concepts (e.g., being present, nonjudgment, self-compassion) served as a framework for how practicum students made meaning of their internal experiences. Although our focus was not to determine the impact mindfulness had on our practicum students, to inform our practice we did seek to gain a descriptive understanding of how our students experienced mindfulness as part of their group supervision.

Openness to the Process of Becoming
     Our CITs reported being open to the process of becoming a counselor, a process that included accepting where they were developmentally. Through acceptance, CITs reported being aware of the uncertainty associated with learning a new skill and leaning into that anxiety with self-compassion and nonjudgment. Further, they were able to acknowledge the ambiguity (e.g., the “rollercoaster”) associated with learning something new and the tension that comes with being uncomfortable. Bohecker et al. (2016) found similar results in their qualitative study, acknowledging that CITs who integrated mindfulness practices into their daily lives were better able to handle the ambiguity associated with counselor development. In her correlational study, Fulton (2016) found that self-compassion, a core principle of mindfulness, was predictive of a CIT’s tolerance for ambiguity. Thus, our findings support and add to the current literature by describing qualitatively how practicum students made meaning of that uncertainty to normalize the tension associated with it.

Self-Care
     Participants saw reflection as a form of self-care, finding meditation to be relaxing, and they acknowledged that meditating each week during seminar allowed them to stay in the present moment. Similarly, Duffy and colleagues (2017) found that CITs in their qualitative study who participated in weekly mindfulness exercises as part of a core class described mindfulness as reflective, providing them with a sense of calm and an ability to stay within the present. Banker and Goldenson (2021) noted that CITs in their qualitative study also reported personal benefits to utilizing mindfulness within their practicum seminar, including being better able to transition to the present moment. Thus, our practicum students’ experience of reflection as a form of self-care mirrors that of other CITs who practiced regular meditation.

Attention to the Doing
     Although CITs saw value in participating in group supervision that integrated mindfulness as a central approach within their practicum seminars, some wanted more focus on learning about the experiences other practicum students were having at their school sites. Specifically, CITs desired to know more about school counselor practice through stories of what their peers were doing, as well as the work being done by practicing school counselors. Participants sought this understanding either because of a lack of modeling at their own schools or out of professional curiosity. Similarly, Watkinson et al. (2018) noted that counselor educators reported discrepancies between how school counseling CITs were being prepared and what they experienced in the field. For example, counselor educators shared that they often taught content (e.g., implementing a comprehensive school counseling program) that their school counseling CITs did not see modeled at their schools. Thus, it seems logical that CITs at the practicum level would want more exposure to the activities school counselors were doing at other sites, especially if what they were observing was not aligned with their training.

Reflecting on Our Own Practice: Lessons Learned
     Through this practitioner inquiry, we gained valuable insight into how CITs experienced mindfulness, which has informed our practice. First, by analyzing our CITs’ experiences in practicum, we came to believe that they benefited from the mindfulness exercises as a way to work with their anxiety. Specifically, we were encouraged that practicum students expressed an openness to the process of becoming a counselor, which included self-acceptance. CITs stated they were more open to feedback and less critical of themselves, recognizing they still had much to learn. Second, we learned that although the integration of mindfulness as a central approach to our supervision could be helpful to practicum students, CITs also expressed a desire to have more time dedicated to hearing about the work their peers and other practicing school counselors were doing within schools. This was particularly important if a CIT believed their site was lacking. Hence, as supervisors we needed to balance engaging our CITs in mindfulness practices with their need to share work stories and gain practical insight into the work of school counselors.

Cochran-Smith and Lytle (2009) highlighted that a benefit of practitioner inquiry is the uncovering of professional dilemmas that naturally occur when a concept is applied to practice. For us, seeking balance meant considering which specific mindfulness exercises were critical to maintain. Watkinson et al. (2018) also found that counselor educators struggled with balancing the amount of content that needs to be covered in a course against the depth of understanding CITs need to apply that content. Thus, we too needed to decide between depth and breadth, which came down to determining how frequently our practicum students needed to participate in mindfulness exercises in each seminar meeting to gain benefit.

Because the recent literature suggested that exposure to weekly mindfulness exercises within core courses and clinical seminars benefited CITs (Campbell & Christopher, 2012; Dong et al., 2017; Fulton, 2016), we decided to keep the opening mindfulness meditative exercises and remove the one seminar session we had dedicated to mindfulness. Further, we increased the time CITs spent in sharing circles to include space for CITs to talk about the work being done by school counselors (or themselves) at practicum sites. Lastly, we looked for opportunities to highlight mindfulness principles in case conceptualization.

To integrate mindfulness principles into case conceptualization, Sturm and colleagues (2012) proposed using metaphors (i.e., Earth, Air, Water, Space, and Fire) that represent ancient Buddhist principles when conceptualizing cases. For instance, the Earth metaphor symbolizes grounding and, when applied to case conceptualization, enables CITs to consider what grounds them personally and theoretically when treating a client (Sturm et al., 2012). Another example of integrating core mindfulness principles into supervision is through free association, which Schauss et al. (2017) used to help CITs attend to the present by asking questions focused on the here and now. Sample questions include: What are you feeling in this moment? When and in what ways has this feeling surfaced during your counseling experiences at your school site? How does your body respond to this type of feeling, and what is the impact on your counseling experiences? By integrating mindfulness principles into skill development (e.g., case conceptualization), our practicum students would be further exposed to core mindfulness principles.

Limitations and Future Research

Our intention in sharing the findings from this study was to offer a practitioner’s perspective on how CITs experienced mindfulness within supervision and to contribute to the broader discussion on counselor education pedagogy. Generalization was not the objective, and the findings need to be interpreted within the context of practice. Further, this study did not examine the impact that mindfulness had on CIT anxiety, and we are not able to infer such causal relationships. To strengthen our understanding of counselor education pedagogy, future studies could build upon our findings to identify which mindfulness exercises have the greatest impact on helping CITs work with their anxiety. By understanding which mindfulness exercises impact anxiety, counselor educators could be more intentional with the exercises they include, thus making room for other supervision priorities (e.g., CITs hearing about the work of practicing school counselors).

Future research could also investigate how supervisors’ modeling of core mindfulness principles could impact counselor development and the supervisory alliance. Daniel et al. (2015) have called upon researchers to increase understanding of how supervisors’ mindfulness behaviors impact the supervisory relationship. Future research could attend to this deficiency within the literature by looking at the relationship between a supervisor’s mindfulness behaviors and the supervisory relationship through a practitioner lens.

Conclusion

By incorporating a mindfulness approach into supervision, we learned that CITs were open to working with the anxiety associated with becoming a counselor. This openness, or self-acceptance, gave them the perspective to appreciate the impact the experience had on them and others while also valuing the benefits of reflection through meditation. The intent of this study was not to generalize the experience of these CITs to others; rather, it was to generate conversation and an understanding of how CITs experienced mindfulness so that we could better our practice as supervisors. Although CITs saw benefits of mindfulness within supervision, they also desired more conversations on counselor practice to deepen their understanding of the role school counselors play in schools. As supervisors, we now understand that mindfulness should be balanced with CITs’ need to learn about the work of the school counselor through the sharing of experiences at their practicum sites. Beginning each session with a mindfulness exercise and infusing mindfulness core principles into case conceptualization could be a means to achieve such balance.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

References

Anderson, G. L., & Herr, K. (1999). The new paradigm wars: Is there room for rigorous practitioner knowledge in schools and universities? Educational Researcher, 28(5), 12–21. https://www.jstor.org/stable/1176368

Anekstein, A. M., Hoskins, W. J., Astramovich, R. L., Garner, D., & Terry, J. (2014). Sandtray supervision: Integrating models and sandtray therapy. Journal of Creativity in Mental Health, 9(1), 122–134. https://doi.org/10.1080/15401383.2014.876885

Auxier, C. R., Hughes, F. R., & Kline, W. B. (2003). Identity development in counselors-in-training. Counselor Education and Supervision, 43(1), 25–38.
https://doi.org/10.1002/j.1556-6978.2003.tb01827.x

Banker, J. E., & Goldenson, D. (2021). Mindfulness practices in supervision: Training counselors’ experiences. The Family Journal, 29(1), 17–23. https://doi.org/10.1177/1066480720954204

Barrio Minton, C. A., Wachter Morris, C. A., & Yaites, L. D. (2014). Pedagogy in counselor education: A 10-year content analysis of journals. Counselor Education and Supervision, 53(3), 162–177.
https://doi.org/10.1002/j.1556-6978.2014.00055.x

Bohecker, L., Vereen, L. G., Wells, P. C., & Wathen, C. C. (2016). A mindfulness experiential small group to help students tolerate ambiguity. Counselor Education and Supervision, 55(1), 16–30.
https://doi.org/10.1002/ceas.12030

Brackette, C. M. (2014). The scholarship of teaching and learning in clinical mental health counseling. New Directions for Teaching and Learning, 2014(139), 37–48. https://doi.org/10.1002/tl.20103

Campbell, J., & Christopher, J. (2012). Teaching mindfulness to create effective counselors. Journal of Mental Health Counseling, 34(3), 213–226. https://doi.org/10.17744/mehc.34.3.j75658520157258l

Carson, S. H., & Langer, E. J. (2006). Mindfulness and self-acceptance. Journal of Rational-Emotive and Cognitive-Behavior Therapy, 24(1), 29–43. https://doi.org/10.1007/s10942-006-0022-5

Christopher, J. C., & Maris, J. A. (2010). Integrating mindfulness as self-care into counselling and psychotherapy training. Counselling and Psychotherapy Research, 10(2), 114–125. https://doi.org/10.1080/14733141003750285

Cochran-Smith, M., & Lytle, S. L. (2009). Inquiry as stance: Practitioner research for the next generation. Teachers College Press.

Creswell, J. W. (2013). Qualitative inquiry & research design: Choosing among five approaches (3rd ed.). SAGE.

Daniel, L., Borders, L. D., & Willse, J. (2015). The role of supervisors’ and supervisees’ mindfulness in clinical supervision. Counselor Education and Supervision, 54(3), 221–232. https://doi.org/10.1002/ceas.12015

Dong, S., Campbell, A., & Vance, S. (2017). Examining the facilitating role of mindfulness on professional identity development among counselors-in-training: A qualitative approach. The Professional Counselor, 7(4), 305–317. https://doi.org/10.15241/sd.7.4.305

Duffy, J. T., Guiffrida, D. A., Araneda, M. E., Tetenov, S. M. R., & Fitzgibbons, S. C. (2017). A qualitative study of the experiences of counseling students who participate in mindfulness-based activities in a counseling theory and practice course. International Journal for the Advancement of Counselling, 39(1), 28–42. https://doi.org/10.1007/s10447-016-9280-9

Ellis, M. V., Hutman, H., & Chapin, J. (2015). Reducing supervisee anxiety: Effects of a role induction intervention for clinical supervision. Journal of Counseling Psychology, 62(4), 608–620.
https://doi.org/10.1037/cou0000099

Fulton, C. L. (2016). Mindfulness, self-compassion, and counselor characteristics and session variables. Journal of Mental Health Counseling, 38(4), 360–374. https://doi.org/10.17744/mehc.38.4.06

Guiffrida, D. A. (2015). Constructive clinical supervision in counseling and psychotherapy. Routledge.

Guiffrida, D. A., Jordan, R., Saiz, S., & Barnes, K. L. (2007). The use of metaphor in clinical supervision. Journal of Counseling & Development, 85(4), 393–400.
https://doi.org/10.1002/j.1556-6678.2007.tb00607.x

Johnson, D. A., Ivers, N. N., Avera, J. A., & Frazee, M. (2020). Supervision guidelines for fostering state-mindfulness among supervisees. The Clinical Supervisor, 39(1), 128–145.
https://doi.org/10.1080/07325223.2019.1674761

Kabat-Zinn, J. (2016). Mindfulness for beginners: Reclaiming the present moment—and your life. Sounds True, Inc.

Kuo, H.-J., Landon, T. J., Connor, A., & Chen, R. K. (2016). Managing anxiety in clinical supervision. Journal of Rehabilitation, 82(3), 18–27.

McAuliffe, G. (2011). Constructing counselor education. In G. McAuliffe & K. Eriksen (Eds.), Handbook of counselor preparation: Constructivist, developmental, and experiential approaches (pp. 3–12). SAGE.

Mehr, K. E., Ladany, N., & Caskie, G. I. L. (2015). Factors influencing trainee willingness to disclose in supervision. Training and Education in Professional Psychology, 9(1), 44–51.
https://doi.org/10.1037/tep0000028

Merriam, S. B. (2009). Qualitative research: A guide to design and implementation (3rd ed.). Jossey-Bass.

Moss, J. M., Gibson, D. M., & Dollarhide, C. T. (2014). Professional identity development: A grounded theory of transformational tasks of counselors. Journal of Counseling & Development, 92(1), 3–12. https://doi.org/10.1002/j.1556-6676.2014.00124.x

Saltis, M. N., Critchlow, C., & Smith, J. A. (2019). Teaching through sand: Creative applications of sandtray within constructivist pedagogy. Journal of Creativity in Mental Health, 14(3), 381–390.
https://doi.org/10.1080/15401383.2019.1624995

Schauss, E., Steinruck, R. E., & Brown, M. H. (2017). Mindfulness and free association for multicultural competence: A model for clinical group supervision. Journal of Counselor Practice, 8(2), 102–119. https://doi.org/10.22229/xpw610283

Sturm, D. C., Presbury, J., & Echterling, L. G. (2012). The elements: A model of mindful supervision. Journal of Creativity in Mental Health, 7(3), 222–232. https://doi.org/10.1080/15401383.2012.711718

Wagner, H. H., & Hill, N. R. (2015). Becoming counselors through growth and learning: The entry transition process. Counselor Education and Supervision, 54(3), 189–202. https://doi.org/10.1002/ceas.12013

Watkinson, J. S., Goodman-Scott, E. C., Martin, I., & Biles, K. (2018). Counselor educators’ experiences preparing preservice school counselors: A phenomenological study. Counselor Education and Supervision, 57(3), 178–193. https://doi.org/10.1002/ceas.12109

 

Jennifer Scaturo Watkinson, PhD, LCPC, is a certified school counselor and serves as an associate professor and the School Counseling Program Director at Loyola University Maryland. Gayle Cicero, EdD, LCPC, is a certified school counselor and serves as an assistant clinical professor at Loyola University Maryland. Elizabeth Burton is a certified professional school counselor for Baltimore County Public Schools. Correspondence may be addressed to Jennifer Watkinson, Timonium Graduate Center, 2034 Greenspring Dr., Lutherville-Timonium, MD 21093, jswatkinson@loyola.edu.

Experience of Graduate Counseling Students During COVID-19: Application for Group Counseling Training

Bilal Urkmez, Chanda Pinkney, Daniel Bonnah Amparbeng, Nanang Gunawan, Jennifer Ojiambo Isiko, Brandon Tomlinson, Christine Suniti Bhat

 

The COVID-19 pandemic resulted in many universities moving abruptly from face-to-face to online instruction. One group of students involved in this transition was master’s-level counseling students. Their experiential group counseling training (EGCT) program started in a face-to-face format and abruptly transitioned to an online format because of COVID-19. In this phenomenological study, we examined these students’ experiences of participating and leading in six face-to-face and four online EGCT groups. Two focus groups were conducted, and three major themes emerged: positive participation attributes, participation-inhibiting attributes, and suggestions for group counseling training. The findings point to additional learning and skill development through the online group experience as well as its utility as a safe space to process the novel experience brought about by COVID-19.

Keywords: experiential group counseling training, phenomenological, COVID-19, face-to-face, online format

 

Most of what is known about group counseling and the training of group counselors has been learned from groups that occur in face-to-face group environments (Kozlowski & Holmes, 2014). This includes seminal works on group counseling’s therapeutic factors, such as universality, altruism, instillation of hope, cohesiveness, existential factors, interpersonal learning, self-understanding, and catharsis (Yalom & Leszcz, 2005). Researchers have found positive contributions of group therapeutic factors toward therapy outcomes (Behenck et al., 2017), and they have explored the experiences of group members in face-to-face group counseling settings, including the interpersonal and intrapersonal processes of members (Holmes & Kozlowski, 2015; Krug, 2009; Murdock et al., 2012). By contrast, there is considerably less research on online group counseling (Kozlowski & Holmes, 2014) or group counselors’ training in online modalities (Kit et al., 2014; Kozlowski & Holmes, 2017).

In this qualitative study, we utilized the phenomenological method to explore and compare master’s-level students’ experiences of participating in and leading six face-to-face and four online experiential group counseling training (EGCT) groups as part of an introductory group counseling course. The master’s-level counseling students began their EGCT in face-to-face groups and, because of the COVID-19 pandemic, continued to meet in four online groups after their university decided to suspend all face-to-face instruction.

Experiential Groups in Counselor Education
     Group counseling training is one of the eight core areas of required training for counselors stipulated by the Council for the Accreditation of Counseling and Related Educational Programs (CACREP; 2015). In order to learn the complex group processes necessary for effective group counseling, master’s-level counseling students are required to participate in EGCT (Association for Specialists in Group Work [ASGW], 2007; CACREP, 2015). For CACREP-accredited master’s programs, at least 10 clock hours of group participation during one academic semester are required (CACREP, 2015). During this experiential training, students learn to be both group counseling participants and group counseling leaders (Ieva et al., 2009) and gain valuable experience in and insight into group dynamics, group processes, and catharsis (Ohrt et al., 2014).

Master’s-level counseling students “benefit a great deal when allowed to develop practical and relevant clinical skills” (Steen et al., 2014, p. 236). Experiential training in group counseling also promotes self-awareness, personal growth, and a greater understanding of vulnerability and self-disclosure in the learners (Yalom & Leszcz, 2005). The experiential component of group counseling training provides an environment for counseling students to experience vicarious modeling, self-disclosure, validation, and genuineness from their classmates (Kiweewa et al., 2013). Finally, these experiential opportunities promote students’ self-confidence (Ohrt et al., 2014; Shumaker et al., 2011; Steen et al., 2014).

Online Counseling
     Barak and Grohol (2011) defined online counseling as “a mental health intervention between a patient (or a group of patients) and a therapist, using technology as the modality of communication” (p. 157). Counselors are increasingly using digital modalities in their practice (Anthony, 2015; Richards & Viganó, 2013), and online counseling is seen as a viable alternative for supporting clients (Hearn et al., 2017). It emerged as a potential solution for mental health services when the COVID-19 pandemic forced providers to discontinue or scale down in-person services and adjust to virtual formats (Békés & Aafjes-van Doorn, 2020; Peng et al., 2020; Wind et al., 2020). Peng et al. (2020) described the effects of COVID-19 on the delivery of mental health services in China, noting government and health authorities’ support for preparedness and response and multidisciplinary efforts to enhance the quality of remote interventions for clients. They also suggested that governments integrate COVID-19–related mental health interventions into existing public mental health emergency preparedness and response structures.

Because of the growing importance of online counseling, it is essential to train counseling students to conduct online counseling, including online group counseling, effectively. Understanding master’s students’ experiences in online EGCT can help identify potential challenges they may face during their training. It is also important to explore students’ experiences in face-to-face and online EGCT groups to better understand possible future training needs and help counselor educators create an educational curriculum that addresses group counseling knowledge and skills for online groups. There is currently a lack of information about how to train counseling students in the delivery of online counseling (Kozlowski & Holmes, 2014), and specifically group counseling (Kit et al., 2014).

Professional and Accreditation Bodies’ Guidance on Technology
     The American Counseling Association (ACA) Code of Ethics states, “Counselors understand that the profession of counseling may no longer be limited to in-person, face-to-face interactions” (2014, p. 17). The ASGW Best Practices Guidelines require that “Group workers are aware of and responsive to technological changes as they affect society and the profession” (ASGW, 2007, p. 115, A.9). Similarly, CACREP (2015) indicates “students are to understand the impact of technology on the counseling profession” (2.F.1.j) as well as “the impact of technology on the counseling process” (2.F.5.e). CACREP also emphasizes that students understand “ethical and culturally relevant strategies for establishing and maintaining in-person and technology-assisted relationships” (2.F.5.d). Additionally, the Association for Counselor Education and Supervision (ACES; 2018) provides guidelines for online instruction featuring descriptions of course quality, content, instructional support, faculty qualifications, course evaluation procedures, and expected technology standards.

Online Group Counseling
     Textbooks on group counseling have mainly approached EGCT in face-to-face formats (e.g., G. Corey, 2016; Yalom & Leszcz, 2005). Given the growing interest and demand for online counseling in recent years (Holmes & Kozlowski, 2015; Kozlowski & Holmes, 2017), COVID-19 has highlighted the need for greater awareness and understanding of online group counseling training. However, there is limited research on online group counseling and counseling students’ training in online group counseling.

Kozlowski and Holmes (2014) explored master’s-level counseling students’ experience in an online process group, reporting themes of a linear discussion, role confusion, and feelings of being disconnected, isolated, and unheard. In 2015, Holmes and Kozlowski expanded on their work with a study of master’s-level counseling students’ experiences in face-to-face and online group counseling training. They found that the online group participants felt significantly less comfortable than participants in the face-to-face group. Further, participants in the study evaluated face-to-face groups as preferable for participation, social cohesion, and security (Holmes & Kozlowski, 2015). Lopresti (2010) compared students’ experiences of face-to-face and online group counseling using synchronous text-based software. In that study, six master’s-level students engaged in weekly 60-minute online group counseling sessions over 8 weeks using the WebCT chat system. Results indicated that some participants found self-disclosure easier in the online format, but they also shared that it was easy to hide behind the screen and to censor themselves.

Effectiveness of Online Group Counseling
     Some researchers have documented the efficacy of online support groups (Darcy & Dooley, 2007; Freeman et al., 2008; Lieberman et al., 2010; Webb et al., 2008). Haberstroh and Moyer (2012) reported that professionally moderated online support groups could supplement face-to-face counseling, especially for clients who want regular daily support while recovering from self-injury. They also found that online group interaction provided clients with opportunities to engage in healthy self-expression and reduce their sense of loneliness and isolation (Haberstroh & Moyer, 2012). King et al. (2009) examined the effectiveness of internet-based group counseling for clients receiving methadone treatment for substance abuse, reporting that it could reduce client resistance and non-adherence. Clients expressed satisfaction with the process and reported convenience and higher levels of trust in confidentiality because they were able to participate from home.

Similarly, Gilkey et al. (2009) reported the advantages and disadvantages of synchronous videoconferencing (SVC) web-based interventions in a study of families with children with traumatic brain injury. The results revealed that SVC had potential for delivering family-based therapy and could reduce barriers to treatment for motivated families from diverse backgrounds; however, its success depended on factors such as clients’ readiness to address their issues and patience with the technology’s imperfections. Nevertheless, the online group experience is vulnerable to technology glitches, privacy issues, disruptions in connectivity, and personal detachment (Amulya, 2020). In online group therapy, Weinberg (2020) identified four obstacles: managing the frame of the treatment, the disembodied environment, the question of presence, and the transparent background.

Purpose of Study and Research Questions
     In March 2020, as a result of the pandemic, our university moved most face-to-face classes to virtual environments following statewide restrictions on in-person gatherings. This sudden change led to a unique experience for first-year master’s-level counseling students enrolled in an introductory group counseling course at a CACREP-accredited program in the Midwest. Students were scheduled to participate in 10 face-to-face EGCT groups of 90 minutes each to fulfill the CACREP (2015) group counseling experiential training requirements. Doctoral students facilitated the first five group counseling experiences for the counselors-in-training. The plan was for two master’s students to lead face-to-face groups under the supervision of doctoral students for the remaining five groups (6–10). However, the university closed for 2 weeks after Session 6 was completed. As a result, when classes resumed, they were online. EGCT Sessions 7 through 10 were conducted online using Microsoft Teams, with master’s students leading and doctoral students supervising. Thus, in a single semester, the master’s students had the experience of participating in and leading both face-to-face and online groups. Our study was guided by the following research question: What were master’s students’ experiences of participating and leading in both face-to-face and online EGCT groups?

Methods

Research Design
     Qualitative methodology was used to explore first-year master’s students’ experiences of participating and leading in both face-to-face and online formats of EGCT. Our aim was to build an understanding of their experience shifting to an online modality with a specific interest in their attitudes, learning, facilitating, and adaptation to these two environments. For this purpose, a phenomenological approach was appropriate for investigating students’ unique experiences in both versions of the EGCT groups. Moustakas (1994) defined phenomenology as an approach for “comprehending or having in-depth knowledge of a phenomenon or setting and . . . attained by first reflecting on one’s own experience” (p. 36). In a phenomenological study, the aim is to describe the essence of individuals’ experiences with a certain phenomenon (Creswell & Creswell, 2018).

Participants and Procedures
     IRB approval was obtained, and purposive sampling was implemented with a recruitment email. All participants were recruited from a CACREP-accredited counseling program in the Midwest United States. Our inclusion criteria were that participants be current master’s-level counseling students who had been enrolled in the group counseling course in the prior term and had experienced both participating in and leading at least one EGCT session.

The invitation to participate in a focus group was emailed to all students enrolled in the group counseling course in the prior term. It included information about the study, addressed voluntary participation, and explained that participation in the focus group was entirely separate from the evaluation of performance in the group course, which had concluded. This recruitment email was sent out a total of three times within a 3-week period before the study was conducted.

Nine students agreed to participate in the study, and written consent forms were sent to them via email to read and review. Of the nine participants, three self-identified as male and six self-identified as female. Seven participants identified as White and two identified as “other,” and the age range was 18–34 years. Two participants were specializing in school counseling, three in clinical mental health counseling, three in clinical mental health/clinical rehabilitation counseling, and one in clinical mental health/school counseling.

Before the focus group, prospective participants were emailed a copy of the semi-structured interview questions to alleviate any anxiety or concerns about the questions that would be asked during the study. Prospective participants were also invited to ask any questions at the start of the focus group and were then invited to provide verbal consent. To protect confidentiality, each participant was assigned an identification code consisting of letters and numbers. Participants’ identification codes, with corresponding names, were kept securely in the possession of the first author, Bilal Urkmez.

Focus Groups
     Focus groups were used because they allow students to share their experiences with EGCT groups and compare points of view (Krueger & Casey, 2014). Two online focus groups were held—one with five participants (one male, four females) and one with four participants (two males, two females). Participants received invitation links from the focus group facilitator via Microsoft Teams. All participants were familiar with Microsoft Teams because they had used it for their experiential groups and classes after moving to online instruction. Urkmez contacted the university’s IT department regarding protocols for recording and securing the video and audio of the focus groups on Microsoft Teams.

Our fifth and sixth authors, Jennifer Ojiambo Isiko and Brandon Tomlinson, who led and supervised the original EGCT groups, conducted the focus groups. Care was taken to ensure that master’s students were not placed in a focus group led by the same doctoral student who had previously led and supervised their 10-session EGCT groups.

We used Krueger and Casey’s (2014) guidelines to create a semi-structured focus group protocol. Open-ended questions were built in for the focus group leaders to use as prompts to facilitate discussion when necessary. The online focus groups lasted approximately 60 minutes. All the conversations were recorded and then transcribed verbatim by the designated focus group facilitator.

Authors’ Characteristics and Reflexivity
     Our research team consisted of two counselor educators with experience teaching and facilitating group counseling courses and five counselor education doctoral students. All doctoral students were part of a single cohort, and all had prior experiences facilitating group counseling. The counselor educators were Urkmez, who self-identifies as a White male, and Christine Suniti Bhat, an Asian female. The doctoral students were Chanda Pinkney, an African American female; Daniel Bonnah Amparbeng, an African male; Nanang Gunawan, an Asian male; Isiko, an African female; and Tomlinson, a White male. Before data collection, we met to discuss focus group questions, explore biases and assumptions, and assign focus group leaders for the study.

Our team used multiple strategies to establish trustworthiness. As two of the researchers taught group counseling and five of the researchers had led and supervised the EGCT groups, it was necessary to discuss possible biases before and during the data analysis process to ensure that the resulting themes and subthemes emerged from participants’ responses (Bowen, 2008).

First, some of the researchers shared a belief that face-to-face group counseling is better than online group counseling, rooted in their personal dislike of taking or teaching online courses. All research team members had taught, learned, and supervised EGCT predominantly in face-to-face environments prior to the study and the pandemic. Second, some of the researchers mentioned their frustrations with learning and supervising online. These discussions were held to promote awareness of potential biases so as not to focus unduly on the negative experiences of the master’s students. Bracketing was implemented throughout the study to reduce the possible influence of the researchers’ preference for face-to-face counseling environments on participants (Chan et al., 2013). This measure helped ensure the validity of the study’s data collection and analysis by having the researchers put aside any negative experiences of online learning environments during the pandemic (Chan et al., 2013). Urkmez, Pinkney, Bonnah Amparbeng, Gunawan, Isiko, and Tomlinson analyzed the data first, fulfilling investigator triangulation (Patton, 2015). This same group then met several times to discuss their analyses of the transcripts and agree upon the significant statements and themes.

Experiential Group Counseling Training
     Twenty-eight first-year master’s students were enrolled in an introductory group counseling course in the spring 2020 academic semester. The EGCT groups were a required adjunct to the didactic portion of the course. EGCT sessions for the master’s students met weekly for 90 minutes and were set up so that the master’s students were participants for Sessions 1 through 5 (led by doctoral students) and were leaders for Sessions 6 through 10 (supervised by doctoral students). All 10 sessions were planned to be face-to-face sessions. Doctoral students were enrolled in an advanced group counseling course, and their participation was a required component of the course.

During the first five sessions, doctoral students’ responsibilities as leaders included facilitating meaningful interaction among the participants, promoting member–member learning, and encouraging participants to translate insights generated during the interaction into practical actions outside the group (G. Corey, 2016). For Sessions 6–10, in the role of supervisors, doctoral students’ responsibilities were to mentor and monitor the master’s students’ group leadership skills and provide verbal feedback immediately after the session. Doctoral students also provided written feedback to both the master’s students and group counseling course instructors. Additionally, the doctoral students engaged in peer supervision with each other under the tutelage of the advanced group counseling course instructor, discussing how EGCT could be supervised more effectively.

As stated previously, two master’s students started to co-lead the EGCT groups during Session 6, which was conducted face-to-face. After Session 6, in-person classes were canceled by the university in response to COVID-19, so the remaining four sessions of EGCT were conducted online on Microsoft Teams. The online groups were conducted synchronously on the same day and time as the face-to-face groups had been conducted in the earlier part of the semester.

Session 7 was the first synchronous online session of the EGCT and deserves special mention. Prior to Session 7, the doctoral students received brief training on Microsoft Teams. The master’s students had no previous exposure to Microsoft Teams. Thus, during Session 7, the doctoral students provided support by demonstrating how Microsoft Teams worked and processing the master’s students’ thoughts, feelings, and levels of wellness in relation to the sudden pandemic. Students resumed leading the online synchronous groups for Sessions 8, 9, and 10 under doctoral students’ supervision.

Data Analysis
     Isiko and Tomlinson led the two focus groups and transcribed the data collected from the participants who shared their experiences in the focus groups. We utilized the phenomenological data analysis method described by Moustakas (1994). Urkmez, Pinkney, Bonnah Amparbeng, Gunawan, Isiko, and Tomlinson conducted the data analysis, while Bhat served as a peer debriefer because of her expertise in both qualitative methodology and group counseling research and her more than 15 years of experience teaching master’s- and doctoral-level group counseling courses in the CACREP-accredited program. Her primary role was to read the transcripts, review the raw data and analysis, and scrutinize established themes to point out discrepancies (Creswell & Creswell, 2018).

Our research team (except for Bhat) met to discuss our potential biases and bracket our assumptions about the phenomenon under investigation. Then, each of us independently read all transcripts multiple times to become familiar with the data. Next, we reviewed the transcripts according to the horizontalization phase of analysis, which Moustakas (1994) defined as the part of the analysis “in which specific statements are identified in the transcripts that provide information about the experiences of the participants” (p. 28). During this step, we independently reviewed each transcript and identified significant statements that reflected the participants’ interpretations of their experiences with the phenomenon, based on the number of times they were mentioned both within and across participants. From this point, we each independently created a list of significant statements.

Subsequently, we met to review our lists to establish coder consistency, create initial titles for the themes, and place data into thematic clusters (Moustakas, 1994). Each of our themes and related subthemes were similar in content and typically varied only in the titles used. Titles for themes and subthemes were discussed until consensus was obtained. We revisited the horizontalized statements and discussed our different perspectives. Next, we evaluated the most commonly occurring themes and created a composite summary of each theme from the participants’ experiences. After these steps, we arrived at a consensus about each theme’s essential meaning and decided on specific participant quotes that represented each theme.

Findings

We identified three main themes related to the participants’ experiences of taking part in and leading both face-to-face and online EGCT. The three main themes were positive participation attributes, participation-inhibiting attributes, and suggestions for group counseling training.

Positive Participation Attributes
     The central theme of positive participation attributes focused on exploring master’s students’ perceptions about what helped them actively participate in both online and face-to-face EGCT groups as a group member. Five subthemes were identified in the main theme of positive participation attributes: (a) knowing other group members, (b) physical presence, (c) comfortability of online sessions, (d) cohesiveness, and (e) leadership interventions.

Knowing Other Group Members
     The EGCT group involved graduate-level counseling students who had known each other for a semester before engaging in the EGCT. Study participants shared that seeing familiar faces provided a safe and supportive environment for them to participate in both face-to-face and online group sessions as a group member. One participant noted that “a part of it helped because it was many people I had already known,” and another participant stated that “it was easier to have face-to-face after we had already kind of met everybody in the semester and so I wasn’t worried about confidentiality. I wasn’t in this group with a whole bunch of strangers.” Participants noted that knowing other group members helped them to participate actively in EGCT. They reported that having familiar faces in the group made them feel comfortable and connected and helped them engage more fully during the EGCT groups.

Physical Presence
     Study participants shared that group members’ physical presence during the face-to-face sessions enhanced their willingness to participate. Physical presence gave them better access to group members’ content and emotions through body language, eye contact, vocal tone, and other nonverbal cues during sessions. As one participant shared, “I feel so much more in touch and present with people when I can see them, but just kind of feel their physical presence rather than just watching the faces online.” Furthermore, the study participants shared that being physically present during the face-to-face sessions allowed for the incorporation of more icebreaker activities by both doctoral and master’s student group leaders, enhancing their participation in groups. One participant noted that “the small icebreakers, I just remember doing those at the beginning during our face-to-face sessions; those were a lot of fun.”

Comfortability of Online Sessions
     Participants reported that they felt comfortable engaging in online EGCT from their familiar surroundings at home. They appreciated the convenience of participating in EGCT groups from wherever they were. One participant reported that “people could be outside or eating or drinking or whatever, which I think is cool.” Another participant shared that before the state-issued quarantine, they had already used online technology to communicate with friends, so it was easy to use Microsoft Teams for online experiential training groups. Another participant noted:

We were doing them (EGCT) from the comfort of our own home; it just increased how comfortable you were in general. We were all at home, rocking in sweatpants and not having to worry about stuff. I feel we were in our own comfortable, safe space, and that made the online easier for me.

Cohesiveness
     Participants reported they felt “anxious,” “lonely,” and “isolated” and experienced other difficulties during the COVID-19 pandemic. They noted that they actively engaged in online EGCT sessions because it provided them with the opportunity to connect, share, and process their thoughts and emotions. A group participant reported, “We all had to isolate. [It] made it exciting to be able to connect with everyone again, to talk about how it (COVID-19) was affecting us, to vent out our emotions and check in with others.” Additionally, another participant reported:

When we started these sessions [online], it was at the beginning of these COVID-19 issues, and I was feeling more stressful, and there was nothing to do. It was so difficult to adjust to this environment, even staying at home. This was like an opportunity for me to connect with classmates in the group and [it] helped me to reflect on my anxiety and how other people were thinking around these COVID-19 issues.

     As a result of the online EGCT groups, participants gained a means of personal interaction during isolation. The subthemes presented above capture the positive participation factors that helped participants to engage actively in both online and face-to-face sessions.

Leadership Interventions
     Participants shared leadership interventions that helped them to participate during face-to-face and online sessions. The sudden transition to online groups due to COVID-19 was characterized by trial and error and uncertainty for everyone. Participants noted that while working with the new online EGCT group and different processes than what they had experienced before COVID-19, doctoral students and master’s student leaders demonstrated a sense of flexibility and adaptability to the prevailing situation and could steer the groups in the changing environment. Both the doctoral and master’s student leaders were aware of the effect of COVID-19 on the participants, and they allowed the participants to get support from each other before moving into the session plan for the group. One participant mentioned that “we kind of partly used that [the group] as a social support group . . . and reflect on how we’re feeling during social isolation.” Another participant shared that “the facilitators were flexible. So, even if they had a topic or something like that, they would allow for flexibility, to check in [with participants], and be able to kind of shift focus to what we all needed.”

Participants explicitly mentioned that the doctoral and master’s students’ leadership interventions, such as encouraging, checking in, and being present, helped them engage in the EGCT groups. Participants highlighted the strength of the group leaders’ encouragement of reflection (“I appreciated that the leader really put emphasis on encouraging us to answer questions”) and overall presence and attention (“[The leader] was attending our behavior and was really good with reflecting”). The participants also found the aspect of “checking in” by the leaders as something that enhanced their participation: “The leaders were always pretty quick to check in on someone if something seemed off.”

Group leaders’ ability to coordinate and successfully facilitate group sessions can significantly influence group outcomes (G. Corey, 2016; Gladding, 2012). Study participants shared that group facilitators demonstrated leadership skills and techniques to facilitate meaningful discussions and participation among members in both face-to-face and online sessions: “Like she [group leader] was always there to answer questions if there is silence; like she didn’t want us to rely on her to do the entire conversation, so her encouragement was beneficial for me.”

Participation-Inhibiting Attributes
     For this main theme, we examined attributes that negatively influenced participation and leading in the online and face-to-face formats of the EGCT groups. Three subthemes were identified: (a) group dynamics, (b) challenges with online EGCT, and (c) technological obstacles for online EGCT. The most prominent subtheme, which spread across both group formats, was that of group dynamics; friction within the group dynamic was one of the primary issues reported by participants. The remaining subthemes related to challenges with online EGCT groups, including the importance of “being with” or physically present with the rest of the group, problems with missing nonverbal communication in the online meetings, difficulties navigating awkward silences and pauses in the group, and technological obstacles.

Group Dynamics
     Study participants shared that the group dynamics dictated how much of a connection developed among group members and significantly influenced the progression to the working phase in the groups. In the words of one participant, “I feel like that was definitely something with our group dynamic. . . . There was definitely still good conversations, but I think that impacted it.”

Some participants reported initial concerns about fostering rapport with groupmates who had been randomly assigned to them. Participants expressed thoughts that personalities did not mesh well in their group and that there were issues with building good rapport. Some participants indicated that having a reserved personality made it hard to participate: “For me, it was more about a personal thing because I am an introverted personality, so I find it difficult to talk in groups anyway, so that’s what hindered my participation sometimes.” Another participant stated: “I felt like the others protect themselves by not talking, so why should I open myself and put myself into risk? I thought about that.”

Challenges With Online EGCT
     Participants in this study emphasized that one of the main difficulties of the online EGCT experience that negatively affected their participation and leadership was missing body language and physical cues. Participants shared that during the face-to-face EGCT, they could use nonverbal cues and body language to know when it was a good time to speak without interrupting other group members. Because these cues were missing in online EGCT, students lacked the immediate awareness needed to join the group conversation without interrupting other group members. For example, one participant noted the difficulties of “just not being able to read body language as well and not being able to see everyone at once.” As a result of these online environment limitations, study participants indicated they had a sense of “stepping on toes” while trying to participate in online EGCT: “I think that one of the biggest challenges with doing it [EGCT] online is that you want to be respectful and make sure that you are not gonna talk over somebody else.”

Kozlowski and Holmes (2014) previously noted that the unfamiliar environment of online counseling, the time delay because of technology, and the inability to utilize group members’ body language can all create a one-dimensional or “linear” experience in online group counseling environments. These factors appeared to hinder the natural growth and development of the EGCT groups in our study as well. In an effort to avoid seeming rude, participants held back rather than risk constant interruptions, producing stretches of awkward silence during the sessions; this difficulty contributed to the feeling of a linear environment.

One other factor the participants noted in the online format more so than the in-person group was what students described as an awkward silence. This occurrence serves as a subtheme of missed physical cues because the participants noted that the lack of said cues complicated determining when to speak and when to wait: “Online, the silence almost felt like it was much longer than what it really would have been if it was face-to-face.” Another participant stated that they “feel pretty comfortable with silences, but it’s a lot harder to gauge that when it’s online.” This issue presented itself in several circumstances, though one group did attempt to figure out a solution, per the report of one participant: “For our group . . . to help with people talking over each other, we had people type in a smiley face in the chat when they wanted to share.”

Notably, participants in this study also mentioned that there was some physical presence that they could not describe but found to be relevant to them in their connection with the group. Although students were unable to identify it precisely, several study participants agreed on its importance. One participant said that they “enjoy the voice and the video, but I feel like when we are talking, especially in a group dynamic and group processes, especially to grasp something important, I really need to be with this person in a physical space.”

The participants emphasized the importance of physical presence, from the ability to see and greet one another to having space to do activities that got them up and moving. Many participants mentioned some intangible quality they could not name but that was missing when the groups convened electronically instead of in person. A participant shared that “you can observe the body language—what is happening in the group actually, but in online sessions, it’s like you don’t know, you are just talking.”

As noted in other sections, the group members appreciated the space for doing activities together when they were in person. Master’s student group leaders reported that they felt anxious when facilitating icebreaker activities in their online EGCT sessions because of the missing physical presence and noted the loss of face-to-face icebreakers. Study participants lamented that the online format did not allow for these bonding and icebreaking exercises, which, in the usual face-to-face format, tended to leave them feeling better equipped to share with their group members, almost like a metaphorical entryway to the group process: “Some of the exercises are not possible to execute [online] because we were doing some physical things in our group, like throwing balls to each other and stuff.” Without these social warm-ups, the group flow and process suffered; according to those in the focus group, leaders needed more assistance to run activities in online EGCT sessions. One participant added a similar sentiment: “How do we lead a group online with proximity activities or icebreakers we would use? We can’t really do [that] because of the virtual interaction, [it] can’t work.”

Overall, the online EGCT environment limited the interpersonal relationships of the EGCT members and group leaders. Group members could not use their nonverbal communication skills or participate in physical group activities. Lastly, online EGCT appeared to place added pressure on group leaders to keep members engaged during the session. Master’s students had to choose topics with which all members felt comfortable enough to participate with minimal encouragement, which was a challenge.

Technological Obstacles for Online EGCT
     Participants reported some technological difficulties that inhibited their ability to participate in and lead the online EGCT sessions. Some participants noted that when group members turned off their cameras, it exacerbated disengagement within the group and hampered group dynamics. Some speculated that technical difficulties might serve as an excuse to disengage from the group: “Like in online, I can be mute, I can turn off my camera, I can not talk, and I can accuse the technology for that.” This capacity to disengage negatively impacted the group for several of the focus group participants, who felt that it closed off the group and circumvented the ability to engage with all members.

The limitations of the university-sanctioned online platform used for the EGCT groups, Microsoft Teams, adversely affected engagement during the online sessions, as it allowed only four members (at the time of the online EGCT sessions) to be seen on the screen at a time. As one participant stated, “I cannot see all the group members . . . my attention is not with all members. This was difficult. It was difficult to lead the group.” Several group members were vociferous in their dislike for this limitation of the platform. Further, internet connectivity issues were problematic: “Sometimes like a group member would disconnect [because of technology problems], and there would be several minutes before they could come back.” These types of interruptions were frustrating to all group members and group leaders, and master’s student group leaders had a difficult time leading through them.

One focus group participant noted, and others agreed, that it was challenging to learn how to lead a group online because they were missing so many elements of the in-person process of leading a group, and they did not have previous group leadership experience in an online environment. A participant shared that “it’s hard [leading group online]. It’s maybe harder for leaders because they cannot observe what’s going on . . . like body language.” 

Suggestions for Group Counseling Training
     Participants were invited to share their concerns and ways to develop and improve face-to-face and online EGCT group experiences. Three subthemes were identified: (a) software issues and training, (b) identified group topics, and (c) preferred EGCT environment.

Software Issues and Training
     Participants shared common concerns about the software for their online experiential training groups. Specifically, they found Microsoft Teams’ display of only four people at one time prevented them from seeing all group members on the screen. Members who were not speaking were displayed at the bottom of the computer screen with their profile picture or initials, which was not conducive to interaction. One participant suggested that they should “probably just use Zoom instead . . . I like Zoom better, seriously, because I can see absolutely everyone.” Another participant agreed, “But for the reason, at least, in Zoom, I can see everyone’s faces, not, um, not just four.”

Another participant similarly emphasized the importance of seeing everyone on one screen during their meeting: “If you don’t see the faces [at one time], you’re just clueless. I mean, have to, like, awkwardly check in with this person all the time.” Participants also offered a suggestion about training for leading online experiential groups. Participants shared their anxiety about leading groups using online software because it was a new and unique experience. Because of the sudden onset of COVID-19, the students did not have a chance to receive training on how to lead online experiential training groups. A participant mentioned that having training where students could learn how to facilitate online groups before leading weekly sessions would help alleviate anxiety and build competence: “Perhaps allowing a small period where everyone kind of gets adjusted to it and becomes more familiar with it might help facilitate [online] group sessions better.”

Identified Group Topics
     Another suggestion by participants regarding their EGCT experience was using one selected topic for each group. For example, a participant shared: “I think part of what was hard about this that might be something to change is, like having the group just be all over the place in terms of topics from week to week.” Another participant added: “If the group was more, like, a little bit more specific and clearer about like, the goal, or something like that, that might be—might help it flow a little bit better.” Some participants also suggested allowing students to select which group they wanted to attend, instead of having groups pre-assigned to them. In other words, participants preferred to join a specific group based on their interests. A participant mentioned: “I think that would be like a really good option to give like a list of ten types of groups or topics in the groups.” Another participant similarly suggested “giving an opportunity to all students to choose one group. For example, like the one group would work specifically on self-esteem problems or the other one would work on grief problems.”

     Some participants noted that they felt there was a lack of purpose for the group, indicating that they were not sure of the group’s goals or objectives and that this hindered their ability to participate fully. Some also shared confusion about their role, the boundaries of the group, and what they could or could not share. One participant noted: “In the first session when we were trying to set up our goals, it was difficult for us to find what the goals will be as a group leader candidate, or as a person.” The focus group participants suggested giving more concrete topics overall to help EGCT group members better understand how to participate. This recommendation spanned both the online and face-to-face formats.

Preferred EGCT Environment
     Lastly, participants were asked about their preference for participation in a face-to-face or online EGCT experience, if given a choice. Even though participants reported a reasonably good experience with online EGCT groups, such as comfortability and cohesiveness, most of the participants voiced a preference for face-to-face sessions if they had to do the group counseling training over again. One participant stated: “Ultimately, face-to-face will probably still be better.” Another participant added: “Face-to-face for sure. I just think as like a profession, we all enjoy working with people. We would prefer to work with someone in person.” Similarly, another participant mentioned: “I would definitely choose face-to-face, but I was thankful that we had the opportunity to do it online.”

Asking the participants about their preferred experiential training group environment garnered the most reaction during the interviews. Most of the participants shared that they preferred face-to-face groups. Even though participants had personal connections in an online setting, they wanted to have face-to-face meetings to interact better. One participant mentioned that “we are doing online sessions right now. I wish that I [could] continue to do the group lab and connect with the group members, but if I have the opportunity to take face-to-face, absolutely, I would do that.” Lastly, another participant added: “Absolutely, it’s face-to-face, but if we are in a situation like this, COVID-19 issues, sometimes the online sessions can be helpful.”

Participants offered their perspectives on learning group counseling skills during the global COVID-19 pandemic. Despite the unprecedented circumstances, the students persevered and completed the course. Group leaders and professors encouraged the group members to participate to the best of their abilities. The concerns and suggestions shared in these focus groups could help counselor educators plan and develop for EGCT in both online and face-to-face formats.

Discussion

This study investigated the experiences of master’s students in online and face-to-face EGCT groups. EGCT is an essential aspect of novice counselors’ preparation and is required by CACREP (2015) standards. In this study, participants identified positive factors related to their EGCT group participation, such as knowing other group members, group leadership skills, physical presence, and connection with other group members. They also reported participation-inhibiting factors such as the complexities of group dynamics, missing physical cues, and technological challenges. Our findings are similar to those of Kozlowski and Holmes’s (2014) study on online group counseling training, in which participants reported the group feeling artificial, a lack of attending skills, and difficulties achieving cohesion and connectedness.

In the current study, course instructors and student leaders did not have control over the choice of an online platform. The limitations of Microsoft Teams, which at the time of the online EGCT sessions only allowed four participants to be visible on the screen at one time, added to difficulties with engaging and feeling connected. For participants to remain engaged, leaders and instructors should have access to online platforms that allow students to see all group members simultaneously on the screen. Setting ground rules requiring that cameras remain on during sessions and utilizing the chat feature or the hand-raising feature to facilitate discussions would also help create and maintain a sense of connection. Outlining contingency plans, such as alternatives for members who cannot join the group with their cameras on, is also important for successful group outcomes.

Participants in this study appreciated the convenience of participating in online EGCT groups. This is similar to the findings of King et al. (2009) about the convenience of access to online group counseling; in that study, participants shared that online counseling sessions allowed them to participate from the comfort of their homes, thus improving both convenience and privacy. One of the difficulties participants reported was that of awkward silence. This experience, coupled with interruptions (“stepping on toes”), resulted in students finding the online experience more linear and less organic compared to face-to-face interactions. These findings are similar to those of Kozlowski and Holmes (2014). Yalom and Leszcz (2005) noted that the group leader’s role is to design the group’s path, get it going, and keep it functional to achieve effectiveness. Presence, self-confidence, the courage to take risks, belief in the group process, inventiveness, and creativity are essential leadership traits in leading groups (G. Corey, 2016). However, these traits were identified for in-person groups; it is possible that effectively leading online groups requires other skills that have not yet been identified. The sudden change to online training in this instance did not allow for deliberate design. It is necessary for group leaders to possess specific group leadership skills and perform them appropriately to help group members participate in groups (M. S. Corey et al., 2018). Nevertheless, participants appreciated that the doctoral and master’s student leaders demonstrated flexibility, allowing additional time to check in with group members and process their experiences and emotions related to the pandemic.

One interesting finding related to how COVID-19 impacted participants’ experiences in the EGCT groups was that group participants actively engaged in the online sessions when they were allowed to process their anxiety and stress due to COVID-19, as the group served as a support group. This result is dissimilar to findings of previous studies in which participants felt unsafe during online group sessions and in which online platforms impeded participants’ emotional connection and trust (Fletcher-Tomenius & Vossler, 2009; Haberstroh et al., 2007; Kozlowski & Holmes, 2014).

Bellafiore et al. (2003) emphasized online group leaders’ roles as “shaping the group” and “setting the tone.” They also expressed that “establishing and maintaining a leadership style is important in keeping the group going” (p. 211). In the current study, first-year master’s students, many of whom were participating in or leading groups for the first time, faced the unexpected and sudden additional layer of learning how to lead online. Further, the abrupt transition from face-to-face to online groups because of COVID-19 did not allow for extensive instructor planning and preparation. Leading groups online was challenging and anxiety-provoking for these students, as they lacked experience and were unsure how to proceed. Master’s students need additional training on facilitating online groups, establishing a leadership style, and managing silence. This corresponds with Cárdenas et al.’s (2008) findings that master’s-level counseling students felt more confident providing online counseling services after training.

Implications

Although the findings from this study are not generalizable, the experiences of the counselor trainees described here suggest implications for designing and leading EGCT groups that merit consideration. Part of the group design entailed assigning a different topic of focus for each session. The rationale for having different topics for each session should be clearly explained to the participants, and any questions regarding the identified topics should be addressed early to enhance the group facilitation process for both leaders and participants. Additionally, group leaders or course instructors need to explain roles clearly, and group members should understand the group’s boundaries and how the experiential groups fit with their didactic course.

With online EGCT groups, it is essential to consider how participation is influenced by a lack of natural communication signals, such as body language and physical presence. Counselor educators and EGCT student leaders need to establish ground rules about online group interactions such as having all cameras remain on during sessions, having a private and quiet space from which to participate, and minimizing distractions from pets or relatives, all of which are necessary for successful groups. Further, utilizing technology that allows all members to be seen on the screen may help build connection and cohesiveness. Utilizing methods such as using the chat to insert a symbol or using the hand-raising icon can also help facilitate participation.

Overall, students reported feeling unprepared to lead online counseling groups. As counselor educators, we are responsible for preparing our students to engage in online counseling successfully, especially as the COVID-19 pandemic continues into its second year and continues to shape how much counseling takes place virtually. The recent normalization of online counseling (individual and group) may persuade educators and counselors to “increase their skills in terms of development, comfortability, and flexibility in the online environment” (International OCD Foundation, 2020, p. 1). Therefore, counselor educators should cover online-specific facilitation skills in their training programs.

Limitations and Future Research Directions

This study was the first step in attempting to understand and describe master’s-level students’ experiences of participating and leading in both face-to-face and online formats of EGCT. As with all research, limitations should be considered in interpreting the findings. Further, some of the limitations point to potential research directions.

COVID-19 created a situation in which the transition from face-to-face to online formats was compulsory. It is therefore not clear what the experience would have been like if the transition had been planned, without a crisis like COVID-19 pushing it, or if the groups had been entirely online. Because of the unplanned adjustment, course instructors and student leaders did not have control over the choice of an online platform. Outlining contingency plans, such as alternatives for when a group member cannot join the group with their camera on, is essential for successful group outcomes, and a lack of familiarity with online platforms may have prevented instructors and student leaders from providing these contingencies, thereby affecting the experience for students.

Further, the EGCT groups were conducted with master’s-level students, and participants already had preexisting relationships with each other. This may have contributed to their strong support of face-to-face groups over online groups. In future research, studies with participants who do not already know each other may help us assess the appeal of online groups to participants. Further, researchers in the future may wish to examine the efficacy of online group counseling training for counseling students compared to in-person group training by comparing two equivalent experiential groups.

The current study recruited master’s-level counseling students from a single CACREP-accredited counseling program in the Midwest United States, and the sample size was small; therefore, we caution against generalizing our findings to other institutions. During the focus groups, participants shared some apprehension about how much information to disclose in group counseling, and they verbalized some confusion about the group’s purpose, direction, or goals. For many, these EGCT groups were the students’ first experience in group counseling training, which could contribute to them questioning whether their feelings and experiences were appropriate (Ohrt et al., 2014).

There are methodological considerations that could improve future studies. Focus groups were conducted to collect the data from the participants. In-depth individual interviews would allow for deeper conversation in understanding and reflecting on the challenges and needs of master’s-level students. Participants may have censored some of their true feelings, as they were aware that their group leaders were also part of the research team, even though the leaders did not run the focus groups. We acknowledge that the students knowing each other from previous classes may have influenced how much they shared in groups. Participants in this study expressed comfort with knowing each other from a previous semester. However, it is also possible that students disclosed minimal personal information so as not to affect public perception of themselves or future professional relationships.

Another area to expand on would be investigating counselors’ self-efficacy while facilitating online counseling groups. For example, exploring positive participation attributes that increase online groups’ participation from the leader’s perspective could be useful. This may allow researchers and practitioners to identify how group counseling can best be leveraged in an online environment.

Conclusion

The purpose of this study was to explore and compare first-year master’s-level counseling students’ experiences of participating and leading in both face-to-face and online formats of EGCT. In summary, students found the online format challenging because it added a layer of learning, beyond the face-to-face setting, to their fledgling group work skills. Technological barriers outside the control of participants inhibited their participation; on the other hand, the online groups served as a safe and supportive space for students to alleviate their stress and loneliness due to COVID-19. Regardless of the teaching environment, thoughtful and well-planned EGCT groups are essential for student development in this area, and skilled group leaders can manage group dynamics and model group counseling skills. COVID-19 has necessitated a focus on teletherapy and online counseling. The group counseling profession should be proactive in addressing this training need, as conducting online group counseling sessions is likely to remain a much-needed skill in a post-pandemic world.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

 

References

American Counseling Association. (2014). ACA code of ethics. https://www.counseling.org/docs/default-source/default-document-library/2014-code-of-ethics-finaladdress.pdf?sfvrsn=96b532c_2

Amulya, D. S. L. (2020). An experiment with online group counseling during COVID 19. In L. S. S. Manickam (Ed.), COVID-19 pandemic: Challenges and responses of psychologists from India (pp. 182–197).

Anthony, K. (2015). Training therapists to work effectively online and offline within digital culture. British Journal of Guidance & Counselling, 43(1), 36–42. https://doi.org/10.1080/03069885.2014.924617

Association for Counselor Education and Supervision. (2018). ACES guidelines for online learning – 2017. https://acesonline.net/knowledge-base/aces-guidelines-for-online-learning-2017-2

Association for Specialists in Group Work. (2007). Association for Specialists in Group Work: Best practice guidelines. https://www.researchgate.net/publication/247784312_Association_for_Specialists_in_Group_Work_Best_Practice_Guidelines_2007_Revisions

Barak, A., & Grohol, J. M. (2011). Current and future trends in internet-supported mental health interventions. Journal of Technology in Human Services, 29(3), 155–196. https://doi.org/10.1080/15228835.2011.616939

Behenck, A., Wesner, A. C., Finkler, D., & Heldt, E. (2017). Contribution of group therapeutic factors to the outcome of cognitive–behavioral therapy for patients with panic disorder. Archives of Psychiatric Nursing, 31(2), 142–146. https://doi.org/10.1016/j.apnu.2016.09.001

Békés, V., & Aafjes-van Doorn, K. (2020). Psychotherapists’ attitudes toward online therapy during the COVID-19 pandemic. Journal of Psychotherapy Integration, 30(2), 238–247. https://doi.org/10.1037/int0000214

Bellafiore, D. R., Colon, Y., & Rosenberg, P. (2003). Online counseling groups. In R. Kraus, J. Zack, & G. Stricker (Eds.), Online counseling: A handbook for mental health professionals (pp. 197–216). Academic Press.

Bowen, G. A. (2008). Naturalistic inquiry and the saturation concept: A research note. Qualitative Research, 8(1), 137–152. https://doi.org/10.1177/1468794107085301

Burlingame, G. M., McClendon, D. T., & Yang, C. (2019). Cohesion in group therapy. In J. C. Norcross & M. J. Lambert (Eds.), Psychotherapy relationships that work: Evidence-based therapist contributions (pp. 205–244). Oxford University Press.

Cárdenas, G., Serrano, B., Flores, L. A., & De la Rosa, A. (2008). Etherapy: A training program for development of clinical skills in distance psychotherapy. Journal of Technology in Human Services, 26(2–4), 470–483. https://doi.org/10.1080/15228830802102180

Chan, Z. C., Fung, Y., & Chien, W. T. (2013). Bracketing in phenomenology: Only undertaken in the data collection and analysis process. The Qualitative Report, 18(30), 1–9.
https://doi.org/10.46743/2160-3715/2013.1486

Corey, G. (2016). Theory and practice of group counseling (9th ed.). Cengage.

Corey, M. S., Corey, G., & Corey, C. (2018). Groups: Process and practice (10th ed.). Cengage.

Council for the Accreditation of Counseling and Related Educational Programs. (2015). CACREP 2016 standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.

Darcy, A. M., & Dooley, B. (2007). A clinical profile of participants in an online support group. European Eating Disorders Review, 15(3), 185–195. https://doi.org/10.1002/erv.775

Fletcher-Tomenius, L., & Vossler, A. (2009). Trust in online therapeutic relationships: The therapist’s experience. Counselling Psychology Review, 24(2), 24–33.

Freeman, E., Barker, C., & Pistrang, N. (2008). Outcome of an online mutual support group for college students with psychological problems. Cyberpsychology & Behavior, 11(5), 591–593.
https://doi.org/10.1089/cpb.2007.0133

Gilkey, S. L., Carey, J., & Wade, S. L. (2009). Families in crisis: Considerations for the use of web-based treatment models in family therapy. Families in Society, 90(1), 37–45. https://doi.org/10.1606/1044-3894.3843

Gladding, S. T. (2012). Groups: A counseling specialty (6th ed.). Pearson.

Haberstroh, S., Duffey, T., Evans, M. P., Gee, R., & Trepal, H. (2007). The experience of online counseling. Journal of Mental Health Counseling, 29(3), 269–282. https://doi.org/10.17744/mehc.29.3.j344651261w357v2

Haberstroh, S., & Moyer, M. (2012). Exploring an online self-injury support group: Perspectives from group members. The Journal for Specialists in Group Work, 37(2), 113–132.
https://doi.org/10.1080/01933922.2011.646088

Hearn, C. S., Donovan, C. L., Spence, S. H., & March, S. (2017). A worrying trend in social anxiety: To what degree are worry and its cognitive factors associated with youth social anxiety disorder? Journal of Affective Disorders, 208, 33–40. https://doi.org/10.1016/j.jad.2016.09.052

Holmes, C. M., & Kozlowski, K. A. (2015). A preliminary comparison of online and face-to-face process groups. Journal of Technology in Human Services, 33(3), 241–262. https://doi.org/10.1080/15228835.2015.1038376

Ieva, K. P., Ohrt, J. H., Swank, J. M., & Young, T. (2009). The impact of experiential groups on master students’ counselor and personal development: A qualitative investigation. The Journal for Specialists in Group Work, 34(4), 351–368. https://doi.org/10.1080/01933920903219078

International OCD Foundation. (2020, July 15). Teletherapy in the time of COVID-19. https://iocdf.org/covid19/teletherapy-in-the-time-of-covid-19

King, V. L., Stoller, K. B., Kidorf, M., Kindbom, K., Hursh, S., Brady, T., & Brooner, R. K. (2009). Assessing the effectiveness of an Internet-based videoconferencing platform for delivering intensified substance abuse counseling. Journal of Substance Abuse Treatment, 36(3), 331–338.
https://doi.org/10.1016/j.jsat.2008.06.011

Kit, P. L., Wong, S. S., D’Rozario, V., & Teo, C. T. (2014). Exploratory findings on novice group counselors’ initial co-facilitating experiences in in-class support groups with adjunct online support groups. The Journal for Specialists in Group Work, 39(4), 316–344. https://doi.org/10.1080/01933922.2014.954737

Kiweewa, J., Gilbride, D., Luke, M., & Seward, D. (2013). Endorsement of growth factors in experiential training groups. The Journal for Specialists in Group Work, 38(1), 68–93.
https://doi.org/10.1080/01933922.2012.745914

Kozlowski, K. A., & Holmes, C. M. (2014). Experiences in online process groups: A qualitative study. The Journal for Specialists in Group Work, 39(4), 276–300. https://doi.org/10.1080/01933922.2014.948235

Kozlowski, K. A., & Holmes, C. M. (2017). Teaching online group counseling skills in an on-campus group counseling course. Journal of Counselor Preparation and Supervision, 9(1).

Krueger, R. A., & Casey, M. (2014). Focus groups: A practical guide for applied research (5th ed.). SAGE.

Krug, O. T. (2009). James Bugental and Irvin Yalom: Two masters of existential therapy cultivate presence in the therapeutic encounter. Journal of Humanistic Psychology, 49(3), 329–354.
https://doi.org/10.1177/0022167809334001

Lieberman, M., Winzelberg, A., Golant, M., Wakahiro, M., DiMinno, M., Aminoff, M., & Christine, C. (2010). Online support groups for Parkinson’s patients: A pilot study of effectiveness. Social Work in Health Care, 42(2), 23–38. https://doi.org/10.1300/J010v42n02_02

Lopresti, J. M. (2010). The process and experience of online group counseling for masters-level counseling students (Order No. 3451084). Available from ProQuest Dissertations & Theses A&I. (862058819).

Moustakas, C. (1994). Phenomenological research methods. SAGE.

Murdock, J., Williams, A., Becker, K., Bruce, M. A., & Young, S. (2012). Online versus on-campus: A comparison study of counseling skills courses. The Journal of Human Resource and Adult Learning, 8(1), 105–118.

Ohrt, J. H., Prochenko, Y., Stulmaker, H., Huffman, D., Fernando, D., & Swan, K. (2014). An exploration of group and member development in experiential groups. The Journal for Specialists in Group Work, 39(3), 212–235. https://doi.org/10.1080/01933922.2014.919047

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice (4th ed.). SAGE.

Peng, D., Wang, Z., & Xu, Y. (2020). Challenges and opportunities in mental health services during the COVID-19 pandemic. General Psychiatry, 33(5). https://doi.org/10.1136/gpsych-2020-100275

Richards, D., & Viganó, N. (2013). Online counseling: A narrative and critical review of the literature. Journal of Clinical Psychology, 69(9), 994–1011. https://doi.org/10.1002/jclp.21974

Shumaker, D., Ortiz, C., & Brenninkmeyer, L. (2011). Revisiting experiential group training in counselor education: A survey of master’s-level programs. The Journal for Specialists in Group Work, 36(2), 111–128. https://doi.org/10.1080/01933922.2011.562742

Steen, S., Vasserman-Stokes, E., & Vannatta, R. (2014). Group cohesion in experiential growth groups. The Journal for Specialists in Group Work, 39(3), 236–256. https://doi.org/10.1080/01933922.2014.924343

Webb, M., Burns, J., & Collin, P. (2008). Providing online support for young people with mental health difficulties: Challenges and opportunities explored. Early Intervention in Psychiatry, 2(2), 108–113. https://doi.org/10.1111/j.1751-7893.2008.00066.x

Weinberg, H. (2020). Online group psychotherapy: Challenges and possibilities during COVID-19—A practice review. Group Dynamics: Theory, Research, and Practice, 24(3), 201–211.
https://doi.org/10.1037/gdn0000140

Wind, T. R., Rijkeboer, M., Andersson, G., & Riper, H. (2020). The COVID-19 pandemic: The ‘black swan’ for mental health care and a turning point for e-health. Internet Interventions, 20.
https://doi.org/10.1016/j.invent.2020.100317

Yalom, I. D., & Leszcz, M. (2005). The theory and practice of group psychotherapy (5th ed.). Basic Books.

 

Bilal Urkmez, PhD, LPC, CRC, is an assistant professor at Ohio University. Chanda Pinkney, MA, CT, is a doctoral student at Ohio University. Daniel Bonnah Amparbeng, MEd, NCC, LPC, is a doctoral student at Ohio University. Nanang Gunawan, MA, is a doctoral student at Ohio University. Jennifer Ojiambo Isiko, MA, is a doctoral student at Ohio University. Brandon Tomlinson, MA, NCC, LPC, is a doctoral student at Ohio University. Christine Suniti Bhat, PhD, LPC, LSC, is a professor at Ohio University. Correspondence may be addressed to Bilal Urkmez, Patton Hall 432P, Athens, OH 45701, urkmezbi@ohio.edu.

Enhancing Assessment Literacy in Professional Counseling: A Practical Overview of Factor Analysis

Michael T. Kalkbrenner

Assessment literacy is an essential competency area for professional counselors who administer tests and interpret the results of participants’ scores. Using factor analysis to demonstrate internal structure validity of test scores is a key element of assessment literacy. The underuse of psychometrically sound instrumentation in professional counseling is alarming, as a careful review and critique of the internal structure of test scores is vital for ensuring the integrity of clients’ results. A professional counselor’s utilization of instrumentation without evidence of the internal structure validity of scores can have a number of negative consequences for their clients, including misdiagnoses and inappropriate treatment planning. The extant literature includes a series of articles on the major types and extensions of factor analysis, including exploratory factor analysis, confirmatory factor analysis (CFA), higher-order CFA, and multiple-group CFA. However, reading multiple psychometric articles can be overwhelming for professional counselors who are looking for comparative guidelines to evaluate the validity evidence of scores on instruments before administering them to clients. This article provides an overview for the layperson of the major types and extensions of factor analysis and can serve as a reference for professional counselors who work in clinical, research, and educational settings.

Keywords: factor analysis, overview, professional counseling, internal structure, validity

Professional counselors have a duty to ensure the veracity of tests before interpreting the results of clients’ scores because clients rely on their counselors to administer and interpret the results of tests that accurately represent their lived experience (American Educational Research Association [AERA] et al., 2014; National Board for Certified Counselors [NBCC], 2016). Internal structure validity of test scores is a key assessment literacy area and involves the extent to which the test items cluster together and represent the intended construct of measurement.

Factor analysis is a method for testing the internal structure of scores on instruments in professional counseling (Kalkbrenner, 2021b; Mvududu & Sink, 2013). The rigor of quantitative research, including psychometrics, has been identified as a weakness of the discipline, and instrumentation with sound psychometric evidence is underutilized by professional counselors (Castillo, 2020; C.-C. Chen et al., 2020; Mvududu & Sink, 2013; Tate et al., 2014). As a result, there is an imperative need for assessment literacy resources in the professional counseling literature, as assessment literacy is a critical competency for professional counselors who work in clinical, research, and educational settings alike.

Assessment Literacy in Professional Counseling
Assessment literacy is a crucial proficiency area for professional counselors. Counselors in many of the specialty areas of the Council for Accreditation of Counseling and Related Educational Programs (2015), such as clinical rehabilitation (5.D.1.g. & 5.D.3.a.), clinical mental health (5.C.1.e. & 5.C.3.a.), and addiction (5.A.1.f. & 5.A.3.a.), select and administer tests to clients and use the results to inform diagnosis and treatment planning and to evaluate the utility of clinical interventions (Mvududu & Sink, 2013; NBCC, 2016; Neukrug & Fawcett, 2015). The extant literature includes a series of articles on factor analysis, including exploratory factor analysis (EFA; Watson, 2017), confirmatory factor analysis (CFA; Lewis, 2017), higher-order CFA (Credé & Harms, 2015), and multiple-group CFA (Dimitrov, 2010). However, reading several articles on factor analysis is likely to overwhelm professional counselors who are looking for a desk reference and/or comparative guidelines to evaluate the validity evidence of scores on instruments before administering them to clients. To these ends, professional counselors need a single resource (a “one-stop shop”) that provides a brief and practical overview of factor analysis. The primary purpose of this manuscript is to provide an overview for the layperson of the major types and extensions of factor analysis that counselors can use as a desk reference.

Construct Validity and Internal Structure

     Construct validity, the degree to which a test measures its intended theoretical trait, is a foundation of assessment literacy for demonstrating validity evidence of test scores (Bandalos & Finney, 2019). Internal structure validity, more specifically, is an essential aspect of construct validity and assessment literacy. Internal structure validity is vital for determining the extent to which items on a test combine to represent the construct of measurement (Bandalos & Finney, 2019). Factor analysis is a key method for testing the internal structure of scores on instruments in professional counseling as well as in social sciences research in general (Bandalos & Finney, 2019; Kalkbrenner, 2021b; Mvududu & Sink, 2013). In the following sections, I will provide a practical overview of the two primary methodologies of factor analysis (EFA and CFA) as well as the two main extensions of CFA (higher-order CFA and multiple-group CFA). These factor analytic techniques are particularly important elements of assessment literacy for professional counselors, as they are among the most common psychometric analyses used to validate scores on psychological screening tools (Kalkbrenner, 2021b). Readers might find it helpful to refer to Figure 1 before reading further to become familiar with some common psychometric terms that are discussed in this article and terms that also tend to appear in the measurement literature.

Figure 1

Technical and Layperson’s Definitions of Common Psychometric Terms
Note. Italicized terms are defined in this figure.

Exploratory Factor Analysis
EFA is “exploratory” in that the analysis reveals how, if at all, test items band together to form factors or subscales (Mvududu & Sink, 2013; Watson, 2017). EFA has utility for testing the factor structure (i.e., how the test items group together to form one or more scales) for newly developed or untested instruments. When evaluating the rigor of EFA in an existing psychometric study or conducting an EFA firsthand, counselors should consider sample size, assumption checking, preliminary testing, factor extraction, factor retention, factor rotation, and naming rotated factors (see Figure 2).

EFA: Sample Size, Assumption Checking, and Preliminary Testing
     Researchers should carefully select the minimum sample size for EFA before initiating data collection (Mvududu & Sink, 2013). My 2021 study (Kalkbrenner, 2021b) recommended that the minimal a priori sample size for EFA include either a subjects-to-variables ratio (STV) of 10:1 (at least 10 participants for each test item) or 200 participants, whichever produces a larger sample. EFA tends to be robust to moderate violations of normality; however, results are enriched if data are normally distributed (Mvududu & Sink, 2013). A review of skewness and kurtosis values is one way to test for univariate normality; according to Dimitrov (2012), extreme deviations from normality include skewness values > ±2 and kurtosis values > ±7; ideally, however, these values are ≤ ±1 (Mvududu & Sink, 2013). The Shapiro-Wilk and Kolmogorov-Smirnov tests can also be computed to test for normality, with non-significant p-values indicating that the parametric properties of the data are not statistically different from a normal distribution (Field, 2018); however, these tests are sensitive to large sample sizes and should be interpreted cautiously. In addition, the data should be tested for linearity (Mvududu & Sink, 2013). Furthermore, extreme univariate and multivariate outliers must be identified and dealt with (i.e., removed, transformed, or winsorized; see Field, 2018) before a researcher can proceed with factor analysis. Univariate outliers can be identified via z-scores (> 3.29), box plots, or scatter plots, and multivariate outliers can be discovered by computing Mahalanobis distance (see Field, 2018).
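These screening steps can be scripted. Below is a minimal sketch in Python (pandas, NumPy, and SciPy), assuming the item responses sit in a hypothetical DataFrame named `items` with one column per test item; the cutoffs mirror the guidelines above, and this is an illustration rather than a validated screening routine.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_items(items: pd.DataFrame) -> None:
    """Univariate normality and outlier screening prior to EFA."""
    for col in items.columns:
        x = items[col].dropna()
        sk = stats.skew(x)
        ku = stats.kurtosis(x)   # excess kurtosis; 0 indicates normality
        _, p = stats.shapiro(x)  # sensitive to large n; interpret cautiously
        flag = "OK" if abs(sk) <= 1 and abs(ku) <= 1 else "check"
        print(f"{col}: skew={sk:.2f}, kurtosis={ku:.2f}, Shapiro p={p:.3f} [{flag}]")

    # Univariate outliers: standardized scores beyond |z| = 3.29
    z = np.abs(stats.zscore(items, nan_policy="omit"))
    print("Univariate outlier rows:", np.where((z > 3.29).any(axis=1))[0])

    # Multivariate outliers: squared Mahalanobis distance against a
    # chi-square criterion (df = number of items; p = .001 convention)
    c = (items - items.mean()).to_numpy()
    inv_cov = np.linalg.inv(np.cov(items.T))
    d2 = np.einsum("ij,jk,ik->i", c, inv_cov, c)
    crit = stats.chi2.ppf(0.999, df=items.shape[1])
    print("Multivariate outlier rows:", np.where(d2 > crit)[0])
```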

Figure 2

Flow Chart for Reviewing Exploratory Factor Analysis

 

Three preliminary tests are necessary to determine if data are factorable, including (a) an inter-item correlation matrix, (b) the Kaiser–Meyer–Olkin (KMO) test for sampling adequacy, and (c) Bartlett’s test of sphericity (Beavers et al., 2013; Mvududu & Sink, 2013; Watson, 2017). The purpose of computing an inter-item correlation matrix is to identify redundant items (highly correlated) and individual items that do not fit with any of the other items (weakly correlated). An inter-item correlation matrix is factorable if a number of correlation coefficients for each item are between approximately r = .20 and r = .80 or .85 (Mvududu & Sink, 2013; Watson, 2017). Generally, a factor or subscale should be composed of at least three items (Mvududu & Sink, 2013); thus, an item should display intercorrelations between r = .20 and r = .80/.85 with at least three other items. However, inter-item correlations in this range with five to 10+ items are desirable (depending on the total number of items in the inter-item correlation matrix).

Bartlett’s test of sphericity is computed to test whether the inter-item correlation matrix is an identity matrix, in which the correlations between the items are zero (Mvududu & Sink, 2013). An identity matrix is completely unfactorable (Mvududu & Sink, 2013); thus, the desirable finding is a significant p-value, indicating that the correlation matrix is significantly different from an identity matrix. Finally, before proceeding with EFA, researchers should compute the KMO test for sampling adequacy, which is a measure of the shared variance among the items in the correlation matrix (Watson, 2017). Kaiser (1974) suggested the following guidelines for interpreting KMO values: “in the .90s – marvelous, in the .80s – meritorious, in the .70s – middling, in the .60s – mediocre, in the .50s – miserable, below .50 – unacceptable” (p. 35).
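As a sketch of how these three factorability checks might be run, the open-source Python package factor_analyzer exposes the Bartlett and KMO tests directly; `items` is again the hypothetical DataFrame of item responses, and the .20–.85 band follows the guidelines above.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def check_factorability(items: pd.DataFrame) -> None:
    # (a) Inter-item correlation matrix: count correlations per item that
    # fall in the factorable .20-.85 band (at least three are desirable)
    r = items.corr()
    for col in r.columns:
        others = r[col].drop(col).abs()
        usable = int(((others >= .20) & (others <= .85)).sum())
        print(f"{col}: {usable} correlations in the .20-.85 range")

    # (b) Bartlett's test of sphericity: a significant p-value means the
    # matrix differs from an identity matrix and is therefore factorable
    chi2, p = calculate_bartlett_sphericity(items)
    print(f"Bartlett chi-square = {chi2:.2f}, p = {p:.4f}")

    # (c) KMO sampling adequacy: interpret with Kaiser's (1974) benchmarks
    _, kmo_total = calculate_kmo(items)
    print(f"Overall KMO = {kmo_total:.2f}")
```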

Factor Extraction Methods
     Factor extraction produces a factor solution by separating each test item’s shared variance (also known as common variance) from its unique variance, or variance that is not shared with any other variables, and from error variance, or variation in an item that cannot be accounted for by the factor solution (Mvududu & Sink, 2013). Historically, principal component analysis (PCA) was the dominant factor extraction method used in social sciences research. PCA, however, is now considered a method of data reduction rather than an approach to factor analysis because PCA extracts all of the variance (shared, unique, and error) in the model. Thus, although PCA can reduce the number of items in an inter-item correlation matrix, one cannot be sure whether the factor solution is held together by shared variance (a potential theoretical model) or just by random error variance.

More contemporary factor extraction methods that only extract shared variance—for example, principal axis factoring (PAF) and maximum likelihood (ML) estimation methods—are generally recommended for EFA (Mvududu & Sink, 2013). PAF has utility if the data violate the assumption of normality, as PAF is robust to modest violations of normality (Mvududu & Sink, 2013). If, however, data are largely consistent with a normal distribution (skewness and kurtosis values ≤ ±1), researchers should consider using the ML extraction method. ML is advantageous, as it computes the likelihood that the inter-item correlation matrix was acquired from a population in which the extracted factor solution is a derivative of the scores on the items (Watson, 2017).
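A brief sketch of requesting an ML extraction, again with the factor_analyzer package; the two-factor request is purely illustrative and would be revisited during factor retention. The package also offers other extraction methods (e.g., 'minres' and 'principal'), whose exact behavior should be verified against its documentation.

```python
from factor_analyzer import FactorAnalyzer

# Unrotated ML extraction; n_factors is illustrative and is revisited
# once the retention criteria discussed next have been examined
fa = FactorAnalyzer(n_factors=2, method="ml", rotation=None)
fa.fit(items)  # `items`: the hypothetical item-response DataFrame

eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))
```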

     Factor Retention. Once a factor extraction method is deployed, psychometric researchers are tasked with retaining the most parsimonious (simple) factor solution (Watson, 2017), as the purpose of factor analysis is to account for the maximum proportion of variance (ideally, 50%–75%+) in an inter-item correlation matrix while retaining the fewest possible number of items and factors (Mvududu & Sink, 2013). Four of the most commonly used criteria for determining the appropriate number of factors to retain in social sciences research include the (a) Kaiser criterion, (b) percentage of variance among items explained by each factor, (c) scree plot, and (d) parallel analysis (Mvududu & Sink, 2013; Watson, 2017). The Kaiser criterion is a standard for retaining factors with eigenvalues (EVs) ≥ 1. An EV represents the proportion of variance that is explained by each factor in relation to the total amount of variance in the factor matrix.

The Kaiser criterion tends to overestimate the number of retainable factors; however, this criterion can be used to extract an initial factor solution (i.e., when computing the EFA for the first time). Interpreting the percentage of variance among items explained by each factor is another factor retention criterion based on the notion that a factor must account for a large enough percentage of variance to be considered meaningful (Mvududu & Sink, 2013). Typically, a factor should account for at least 5% of the variance in the total model. A scree plot is a graphical representation or a line graph that depicts the number of factors on the X-axis and the corresponding EVs on the Y-axis (see Figure 6 in Mvududu & Sink, 2013, p. 87, for a sample scree plot). The cutoff for the number of factors to retain is portrayed by a clear bend in the line graph, indicating the point at which additional factors fail to contribute a substantive amount of variance to the total model. Finally, in a parallel analysis, EVs are generated from a random data set based on the number of items and the sample size of the real (sample) data. The factors from the sample data with EVs larger than the EVs from the randomly generated data are retained based on the notion that these factors explain more variance than would be expected by random chance. In some instances, these four criteria will reveal different factor solutions. In such cases, researchers should retain the simplest factor solution that makes both statistical and substantive sense.
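Parallel analysis is straightforward to sketch from scratch: generate random data of the same dimensions as the sample, compute the eigenvalues of each random correlation matrix, and retain only the sample factors whose eigenvalues exceed the random means. The function below is a minimal NumPy illustration under those assumptions, not a validated implementation.

```python
import numpy as np

def parallel_analysis(items, n_iter: int = 100, seed: int = 0) -> int:
    """Return the number of factors whose sample eigenvalues beat chance."""
    rng = np.random.default_rng(seed)
    n, p = items.shape
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(items.T))[::-1]

    random_eigs = np.zeros((n_iter, p))
    for i in range(n_iter):
        fake = rng.standard_normal((n, p))  # random data with the same n and p
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(fake.T))[::-1]

    keep = sample_eigs > random_eigs.mean(axis=0)
    return p if keep.all() else int(np.argmin(keep))

print("Factors to retain:", parallel_analysis(items))
```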

     Factor Rotation. After determining the number of factors to retain, researchers seek to uncover the association between the items and the factors or subscales (i.e., determining which items load on which factors) and strive to find simple structure: items with high factor loadings (close to ±1) on one factor and low factor loadings (near zero) on the other factors (Watson, 2017). The factors are rotated on vectors to enhance the readability or detection of simple structure (Mvududu & Sink, 2013). Orthogonal rotation methods (e.g., varimax, equamax, and quartimax) are appropriate when a researcher is measuring distinct or uncorrelated constructs of measurement. However, orthogonal rotation methods are rarely appropriate for use in counseling research, as counselors almost exclusively appraise variables that display some degree of inter-correlation (Mvududu & Sink, 2013). Oblique rotation methods (e.g., direct oblimin and promax) are generally more appropriate in counseling research, as they allow factors to inter-correlate by rotating the data on vectors at angles of less than 90 degrees. The nature of oblique rotations allows the total variance accounted for by each factor to overlap; thus, the total variance explained in a post–oblique rotated factor solution can be misleading (Bandalos & Finney, 2019). For example, the total variance accounted for in a post–oblique rotated factor solution might add up to more than 100%. To this end, counselors should report the total variance explained by the factor solution before rotation as well as the sum of each factor’s squared structure coefficient following an oblique factor rotation.

Following factor rotation, researchers examine a number of item retention criteria to determine the items that load on each factor (Watson, 2017). Communality values (h2) represent the proportion of variance in each item that the extracted factor solution explains. Items with h2 values that range between .30 and .99 should be retained, as they share an adequate amount of variance with the other items and factors (Watson, 2017). Items with small h2 values (< .30) should be considered for removal. However, communality values should not be too high (≥ 1), as this suggests the sample size was insufficient or too many factors were extracted (Watson, 2017). Items with problematic h2 values should be removed one at a time, and the EFA should be re-computed after each removal because these values will fluctuate following each deletion. Oblique factor rotation methods produce two matrices: the pattern matrix, which displays the relationship between the items and a factor while controlling for the items’ association with the other factors, and the structure matrix, which depicts the correlation between the items and all of the factors (Mvududu & Sink, 2013). Researchers should examine both the pattern and the structure matrices and interpret the one that displays the clearest evidence of simple structure with the least evidence of cross-loadings.

Items should display a factor loading of ≥ .40 (≥ .50 is desirable) to mark a factor. Items that fail to meet the minimum factor loading of ≥ .40 should be deleted. Cross-loading is evident when an item displays factor loadings ≥ .30 to .35 on two or more factors (Beavers et al., 2013; Mvududu & Sink, 2013; Watson, 2017). Researchers may elect to assign a variable to one factor if that item’s loading is .10 higher than the next highest loading. Items that cross-load might also be deleted. Once again, items should be deleted one at a time, and the EFA should be re-computed after each removal.
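A sketch of this screening loop, once more with factor_analyzer: fit an oblique (oblimin) solution, then flag low-communality, weakly loading, and cross-loading items using the cutoffs above. The two-factor model and variable names are illustrative.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

fa = FactorAnalyzer(n_factors=2, method="ml", rotation="oblimin")
fa.fit(items)

h2 = fa.get_communalities()
pattern = fa.loadings_  # rotated loadings (pattern matrix)
for i, col in enumerate(items.columns):
    loads = np.sort(np.abs(pattern[i]))[::-1]
    top, second = loads[0], loads[1]
    if h2[i] < .30:
        print(f"{col}: h2 = {h2[i]:.2f} < .30 -> consider removal")
    elif top < .40:
        print(f"{col}: max loading {top:.2f} < .40 -> consider removal")
    elif second >= .30 and (top - second) < .10:
        print(f"{col}: cross-loads ({top:.2f} vs. {second:.2f}) -> review")
# Re-run the EFA after each single deletion; h2 and loadings will shift.
```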

Naming the Rotated Factors
     The final step in EFA is naming the rotated factors; factor names should be brief (approximately one to four words) and capture the theoretical meaning of the group of items that comprise the factor (Mvududu & Sink, 2013). This is a subjective process, and the literature lacks consistent guidelines for naming factors. A research team can be incorporated into the process. Test developers can separately name each factor and then meet with their research team to discuss and eventually come to an agreement about the most appropriate name for each factor.

Confirmatory Factor Analysis
     CFA is an application of structural equation modeling for testing the extent to which a hypothesized factor solution (e.g., the factor solution that emerged in the EFA or another existing factor solution) demonstrates an adequate fit with a different sample (Kahn, 2006; Lewis, 2017). When validating scores on a new test, investigators should compute both EFA and CFA with two different samples from the same population, as the emergent internal structure in EFA can vary substantially across samples. Researchers can collect two sequential samples, or they may elect to collect one large sample and divide it into two smaller samples, one for the EFA and the second for the CFA.

Evaluating model fit in CFA is a complex task that is typically accomplished by examining the collective implications of multiple goodness-of-fit (GOF) indices, which include absolute, incremental, and parsimonious indices (Lewis, 2017). Absolute fit indices evaluate the extent to which the hypothesized model or the dimensionality of the existing measure fits the data collected from a new sample. Incremental fit indices compare the improvement in fit between the hypothesized model and a null model (also referred to as an independence model) in which there is no correlation between observed variables. Parsimonious fit indices take the model’s complexity into account by testing the extent to which model fit is improved by estimating fewer pathways (i.e., creating a more parsimonious or simple model). Psychometric researchers generally report a combination of absolute, incremental, and parsimonious fit indices to demonstrate acceptable model fit (Mvududu & Sink, 2013). Table 1 includes tentative guidelines for interpreting model fit based on the synthesized recommendations of leading psychometric researchers from a comprehensive search of the measurement literature (Byrne, 2016; Dimitrov, 2012; Fabrigar et al., 1999; Hooper et al., 2008; Hu & Bentler, 1999; Kahn, 2006; Lewis, 2017; Mvududu & Sink, 2013; Schreiber et al., 2006; Worthington & Whittaker, 2006).

Table 1

Fit Indices and Tentative Thresholds for Evaluating Model Fit

Note. The fit indices and benchmarks to estimate the degree of model fit in this table are offered as tentative guidelines for scores on attitudinal measures based on the synthesized recommendations of numerous psychometric researchers (see citations in the “Confirmatory Factor Analysis” section of this article). The list of fit indices in this table is not all-inclusive (i.e., not all of them are typically reported). There is no universal approach for determining which fit indices to investigate, nor are there any absolute thresholds for determining the degree of model fit. No single fit index is sufficient for determining model fit. Researchers are tasked with selecting and interpreting fit indices holistically (i.e., collectively), in ways that make both statistical and substantive sense based on their construct of measurement and the goals of the study.
*.90 to .94 can denote an acceptable model fit for incremental fit indices; however, the majority of values should be ≥ .95.
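For readers who want to compute these indices themselves, one option is the open-source Python package semopy, which accepts lavaan-style model syntax. The sketch below assumes a hypothetical two-factor model with items named item1 through item8 in the `items` DataFrame; the printed indices can then be compared against Table 1.

```python
import semopy

# Hypothetical two-factor CFA; factor and item names are illustrative
model_desc = """
Factor1 =~ item1 + item2 + item3 + item4
Factor2 =~ item5 + item6 + item7 + item8
Factor1 ~~ Factor2
"""

model = semopy.Model(model_desc)
model.fit(items)  # `items`: DataFrame containing the eight item columns

stats = semopy.calc_stats(model)  # includes chi2, CFI, TLI, RMSEA, and more
print(stats.T)
```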

 

Model Respecification
     The results of a CFA might reveal a poor or unacceptable model fit (see Table 1), indicating that the dimensionality of the hypothesized model that emerged from the EFA was not replicated or confirmed with a second sample (Mvududu & Sink, 2013). CFA is a rigorous model-fitting procedure, and poor model fit in a CFA might indicate that the EFA-derived factor solution is insufficient for appraising the construct of measurement. Because CFA is a more stringent test of structural validity than EFA, psychometric researchers sometimes refer to the modification indices (also referred to as Lagrange multiplier statistics), which denote the expected decrease in the χ2 value (i.e., degree of improvement in model fit) if a constrained parameter is freely estimated (Dimitrov, 2012). In these instances, correlating the error terms between items or removing problematic items may improve model fit; however, when considering model respecification, psychometric researchers should proceed cautiously, if at all, as a strong theoretical justification is necessary to defend model respecification (Byrne, 2016; Lewis, 2017; Schreiber et al., 2006). Researchers should also be clear that model respecification effectively turns the CFA into an EFA, because they are investigating the dimensionality of a different or modified model rather than confirming the structure of an existing, hypothesized model.

Higher-Order CFA
     Higher-order CFA is an extension of CFA that allows researchers to test nested models and determine if a second-order latent variable (factor) explains the associations between the factors in a single-order CFA (Credé & Harms, 2015). Similar to single-order CFA (see Figure 3, Model 1) in which the test items cluster together to form the factors or subscales, higher-order CFA reveals if the factors are related to one another strongly enough to suggest the presence of a global factor (see Figure 3, Model 3). Suppose, for example, the test developer of a scale for measuring dimensions of the therapeutic alliance confirmed the three following subscales via single-order CFA (see Figure 3, Model 1): Empathy, Unconditional Positive Regard, and Congruence. Computing a higher-order CFA would reveal if a higher-order construct, which the research team might name Therapeutic Climate, is present in the data. In other words, higher-order CFA reveals if Empathy, Unconditional Positive Regard, and Congruence, collectively, comprise the second-order factor of Therapeutic Climate.

Determining if a higher-order factor explains the co-variation (association) between single-order factors is a complex undertaking. Thus, researchers should consider a number of criteria when deciding if their data are appropriate for higher-order CFA (Credé & Harms, 2015). First, moderate-to-strong associations (co-variance) should exist between first-order factors. Second, the unidimensional factor solution (see Figure 3, Model 2) should display a poor model fit (see Table 1) with the data. Third, theoretical support should exist for the presence of a higher-order factor. Referring to the example in the previous paragraph, person-centered therapy provides a theory-based explanation for the presence of a second-order or global factor (Therapeutic Climate) based on the integration of the single-order factors (Empathy, Unconditional Positive Regard, and Congruence). In other words, the presence of a second-order factor suggests that Therapeutic Climate explains the strong association between Empathy, Unconditional Positive Regard, and Congruence.

Finally, the single-order factors should display strong factor loadings (approximately ≥ .70) on the higher-order factor. However, there is no absolute consensus among psychometric researchers regarding the criteria for higher-order CFA, and the criteria summarized in this section are not a dualistic decision rule for retaining or rejecting a higher-order model. Thus, researchers are tasked with demonstrating that their data meet a number of criteria to justify the presence of a higher-order factor. If the results of a higher-order CFA reveal an acceptable model fit (see Table 1), researchers should directly compare (e.g., via a chi-squared test of difference) the single-order and higher-order models to determine if one model demonstrates a superior fit with the data at a statistically significant level.
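The chi-squared difference test mentioned above reduces to simple arithmetic once each model’s chi-square and degrees of freedom are known. A minimal SciPy sketch, with placeholder fit values rather than results from any real scale:

```python
from scipy import stats

def chi_square_difference(chi2_a: float, df_a: int, chi2_b: float, df_b: int):
    """Compare two nested models; a significant p favors the less constrained one."""
    d_chi2, d_df = abs(chi2_a - chi2_b), abs(df_a - df_b)
    return d_chi2, d_df, stats.chi2.sf(d_chi2, d_df)

# Placeholder values for a single-order vs. higher-order comparison
d_chi2, d_df, p = chi_square_difference(61.3, 34, 70.8, 36)
print(f"Delta chi2({d_df}) = {d_chi2:.1f}, p = {p:.3f}")
```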

Figure 3

Single-Order, Unidimensional, and Higher-Order Factor Solutions

 

Multiple-Group Confirmatory Factor Analysis
     Multiple-group confirmatory factor analysis (MCFA) is an extension of CFA for testing the factorial invariance (psychometric equivalence) of a scale across subgroups of a sample or population (C.-C. Chen et al., 2020; Dimitrov, 2010). In other words, MCFA has utility for testing the extent to which a particular construct has the same meaning across different groups of a larger sample or population. Suppose, for example, the developer of the Therapeutic Climate scale (see example in the previous section) validated scores on their scale with undergraduate college students. Invariance testing has potential to provide further support for the internal structure validity of the scale by testing whether Empathy, Unconditional Positive Regard, and Congruence have the same meaning across different subgroups of undergraduate college students (e.g., between different gender identities, ethnic identities, age groups, and other subgroups of the larger sample).

     Levels of Invariance. Factorial invariance can be tested in a number of different ways and includes the following primary levels or aspects: (a) configural invariance, (b) measurement (metric, scalar, and strict) invariance, and (c) structural invariance (Dimitrov, 2010, 2012). Configural invariance (also referred to as pattern invariance) serves as the baseline model (typically the best-fitting model with the data), which is used as the point of comparison when testing for metric, scalar, and structural invariance. In layperson’s terms, configural invariance is a test of whether the scales are approximately similar across groups.

Measurement invariance includes testing for metric and scalar invariance. Metric invariance is a test of whether each test item makes an approximately equal contribution (i.e., approximately equal factor loadings) to the latent variable (composite scale score) across groups. In layperson’s terms, metric invariance evaluates whether the scale reasonably captures the same construct in each group. Scalar invariance adds a layer of rigor to metric invariance by testing whether differences between the average scores on the items are attributable to differences in the latent variable means. In layperson’s terms, scalar invariance indicates that group differences in item scores reflect true differences on the underlying construct.

Strict invariance is the most stringent level of measurement invariance testing; it tests whether the items’ residual variances (item variation that is not in common with the factor, together with error variance) are equivalent across groups. In layperson’s terms, the presence of strict invariance demonstrates that score differences between groups are exclusively due to differences in the common latent variables. Strict invariance, however, is typically not examined in social sciences research because the latent factors are not composed of residuals; thus, residuals are negligible when evaluating mean differences in latent scores (Putnick & Bornstein, 2016).

Finally, structural invariance is a test of whether the latent factor variances and covariances are equivalent across groups (Dimitrov, 2010, 2012). Structural invariance tests the null hypothesis that there are no statistically significant differences between the unconstrained and constrained models (i.e., it determines whether the unconstrained model is equivalent to the constrained model). Establishing structural invariance indicates that when the structural pathways are allowed to vary across the two groups, they naturally produce equal results, which supports the notion that the structure of the model is invariant across both groups. In layperson’s terms, the presence of structural invariance indicates that the pathways (directionality) between variables behave in the same way across both groups. It is necessary to establish configural and metric invariance prior to testing for structural invariance.

     Sample Size and Criteria for Evaluating Invariance. Researchers should check their sample size before computing invariance testing, as small samples (approximately < 200 per group) can overestimate model fit (Dimitrov, 2010). As with single-order CFA, no absolute sample size guidelines exist in the literature for invariance testing. Generally, a minimum sample of at least 200 participants per group is recommended (although 300+ per group is advantageous). Referring back to the Therapeutic Climate scale example (see the previous section), investigators would need a minimum sample of 400 if they were seeking to test the invariance of the scale by generational status (200 first generation + 200 non–first generation = 400). The minimum sample size would increase as more levels are added; for example, a minimum sample of 600 would be recommended if investigators quantified generational status on three levels (200 first generation + 200 second generation + 200 third generation and beyond = 600).

Factorial invariance is investigated through a computation of the change in model fit at each level of invariance testing (F. F. Chen, 2007). Historically, the Satorra and Bentler chi-square difference test was the sole criterion for testing factorial invariance, with a non-significant p-value indicating factorial invariance (Putnick & Bornstein, 2016). The chi-square difference test is still commonly reported by contemporary psychometric researchers; however, it is rarely used as the sole criterion for determining invariance, as the test is sensitive to large samples. The combined recommendations of F. F. Chen (2007) and Putnick and Bornstein (2016) include the following thresholds for investigating invariance: ≤ ∆ 0.010 in CFI, ≤ ∆ 0.015 in RMSEA, and ≤ ∆ 0.030 in SRMR for metric invariance, or ≤ ∆ 0.015 in SRMR for scalar invariance. In a simulation study, Kang et al. (2016) found that McDonald’s NCI (MNCI) outperformed the CFI in terms of stability. Kang et al. (2016) recommended < ∆ 0.007 in MNCI for the 5th percentile and ≤ ∆ 0.007 in MNCI for the 1st percentile as cutoff values for measurement quality. Strong measurement invariance is achieved when both metric and scalar invariance are met, and weak invariance is accomplished when only metric invariance is present (Dimitrov, 2010).
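These delta thresholds are easy to apply once the fit indices for each nested model are in hand. The sketch below checks the metric step against the F. F. Chen (2007) and Putnick and Bornstein (2016) cutoffs; the index values are placeholders, not results from any real scale.

```python
def metric_invariance_held(configural: dict, metric: dict) -> bool:
    """Apply the delta-fit cutoffs for the configural-to-metric comparison."""
    d_cfi = configural["CFI"] - metric["CFI"]        # drop in CFI
    d_rmsea = metric["RMSEA"] - configural["RMSEA"]  # rise in RMSEA
    d_srmr = metric["SRMR"] - configural["SRMR"]     # rise in SRMR
    print(f"dCFI={d_cfi:.3f}, dRMSEA={d_rmsea:.3f}, dSRMR={d_srmr:.3f}")
    return d_cfi <= .010 and d_rmsea <= .015 and d_srmr <= .030

# Placeholder fit indices for the two nested models
configural = {"CFI": .960, "RMSEA": .050, "SRMR": .040}
metric = {"CFI": .955, "RMSEA": .055, "SRMR": .050}
print("Metric invariance supported:", metric_invariance_held(configural, metric))
```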

Exemplar Review of a Psychometric Study

     The following section includes a review of an exemplar psychometric study based on the recommendations for EFA (see Figure 2) and CFA (see Table 1) that are provided in this manuscript. In 2020, I collaborated with Ryan Flinn on the development and validation of scores on the Mental Distress Response Scale (MDRS) for appraising how college students are likely to respond when encountering a peer in mental distress (Kalkbrenner & Flinn, 2020). A total of 13 items were entered into an EFA. Following the steps for EFA (see Figure 2), the sample size (N = 569) exceeded the guidelines for sample size that I published in my 2021 article (Kalkbrenner, 2021b), including an STV of 10:1 or 200 participants, whichever produces a larger sample. We ensured that the data were consistent with a normal distribution (skewness and kurtosis values ≤ ±1) and computed preliminary assumption checking, including an inter-item correlation matrix, the KMO test (.73), and Bartlett’s test of sphericity (p < .001).

An ML factor extraction method was employed, as the data were largely consistent (skewness and kurtosis values ≤ ±1) with a normal distribution. We used the three most rigorous factor retention criteria (percentage of variance accounted for, scree test, and parallel analysis) to extract a two-factor solution. An oblique factor rotation method (direct oblimin) was employed, as the two factors were correlated. We referred to the recommended item retention criteria, including h2 values between .30 and .99, factor loadings ≥ .40, and cross-loadings ≥ .30, to eliminate one item with a low communality and two cross-loading items. Using a research team, we named the first factor Diminish/Avoid, as each item that marked this factor reflected a dismissive or evasive response to encountering a peer in mental distress. The second factor was named Approach/Encourage because each item that marked this factor included a response to a peer in mental distress that was active and likely to help connect the peer to mental health support services.

Our next step was to compute a CFA by administering the MDRS to a second sample of undergraduate college students to confirm the two-dimensional factor solution that emerged in the EFA. The sample size (N = 247) was sufficient for CFA (STV > 10:1 and > 200 participants). The MDRS items were entered into a CFA, and the following GOF indices emerged: CMIN = χ2 (34) = 61.34, p = .003, CMIN/DF = 1.80, CFI = .96, IFI = .96, RMSEA = .06, 90% CI [0.03, 0.08], and SRMR = .04. A comparison of these GOF indices with the thresholds for evaluating model fit in Table 1 reveals an acceptable-to-strong fit between the MDRS model and the data. Collectively, our procedures for EFA and CFA were consistent with the recommendations in this manuscript.

Implications for the Profession

Implications for Counseling Practitioners
     Assessment literacy is a vital component of professional counseling practice, as counselors who practice in a variety of specialty areas select and administer tests to clients and use the results to inform diagnosis and treatment planning (C.-C. Chen et al., 2020; Mvududu & Sink, 2013; NBCC, 2016; Neukrug & Fawcett, 2015). It is important to note that test results alone should not be used to make diagnoses, as tests are not inherently valid (Kalkbrenner, 2021b). In fact, the authors of the Diagnostic and Statistical Manual of Mental Disorders stated that “scores from standardized measures and interview sources must be interpreted using clinical judgment” (American Psychiatric Association, 2013, p. 37). Professional counselors can use test results to inform their diagnoses; however, diagnostic decision making should ultimately come down to a counselor’s clinical judgment.

Counseling practitioners can refer to this manuscript as a reference for evaluating the internal structure validity of scores on a test to help determine the extent to which, if any at all, the test in question is appropriate for use with clients. When evaluating the rigor of an EFA for example, professional counselors can refer to this manuscript to evaluate the extent to which test developers followed the appropriate procedures (e.g., preliminary assumption checking, factor extraction, retention, and rotation [see Figure 2]). Professional counselors are encouraged to pay particular attention to the factor extraction method that the test developers employed, as PCA is sometimes used in lieu of more appropriate methods (e.g., PAF/ML). Relatedly, professional counselors should be vigilant when evaluating the factor rotation method employed by test developers because oblique rotation methods are typically more appropriate than orthogonal (e.g., varimax) for counseling tests.

CFA is one of the most commonly used tests of the internal structure validity of scores on psychological assessments (Kalkbrenner, 2021b). Professional counselors can compare the CFA fit indices in a test manual or journal article to the benchmarks in Table 1 and come to their own conclusion about the internal structure validity of scores on a test before using it with clients. Relatedly, the layperson’s definitions of common psychometric terms in Figure 1 might have utility for increasing professional counselors’ assessment literacy by helping them decipher some of the psychometric jargon that commonly appears in psychometric studies and test manuals.

Implications for Counselor Education
     Assessment literacy begins in one’s counselor education program, and it is imperative that counselor educators teach their students to be proficient in recognizing and evaluating internal structure validity evidence of test scores. Teaching internal structure validity evidence can be especially challenging because counseling students tend to fear learning about psychometrics and statistics (Castillo, 2020; Steele & Rawls, 2015), which can contribute to their reticence and uncertainty when encountering psychometric research. This reticence can lead students to read the methodology section of a psychometric study briefly, if at all. Counselor educators might suggest the present article as a resource for students taking classes in research methods and assessment, as well as for students completing their practicum, internship, or dissertation who are evaluating the rigor of existing measures for use with clients or research participants.

Counselor educators should urge their students not to skip over the methodology section of a psychometric study. When selecting instrumentation for use with clients or research participants, counseling students and professionals should begin by reviewing the methodology sections of journal articles and test manuals to ensure that test developers employed rigorous and empirically supported procedures for test development and score validation. Professional counselors and their students can compare the empirical steps and guidelines for structural validation of scores that are presented in this manuscript with the information in test manuals and journal articles of existing instrumentation to evaluate its internal structure. Counselor educators who teach classes in assessment or psychometrics might integrate an instrument evaluation assignment into the course in which students select a psychological instrument and critique its psychometric properties. Another way that counselor educators who teach classes in current issues, research methods, assessment, or ethics can facilitate their students’ assessment literacy development is by creating an assignment that requires students to interview a psychometric researcher. Students can find psychometric researchers by reviewing the editorial board members and authors of articles published in the two peer-reviewed journals of the Association for Assessment and Research in Counseling, Measurement and Evaluation in Counseling and Development and Counseling Outcome Research and Evaluation. Students might increase their interest and understanding about the necessity of assessment literacy by talking to researchers who are passionate about psychometrics.

Assessment Literacy: Additional Considerations

Internal structure validity of scores is a crucial component of assessment literacy for evaluating the construct validity of test scores (Bandalos & Finney, 2019). Assessment literacy, however, is a vast construct and professional counselors should consider a number of additional aspects of test worthiness when evaluating the potential utility of instrumentation for use with clients. Reviewing these additional considerations is beyond the scope of this manuscript; however, readers can refer to the following features of assessment literacy and corresponding resources: reliability (Kalkbrenner, 2021a), practicality (Neukrug & Fawcett, 2015), steps in the instrument development process (Kalkbrenner, 2021b), and convergent and divergent validity evidence of scores (Swank & Mullen, 2017). Moreover, the discussion of internal structure validity evidence of scores in this manuscript is based on Classical Test Theory (CTT), which tends to be an appropriate platform for attitudinal measures. However, Item Response Theory (see Amarnani, 2009) is an alternative to CTT with particular utility for achievement and aptitude testing.

Cross-Cultural Considerations in Assessment Literacy
     Professional counselors have an ethical obligation to consider the cross-cultural fairness of a test before use with clients, as the validity of test scores is culturally dependent (American Counseling Association [ACA], 2014; Kane, 2010; Neukrug & Fawcett, 2015; Swanepoel & Kruger, 2011). Cross-cultural fairness (also known as test fairness) in testing and assessment “refers to the comparability of score meanings across individuals, groups or settings” (Swanepoel & Kruger, 2011, p. 10). There is some overlap between internal structure validity and cross-cultural fairness; however, some distinct differences exist as well.

Using CFA to confirm the factor structure of an established test with participants from a different culture is one way to investigate the cross-cultural fairness of scores. Suppose, for example, an investigator found acceptable internal structure validity evidence (see Table 1) for scores on an anxiety inventory, normed in America, with a sample of Eastern European participants who identify with a collectivist cultural background. Such findings would suggest that the dimensionality of the anxiety inventory extends to the sample of Eastern European participants. However, internal structure validity testing alone might not be sufficient for testing the cross-cultural fairness of scores, as factor analysis does not test for content validity. In other words, although the CFA confirmed the dimensionality of an American model with a sample of Eastern European participants, the analysis did not take potential qualitative differences in the construct of measurement (anxiety severity) into account. It is possible (and perhaps likely) that the lived experience of anxiety differs between those living in two different cultures. Accordingly, a systems-level approach to test development and score validation can have utility for enhancing the cross-cultural fairness of scores (Swanepoel & Kruger, 2011).

A Systems-Level Approach to Test Development and Score Validation
     Swanepoel and Kruger (2011) outlined a systemic, circular approach to test development that incorporates qualitative inquiry, which has utility for uncovering nuances of participants’ lived experiences that quantitative data fail to capture. For example, an exploratory-sequential mixed-methods design, in which qualitative findings guide the quantitative analyses, is a particularly good fit with systemic approaches to test development and score validation. Referring to the example in the previous section, test developers might conduct qualitative interviews to develop a grounded theory of anxiety severity in the context of the collectivist culture. The grounded theory findings could then be used as the theoretical framework (see Kalkbrenner, 2021b) for a psychometric study aimed at testing the generalizability of the qualitative findings. Thus, in addition to evaluating the rigor of factor analytic results, professional counselors should also review the cultural context in which test items were developed before administering a test to clients.

Language adaptations of instrumentation are another relevant cross-cultural fairness consideration in counseling research and practice. Word-for-word translations alone are insufficient for capturing cross-cultural fairness of instrumentation, as culture extends beyond language alone (Lenz et al., 2017; Swanepoel & Kruger, 2011). Pure word-for-word translations can also cause semantic errors. For example, feeling “fed up” might translate to feeling angry in one language and to feeling full after a meal in another. Accordingly, professional counselors should ensure that a translated instrument was subjected to rigorous procedures for maintaining cross-cultural fairness. Reviewing such procedures is beyond the scope of this manuscript; however, Lenz et al. (2017) outlined a six-step process for language translation and cross-cultural adaptation of instruments.

Conclusion

Gaining a deeper understanding of the major approaches to factor analysis for demonstrating internal structure validity in counseling research has the potential to increase assessment literacy among professional counselors who work in a variety of specialty areas. It should be noted that the thresholds for interpreting the strength of internal structure validity coefficients provided throughout this manuscript should be used as tentative guidelines, not unconditional standards. Ultimately, internal structure validity is a function of test scores and the construct of measurement. The stakes or consequences of test results should be considered when making final decisions about the strength of validity coefficients. As professional counselors increase their familiarity with factor analysis, they will likely become more cognizant of the strengths and limitations of counseling-related tests and better able to determine their utility for use with clients. The practical overview of factor analysis presented in this manuscript can serve as a one-stop resource for selecting tests with validated scores for use with clients, a primer for teaching courses, and a reference for conducting research.

 

Conflict of Interest and Funding Disclosure
The author reported no conflict of interest
or funding contributions for the development
of this manuscript.


References

Amarnani, R. (2009). Two theories, one theta: A gentle introduction to item response theory as an alternative to classical test theory. The International Journal of Educational and Psychological Assessment, 3, 104–109.

American Counseling Association. (2014). ACA code of ethics. https://www.counseling.org/resources/aca-code-of-ethics.pdf

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. https://www.apa.org/science/programs/testing/standards

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.).
https://doi.org/10.1176/appi.books.9780890425596

Bandalos, D. L., & Finney, S. J. (2019). Factor analysis: Exploratory and confirmatory. In G. R. Hancock, L. M. Stapleton, & R. O. Mueller (Eds.), The reviewer’s guide to quantitative methods in the social sciences (2nd ed., pp. 98–122). Routledge.

Beavers, A. S., Lounsbury, J. W., Richards, J. K., Huck, S. W., Skolits, G. J., & Esquivel, S. L. (2013). Practical considerations for using exploratory factor analysis in educational research. Practical Assessment, Research and Evaluation, 18(5/6), 1–13. https://doi.org/10.7275/qv2q-rk76

Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge.

Castillo, J. H. (2020). Teaching counseling students the science of research. In M. O. Adekson (Ed.), Beginning your counseling career: Graduate preparation and beyond (pp. 122–130). Routledge.

Chen, C.-C., Lau, J. M., Richardson, G. B., & Dai, C.-L. (2020). Measurement invariance testing in counseling. Journal of Professional Counseling: Practice, Theory & Research, 47(2), 89–104.
https://doi.org/10.1080/15566382.2020.1795806

Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14(3), 464–504. https://doi.org/10.1080/10705510701301834

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Credé, M., & Harms, P. D. (2015). 25 years of higher-order confirmatory factor analysis in the organizational sciences: A critical review and development of reporting recommendations. Journal of Organizational Behavior, 36(6), 845–872. https://doi.org/10.1002/job.2008

Dimitrov, D. M. (2010). Testing for factorial invariance in the context of construct validation. Measurement and Evaluation in Counseling and Development, 43(2), 121–149. https://doi.org/10.1177/0748175610373459

Dimitrov, D. M. (2012). Statistical methods for validation of assessment scale data in counseling and related fields. American Counseling Association.

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299.
https://doi.org/10.1037/1082-989X.4.3.272

Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE.

Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modelling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6(1), 53–60.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Kahn, J. H. (2006). Factor analysis in counseling psychology research, training, and practice: Principles, advances, and applications. The Counseling Psychologist, 34(5), 684–718. https://doi.org/10.1177/0011000006286347

Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36. https://doi.org/10.1007/BF02291575

Kalkbrenner, M. T. (2021a). Alpha, omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation. Advance online publication. https://doi.org/10.1080/21501378.2021.1940118

Kalkbrenner, M. T. (2021b). A practical guide to instrument development and score validation in the social sciences: The MEASURE Approach. Practical Assessment, Research, and Evaluation, 26, Article 1. https://scholarworks.umass.edu/pare/vol26/iss1/1

Kalkbrenner, M. T., & Flinn, R. E. (2020). The Mental Distress Response Scale and promoting peer-to-peer mental health support: Implications for college counselors and student affairs officials. Journal of College Student Development, 61(2), 246–251. https://doi.org/10.1353/csd.2020.0021

Kane, M. (2010). Validity and fairness. Language Testing, 27(2), 177–182. https://doi.org/10.1177/0265532209349467

Kang, Y., McNeish, D. M., & Hancock, G. R. (2016). The role of measurement quality on practical guidelines for assessing measurement and structural invariance. Educational and Psychological Measurement, 76(4), 533–561. https://doi.org/10.1177/0013164415603764

Lenz, A. S., Gómez Soler, I., Dell’Aquilla, J., & Uribe, P. M. (2017). Translation and cross-cultural adaptation of assessments for use in counseling research. Measurement and Evaluation in Counseling and Development, 50(4), 224–231. https://doi.org/10.1080/07481756.2017.1320947

Lewis, T. F. (2017). Evidence regarding the internal structure: Confirmatory factor analysis. Measurement and Evaluation in Counseling and Development, 50(4), 239–247. https://doi.org/10.1080/07481756.2017.1336929

Mvududu, N. H., & Sink, C. A. (2013). Factor analysis in counseling research and practice. Counseling Outcome Research and Evaluation, 4(2), 75–98. https://doi.org/10.1177/2150137813494766

National Board for Certified Counselors. (2016). NBCC code of ethics. https://www.nbcc.org/Assets/Ethics/NBCCCodeofEthics.pdf

Neukrug, E. S., & Fawcett, R. C. (2015). Essentials of testing and assessment: A practical guide for counselors, social workers, and psychologists (3rd ed.). Cengage.

Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review, 41, 71–90.  https://doi.org/10.1016/j.dr.2016.06.004

Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. Journal of Educational Research, 99(6), 323–338.
https://doi.org/10.3200/JOER.99.6.323-338

Steele, J. M., & Rawls, G. J. (2015). Quantitative research attitudes and research training perceptions among master’s-level students. Counselor Education and Supervision, 54(2), 134–146. https://doi.org/10.1002/ceas.12010

Swanepoel, I., & Kruger, C. (2011). Revisiting validity in cross-cultural psychometric-test development: A systems-informed shift towards qualitative research designs. South African Journal of Psychiatry, 17(1), 10–15. https://doi.org/10.4102/sajpsychiatry.v17i1.250

Swank, J. M., & Mullen, P. R. (2017). Evaluating evidence for conceptually related constructs using bivariate correlations. Measurement and Evaluation in Counseling and Development, 50(4), 270–274.
https://doi.org/10.1080/07481756.2017.1339562

Tate, K. A., Bloom, M. L., Tassara, M. H., & Caperton, W. (2014). Counselor competence, performance assessment, and program evaluation: Using psychometric instruments. Measurement and Evaluation in Counseling and Development, 47(4), 291–306. https://doi.org/10.1177/0748175614538063

Watson, J. C. (2017). Establishing evidence for internal structure using exploratory factor analysis. Measurement and Evaluation in Counseling and Development, 50(4), 232–238. https://doi.org/10.1080/07481756.2017.1336931

Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34(6), 806–838. https://doi.org/10.1177/0011000006288127

Michael T. Kalkbrenner, PhD, NCC, is an associate professor at New Mexico State University. Correspondence may be addressed to Michael T. Kalkbrenner, Department of Counseling and Educational Psychology, New Mexico State University, Las Cruces, NM 88003, mkalk001@nmsu.edu.

 

Development of the Psychological Maltreatment Inventory

Alison M. Boughn, Daniel A. DeCino

 

This article introduces the development and implementation of the Psychological Maltreatment Inventory (PMI) assessment with child respondents receiving services because of an open child abuse and/or neglect case in the Midwest (N = 166). Sixteen items were selected based on the literature, subject matter expert refinement, and readability assessments. Results indicate the PMI has high reliability (α = .91). There was no evidence the PMI total score was influenced by demographic characteristics. A positive relationship was discovered between PMI scores and general trauma symptom scores on the Trauma Symptom Checklist for Children Screening Form (TSCC-SF; r = .78, p = .01). Evidence from this study demonstrates the need to refine the PMI for continued use with children. Implications for future research include identification of psychological maltreatment in isolation, further testing and refinement of the PMI, and exploring the potential relationship between psychological maltreatment and suicidal ideation. 

Keywords: psychological maltreatment, child abuse, neglect, assessment, trauma

 

In 2012, the Centers for Disease Control (CDC, 2012) reported that the total cost of child maltreatment (CM) in 2008, including psychological maltreatment (PM), was $124 billion. Fang et al. (2012) estimated the lifetime burden of CM in 2008 was as high as $585 billion. The CDC (2012) characterized CM as rivaling “other high profile public health problems” (para. 1). By 2015, the National Institutes of Health reported that the total cost of CM, based on substantiated incidents, was $428 billion, nearly 3.5 times the 2008 figure in just 7 years; the true cost was likely much higher (Peterson et al., 2018). Applying the sensitivity analysis conducted by Fang et al. (2012), the lifetime burden of CM in 2015 may have been as high as $2 trillion. If these trends continue unabated, the United States could expect a total cost for CM, including PM, of $5.1 trillion by 2030, with a total lifetime cost of $24 trillion. More concerning, this increase would not account for any impact from the COVID-19 pandemic.

Mental health first responders and child protection professionals may encounter PM regularly in their careers (Klika & Conte, 2017; U.S. Department of Health and Human Services [DHHS], 2018). PM experiences are defined as inappropriate emotional and psychological acts (e.g., excessive yelling, threatening language or behavior) and/or the lack of appropriate acts (e.g., saying “I love you”) used by perpetrators of abuse and neglect to gain control over their victims (American Professional Society on the Abuse of Children [APSAC], 2019; Klika & Conte, 2017; Slep et al., 2015). Victims may experience negative societal perceptions (i.e., stigma), fear of retribution from caregivers or guardians, or misdiagnosis by professional helpers (Iwaniec, 2006; López et al., 2015). They often face adverse consequences that last their entire lifetime (Spinazzola et al., 2014; Tyrka et al., 2013; Vachon et al., 2015; van der Kolk, 2014; van Harmelen et al., 2010; Zimmerman & Mercy, 2010). PM can be difficult to identify because it leaves no readily visible trace of injury (e.g., bruises, cuts, or broken bones), making it complicated to substantiate that a crime has occurred (Ahern et al., 2014; López et al., 2015). Retrospective data outline evaluation processes for PM identification in adulthood; however, childhood PM lacks a single definition and remains difficult to assess (Tonmyr et al., 2011). These complexities in identifying PM in children may prevent mental health professionals from intervening early, providing crucial care, and referring victims for psychological health services (Marshall, 2012; Spinazzola et al., 2014). The Psychological Maltreatment Inventory (PMI) is the first instrument of its kind to address these deficits.

Child Psychological Maltreatment
     Although broadly conceptualized, child PM is described as acts, events, or experiences that create current or future symptoms affecting a victim without immediate physical evidence (López et al., 2015). Others have extended child PM to include continued patterns of severe events that impede a child from securing basic psychological needs and convey to the child that they are worthless, flawed, or unwanted (APSAC, 2019). Unfortunately, these broad concepts lack the specificity to guide legal and mental health interventions (Ahern et al., 2014). Furthermore, legal definitions of child PM vary from jurisdiction to jurisdiction and state to state (Spinazzola et al., 2014). The lack of consistent definitions and quantifiable measures of child PM may create barriers for prosecutors and other helping professionals within the legal system, as well as a limited understanding of PM in evidence-based research (American Psychiatric Association [APA], 2013; APSAC, 2019; Klika & Conte, 2017). These challenges are exacerbated by comorbidity with other forms of maltreatment.

Co-Occurring Forms of Maltreatment
     According to DHHS (2018), child PM is rarely documented as occurring in isolation compared to other forms of maltreatment (i.e., physical abuse, sexual abuse, or neglect). Rather, researchers have found PM typically coexists with other forms of maltreatment (DHHS, 2018; Iwaniec, 2006; Marshall, 2012). Klika and Conte (2017) reported that perpetrators who use physical abuse, inappropriate language, and isolation facilitate conditions for PM to coexist with other forms of abuse. Van Harmelen et al. (2011) argued that neglectful acts constitute evidence of PM (e.g., seclusion; withholding medical attention; denying or limiting food, water, shelter, and other basic needs).

Consequences of PM Experienced in Childhood
     Mills et al. (2013) and Greenfield and Marks (2010) noted PM experiences in early childhood might manifest in physical growth delays and require access to long-term care throughout a victim’s lifetime. Children who have experienced PM may suffer from behaviors that delay or prevent meeting developmental milestones, achieving academic success in school, engaging in healthy peer relationships, maintaining physical health and well-being, forming appropriate sexual relationships as adults, and enjoying satisfying daily living experiences (Glaser, 2002; Maguire et al., 2015). Neurological and cognitive effects of PM in childhood, including abnormalities in the amygdala and hippocampus, continue to impact children as they transition into adulthood (Tyrka et al., 2013). Brown et al. (2019) found that adults who reported experiences of CM had higher rates of negative responses to everyday stress, a larger constellation of unproductive coping skills, and earlier mortality rates (see also Felitti et al., 1998). Furthermore, adults with childhood PM experiences reported higher rates of substance abuse than control groups (Felitti et al., 1998).

     Trauma-Related Symptomology. Researchers speculate that children exposed to maltreatment and crises, especially those that come without warning, are at greater risk for developing a host of trauma-related symptoms (Spinazzola et al., 2014). Developmentally, children lack the ability to process and contextualize their lived experiences. Van Harmelen et al. (2010) discovered that adults who experienced child PM had decreased prefrontal cortex mass compared to those without evidence of PM. Similarly, Field et al. (2017) found those unable to process traumatic events produced higher levels of stress hormones (i.e., cortisol, epinephrine, norepinephrine); these hormones are released through the hypothalamic-pituitary-adrenal (HPA) and sympathetic-adrenal-medullary (SAM) systems. Some researchers speculate that elevated levels of certain hormones and hyperactive regions within the brain signal the body’s biological attempt to reduce the negative impact of PM through the fight-flight-freeze response (Porges, 2011; van der Kolk, 2014).

Purpose of Present Study
     At the time of this research, there were few formal measures using child self-report to assess how children experience PM. We developed the PMI as an initial quantifiable measure of child PM for children and adolescents between the ages of 8 and 17, as modeled by Tonmyr and colleagues (2011). The PMI was developed in multiple stages, including (a) a review of the literature, (b) a content validity survey with subject matter experts (SMEs), (c) a pilot study (N = 21), and (d) a large sample study (N = 166). An additional instrument, the Trauma Symptom Checklist for Children Screening Form (TSCC-SF; Briere & Wherry, 2016), was utilized in conjunction with the PMI to explore occurrences of general trauma symptoms among respondents. The following four research questions were investigated:

  1. How do respondent demographics relate to PM?
  2. What is the rate of PM experience with respondents who are presently involved in an open CM case?
  3. What is the co-occurrence of PM among various forms of CM allegations?
  4. What is the relationship between the frequency of reported PM experiences and the frequency of general trauma symptoms?

Method

Study 1: PMI Item Development and Pilot
     Following the steps of scale construction (Heppner et al., 2016), the initial version of the PMI drew on current literature and on definitions from facilities nationwide that provide care for children who have experienced maltreatment and who are engaged with court systems, mental health agencies, or social services. Our lead researcher, Alison M. Boughn, developed a list of 20 items using category identifications from Glaser (2002) and APSAC (2019). Items were also created using Slep et al.’s (2015) proposed inclusion language for Diagnostic and Statistical Manual of Mental Disorders (DSM-5) diagnostic codes (APA, 2013) and definition criteria from the International Classification of Diseases, 11th edition (ICD-11). Boughn and the second researcher, Daniel A. DeCino, reviewed the items for consistency with the research literature and removed four redundant items. The final 16 items were reevaluated for readability for future child respondents using a web-based, age range–appropriate readability checker (Readable, n.d.) and were then presented to local SMEs in a content validity survey to determine which items would be considered essential for children to report as part of a child PM assessment.

Expert Validation
     A multidisciplinary team (MDT) serving as SMEs completed an online content validity survey created by Boughn. The survey was distributed by a Child Advocacy Center (CAC) manager to the MDT. Boughn used the survey results to validate the PMI’s item content relevance. Twenty respondents from the following professions completed the survey: mental health (n = 6), social services (n = 6), law enforcement (n = 3), and legal services (n = 5). The content validity ratio (CVR) was then calculated for the 16 proposed items.

     Results. The content validity survey used a 3-point Likert-type scale: 0 = not necessary; 1 = useful, but not essential; and 2 = essential. A minimum of 15 of the 20 SMEs (75% of the sample), or a CVR ≥ .5, was required to deem an item essential (Lawshe, 1975). The significance level for each item’s content validity was set at α = .05 (Ayre & Scally, 2014). After conducting Lawshe’s (1975) CVR and applying the ratio correction developed by Ayre and Scally (2014), it was determined that eight items were essential: Item 2 (CVR = .7), Item 3 (CVR = .9), Item 4 (CVR = .6), Item 6 (CVR = .6), Item 7 (CVR = .8), Item 10 (CVR = .6), Item 15 (CVR = .5), and Item 16 (CVR = .6).
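
For readers who wish to verify these ratios, Lawshe’s (1975) CVR is straightforward to compute: CVR = (ne − N/2) / (N/2), where ne is the number of panelists rating an item essential and N is the total number of panelists. The short Python sketch below is illustrative only; the function name is ours and does not belong to any published tool.

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    # Lawshe (1975): CVR = (n_e - N/2) / (N/2)
    half = n_panelists / 2
    return (n_essential - half) / half

# 15 of 20 SMEs rating an item essential yields the study's minimum CVR.
print(content_validity_ratio(15, 20))  # 0.5
print(content_validity_ratio(18, 20))  # 0.8, consistent with Item 7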

Upon further evaluation, and in an effort to ensure that the PMI items served the needs of interdisciplinary professionals, some items were rated essential for specific professions; these items still met the CVR requirements (CVR = 1) for the smaller within-group sample. These four items were unanimously endorsed by SMEs for a particular profession as essential: Item 5 (CVR Social Services = 1; CVR Law Enforcement = 1), Item 11 (CVR Law Enforcement = 1), Item 13 (CVR Law Enforcement = 1), and Item 14 (CVR Law Enforcement = 1).

Finally, an evaluation of the remaining four items was completed to explore whether those items were useful, but not essential. Using the minimum CVR ≥ .5, it was determined that these items should remain on the PMI: Item 1 (CVR = .9), Item 8 (CVR = .8), Item 9 (CVR = .9), and Item 12 (CVR = .9). Siegle’s (2017) Reliability Calculator yielded a Cronbach’s α of 0.83 for the PMI, indicating adequate internal consistency. Additionally, a split-half (odd–even) correlation with the Spearman-Brown adjustment yielded a coefficient of 0.88, indicating high reliability (Siegle, 2017).
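
As an illustration of these two reliability estimates, the following Python sketch computes Cronbach’s α and a Spearman-Brown adjusted odd–even split-half coefficient from a respondents-by-items score matrix. It is a generic sketch of the standard formulas, not a reproduction of Siegle’s (2017) calculator, and the simulated data are for demonstration only.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # scores: respondents x items matrix of item-level ratings
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def split_half_spearman_brown(scores: np.ndarray) -> float:
    # Correlate odd- and even-item half scores, then apply
    # the Spearman-Brown prophecy formula: 2r / (1 + r).
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Demonstration: 20 raters, 16 items driven by one shared trait plus noise.
rng = np.random.default_rng(42)
trait = rng.normal(size=(20, 1))
demo = trait + rng.normal(scale=1.0, size=(20, 16))
print(cronbach_alpha(demo), split_half_spearman_brown(demo))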

Pilot Summary
     The pilot study was designed to ensure the proposed research protocol could be implemented effectively following each respondent’s appointment at the CAC research site, and that research procedures did not interfere with typical appointments and standard procedures at the CAC. Participation in the PMI pilot was voluntary, and no compensation was provided to respondents.

     Sample. The study used a purposeful sample of children at a local, nationally accredited CAC in the Midwest; both the child and the child’s legal guardian agreed to participate. Because PM was expected to co-occur with other forms of abuse, this population was selected to help create an understanding of how PM is experienced specifically in co-occurring cases of maltreatment. Respondents were children who (a) had an open CM case with social services and/or law enforcement, (b) were scheduled for an appointment at the CAC, and (c) were between the ages of 8 and 17.

     Measures. The two measures implemented in this study were the developing PMI and the TSCC-SF. At the time of data collection, CAC staff implemented the TSCC-SF as a screening tool for referral services during CAC victim appointments. To ensure the research process did not interfere with chain-of-custody procedures, collected investigative testimony, or obtained physical evidence, the PMI was administered only after all normally scheduled CAC procedures were completed during appointments.

     PMI. The current version of the PMI is a self-report measure that consists of 16 items on a 4-point Likert-type scale that mirrors the language of the TSCC-SF (0 = never to 3 = almost all the time). Respondents typically needed about 5 minutes to complete the PMI. A sample PMI item is: “How often have you been told or made to feel like you are not important or unlovable?” The full instrument is not provided in this publication to ensure the PMI is not misused, as refinement of the PMI is still in progress.

     TSCC-SF. In addition to the PMI, Boughn gathered data from the TSCC-SF (Briere & Wherry, 2016) because of its widespread use among clinicians to efficiently assess for sexual concerns, suicidal ideation frequency, and general trauma symptoms such as post-traumatic stress, depression, anger, disassociation, and anxiety (Wherry et al., 2013). The TSCC-SF measures a respondent’s frequency of perceived experiences and has been successfully implemented with children as young as 8 years old (Briere, 1996). The 20-item form uses a 4-point Likert-type scale (0 = never to 3 = almost all the time) composed of general trauma and sexual concerns subscales. The TSCC-SF has demonstrated high internal consistency and alpha values in the good to excellent ranges; it also has high intercorrelations between sexual concerns and other general trauma scales (Wherry & Dunlop, 2018).

     Procedures. Respondents were recruited during their scheduled CAC appointment time. Each investigating agency (law enforcement or social services) scheduled a CAC appointment in accordance with an open maltreatment case. At the beginning of each respondent’s appointment, Boughn provided an introduction and description of the study, including the IRB approvals from the hospital and university and an explanation of the informed consent, protected health information (PHI) authorization, and assent forms. Respondents aged 12 and older were asked to read and review the informed consent document with their legal guardian; respondents aged 8 to 11 were provided an additional assent document to read. Respondents were informed they could stop the study at any time. After each respondent and legal guardian consented, respondents proceeded with their CAC appointment.

Typical CAC appointments consisted of a forensic interview, at times a medical exam, and administration of the TSCC-SF to determine referral needs. After the TSCC-SF was completed, respondents were verbally reminded of the study and asked whether they were still willing to participate by completing the PMI. Willing respondents then completed the PMI, after which Boughn asked whether they were comfortable leaving the assessment room. In the event a respondent voiced additional concerns of maltreatment during the PMI administration, Boughn made a direct report to the respondent’s investigator (i.e., the law enforcement officer or social worker assigned to the case).

Boughn accessed each respondent’s completed TSCC-SF from their electronic health record in accordance with the PHI authorization and consent after the respondent’s appointment. Data from the TSCC-SF allowed Boughn to gather information related to sexual concerns, suicidal ideation, and trauma symptomology. Data gathered from the TSCC-SF were examined alongside each respondent’s PMI responses.

     Results. Respondents were 21 children (15 female, 6 male) ranging in age from 8 to 17 years, with a median age of 12 years. Respondents described themselves as White (47.6%), Biracial (14.2%), Multiracial (14.2%), American Indian/Alaskan Native (10.0%), Black (10.0%), and Hispanic/Latino (5.0%). Respondents’ open CM cases involved allegations of sexual abuse (86.0%), physical abuse (10.0%), and neglect (5.0%).

Every respondent’s responses were included in the analyses to ensure all maltreatment situations were considered. The reliability of the PMI observed in the pilot sample (N = 21) demonstrated high internal consistency with all 16 initial items (α = .88). The average total score on the PMI in the pilot was 13.29, with respondents’ scores ranging from 1 to 30. A Pearson correlation indicated total scores for the PMI and General Trauma Scale scores (reported on the TSCC-SF) were significantly correlated (r = .517, p < .05).

Study 2: Full Testing of the PMI
     The next phase of research proceeded with the collection of a larger sample (N = 166) to explore the instrument’s construct validity and internal reliability (Siyez et al., 2020). The procedures for data collection and storage followed in the pilot study were also implemented with the larger sample. Boughn kept a record of respondents who declined to participate or who were unable to participate because of cognitive functioning level, emergency situations, or emotional dysregulation concerns.

Sample
     Based on a sample size calculation performed using the Raosoft (2004) calculator, the large sample study required a minimum of 166 respondents (Ali, 2012; Heppner et al., 2016). The calculation assumed a 10% margin of error and a 99% confidence level to ensure the number of respondents could adequately represent the population served by the CAC, based on data from the CM report (DHHS, 2018). Large sample data were gathered between September 2018 and May 2019.
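
The required minimum of 166 can be reproduced with the standard sample size formula for estimating a proportion, n = z²p(1 − p) / e², with p = .5 for maximum variability. The Python sketch below is our illustration of that textbook formula, not Raosoft’s code; with a 99% confidence level (z ≈ 2.576) and a 10% margin of error, it returns 166.

import math
from scipy.stats import norm

def sample_size(confidence: float = 0.99, margin: float = 0.10, p: float = 0.5) -> int:
    z = norm.ppf(1 - (1 - confidence) / 2)  # ~2.576 for 99% confidence
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())  # 166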

Measures
     The PMI and TSCC-SF were also employed in Study 2 because of their successful implementation in the pilot. Administration of the TSCC-SF ensured a normed and standardized measure could aid in providing context to the information gathered on the PMI. No changes were made to the PMI or TSCC-SF measures following the review of procedures and analyses in the pilot.

Procedures
     Recruitment, data collection, and data analysis processes mirrored those of the pilot study. Voluntary respondents were recruited at the CAC during their scheduled appointments. Respondents completed an informed consent, child assent, PHI authorization form, TSCC-SF, and PMI. Following data collection, Boughn completed data entry from the electronic health record to de-identify and analyze the results.

Results

Demographics
     All data were analyzed using the Statistical Package for the Social Sciences version 24 (SPSS-24). Initial data evaluation consisted of exploration of descriptive statistics, including demographic and criteria-based information related to respondents’ identities and case details. Respondents were between 8 and 17 years of age (M = 12.39) and primarily female (73.5%, n = 122), followed by male (25.3%, n = 42); two respondents (1.2%, n = 2) reported both male and female gender identities. Racial identities were collapsed into two categories: White (59.6%, n = 99) and Racially Diverse (40.4%, n = 67). The presenting maltreatment concerns and the child’s relationship to the offender are outlined in Table 1 and Table 2, respectively.

Reliability and Validity of the PMI
     The reliability of the PMI observed in its implementation in Study 2 (N = 166) showed even better internal consistency across all 16 items (α = .91) than observed in the pilot. Split-half reliability, calculated with the Spearman-Brown adjustment (Warner, 2013), was .92, indicating high internal reliability. Internal consistencies were also calculated by gender identity and age (see Table 3).

 

Table 1

Child Maltreatment Allegation by Type (N = 166)

Allegation f Rel f cf %
Sexual Abuse 113 0.68 166 68.07
Physical Abuse  29 0.17 53 17.47
Neglect  14 0.08 24   8.43
Multiple Allegations    6 0.04 10   3.61
Witness to Violence    3 0.02   4   1.81
Kidnapping    1 0.01   1   0.60

Note. Allegation type reported at initial appointment scheduling

 

Table 2

Identified Offender by Relationship to Victim (N = 166)

Offender Relationship f Rel f cf %
Other Known Adult 60 0.36 166 36.14
Parent 48 0.29 106 28.92
Other Known Child (≤ age 15 years) 15 0.09  58   9.04
Sibling-Child (≤ age 15 years) 10 0.06  43   6.02
Unknown Adult   9 0.05  33     5.42
Step-Parent   8 0.05  24   4.82
Multiple Offenders   6 0.04  16   3.61
Grandparent   6 0.04  10   3.61
Sibling-Adult (≥ age 16 years)   3 0.02   4   1.81
Unknown Child (≤ age 15 years)   1 0.01   1   0.60

Note. Respondent knew the offender (n = 156); respondent did not know offender (n = 10)

 

Table 3

Internal Consistency Coefficients (α) by Gender Identity and Age (N = 166)

Gender n α M SD
 Female 122 0.90 13.2   9.1
 Male   42 0.94 13.5 11.0
 Male–Female    2 0.26   8.5  2.5
Age
 8–12 83 0.92 12.75 10.06
 13–17 83 0.90 13.69   9.01

Note. SD = Standard Deviation; M = Mean

 

Respondents’ Demographic Characteristics and PM Experiences
For Research Question (RQ) 1 and RQ2, descriptive data were used to generate frequencies and to examine the impact of demographic characteristics on average PMI scores. To explore RQ1 further, one-way ANOVAs were completed for age, gender, racial identity, allegation type, and offender relationship. No significant differences in PMI scores were found across demographic variables. On average, respondents reported a total frequency score of 13.5 (M = 13.5, SD = 9.5) on the PMI. Eight respondents (5%) endorsed no PM experiences, while 95% (n = 158) endorsed at least one.

Co-Occurrence of PM With Other Forms of Maltreatment
     For RQ3, frequency and descriptive data were generated, revealing average rates of reported PM by maltreatment type. Varying sample representations were discovered for each form of maltreatment (see Table 4). Clear evidence was found that PM co-occurs with each other form of maltreatment; however, how each form of maltreatment interacts with PM is currently unclear, given the multiple dimensions of each maltreatment case including, but not limited to, severity, frequency, and offender and victim characteristics.

 

Table 4

Descriptive and Frequency Data for Co-Occurrence of PM (N = 166)

Allegation n M SD 95% CI
Sexual Abuse 113 13.04   9.01 [11.37, 14.72]
Physical Abuse   29 12.45 10.53   [8.44, 16.45]
Neglect   14 14.57 12.16   [7.55, 21.60]
Multiple Allegations    5 17.40   8.88   [6.38, 28.42]
Witness to Violence    3   7.67   5.03  [–4.84, 20.17]
Kidnapping    1 n/a n/a Missing

Note. CI = Confidence Interval; SD = Standard Deviation; M = Mean; n/a = not applicable

 

PM Frequency and General Trauma Symptoms
     For RQ4, a Pearson correlation was used to examine the relationship between frequency scores on the PMI and the TSCC-SF. There was a statistically significant relationship between the PMI and the total frequency of general trauma symptoms on the TSCC-SF, r(164) = .78, p < .01, r² = .61 (Sullivan & Feinn, 2012). Cohen’s d, calculated from the item means and the pooled standard deviation, indicated only a small difference in mean item-level frequencies between the two measures (d = .15; see Figure 1).
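
The statistics in this paragraph can be illustrated with a short Python sketch. The arrays below are simulated stand-ins for the de-identified PMI and TSCC-SF totals, and the Cohen’s d helper implements the standard pooled-standard-deviation formula; nothing here reproduces the study’s actual data or analysis scripts.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pmi = rng.normal(13.5, 9.5, 166)            # simulated PMI total scores
tscc = 0.8 * pmi + rng.normal(0, 4.5, 166)  # simulated general trauma scores

r, p = stats.pearsonr(pmi, tscc)
print(r, r ** 2)  # correlation and shared variance (r-squared)

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    # Standardized mean difference using a pooled standard deviation.
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

print(cohens_d(pmi, tscc))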

 

Figure 1

Correlation Between PMI and TSCC-SF General Trauma Subscale

Note. Scores were endorsed by respondents’ self-reports.

 

Child Suicidal Ideation Reports and the PMI
     Following a review of the findings of Thompson et al. (2005) and Wherry et al. (2013) that children who reported experiencing CM also experienced suicidal ideation, Boughn performed an additional one-way ANOVA examining the effect of reported suicidal ideation on PMI total scores. A significant effect, F(1, 164) = 49.52, p < .01, η² = .23, was found between respondents’ PMI scores and thoughts of suicide. Respondents who did not report thoughts of suicide (59.0%, n = 98) indicated lower rates of PM (M = 9.37, SD = 7.97) compared to children who did report thoughts of suicide (41.0%, n = 68, M = 18.77, SD = 9.12). A preliminary review of this finding underscores the severity of PM’s impact on child victims.
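
A sketch of this analysis appears below. The group sizes, means, and standard deviations mirror those reported above, but the scores themselves are simulated; the eta-squared computation (between-group sum of squares divided by total sum of squares) follows the standard formula rather than reproducing the study’s SPSS output.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
no_ideation = rng.normal(9.37, 7.97, 98)  # simulated PMI totals, no ideation
ideation = rng.normal(18.77, 9.12, 68)    # simulated PMI totals, ideation reported

f_stat, p_val = stats.f_oneway(no_ideation, ideation)

scores = np.concatenate([no_ideation, ideation])
grand_mean = scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in (no_ideation, ideation))
ss_total = ((scores - grand_mean) ** 2).sum()
print(f_stat, p_val, ss_between / ss_total)  # F, p, and eta-squared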

Discussion

This study was designed with the aim of developing a tool to support accurate identification of PM among children and adolescents. Findings from its first large-scale implementation provide a foundational view of the occurrence of PM in terms of demographic characteristics, the comorbidity of PM with other forms of abuse, and the relationship between PM and trauma. The analyses yielded both expected and unexpected results relative to the extant research.

PM and Demographic Characteristics
Race
     There was no significant effect of racial demographics on PM scores. The respondent sample closely reflected the geographical area’s known racial demographics at the time of the study: a population approaching 80% White, with each other known racial group below 5% (U.S. Census Bureau, 2020). Although researchers (Dakil et al., 2011) anticipated that children identifying as racial minorities would be represented in CM reports, evidence from this study potentially reveals a larger than expected reporting gap for minority-race populations (Bernard & Harris, 2018; Font & Maguire-Jack, 2015). This suggests that there may be additional, unidentified barriers influencing the reporting of maltreatment among minority-race populations.

Gender
     Gender representation in the data was uneven, consistent with prior research (Sivagurunathan et al., 2019). Respondents who self-identified with both male and female gender identities (1.2%) and as male (25.3%) were represented less frequently than female respondents (73.5%). This is not inherently a limitation of this study, as research shows that just 10% of males in the United States report their sexual abuse (Sivagurunathan et al., 2019). People who identify as male may face harmful cultural messages that reinforce negative stigma for victims of abuse, causing increased feelings of weakness or vulnerability (Alaggia & Mishna, 2014). This finding may support claims that male trauma survivors feel stigmatized and report their experiences less frequently (Easton, 2012).

Additionally, children who identify outside traditional gender binary norms and definitions need more access to inclusive representation on screening assessments. Assessments like the TSCC-SF may be using antiquated gender- or biological sex–normed checkboxes, which leave certain groups underrepresented in research studies (Neukrug & Fawcett, 2015). These practices may produce inaccurate findings, inadvertently reinforce discriminatory expectations, and generate inaccurate referrals. Non-binary youth encounter barriers related to coming out, social violence, lack of peer and/or adult acceptance, discrimination, isolation, higher rates of suicide, and lack of representation in mainstream society, all of which may compound the difficulty of accessing supports in daily life (Bialer & McIntosh, 2016; Zimman, 2009). In this study, non-binary respondents, specifically those who reported both male and female gender identities, were represented; this warrants further exploration of the barriers non-binary youth face and of their experiences with child PM (Bos et al., 2019).

Offender Relationships
     Mean PMI scores did not differ significantly between respondents with known offenders (M = 13.35) and those with unknown offenders (M = 11.2). In this study, 94% of respondents (n = 156) knew their offender. This finding is consistent with previous research showing that although child abduction and stranger danger are real phenomena, children are more likely to experience CM within relationships with familiar individuals (Walsh & Brandon, 2011).

Co-Occurrence of PM With Other Abuse
     Only eight respondents (5%) endorsed no PM experiences; the average total PM frequency score in this study was 13.5 out of a possible 48, underscoring how pervasive PM was in this sample. In this study, we found evidence that PM is a co-occurring experience for children with open maltreatment cases, yet clinicians still lack formal, valid assessments to identify PM on its own. Our findings support the National Children’s Alliance’s (NCA; 2016) call for clinicians to follow practice standards, in accordance with state and national guidelines, for mandatory reporting of CM concerns and for determining whether PM and other forms of maltreatment may be present for child victims seeking services.

Comorbidity of PM and Trauma
     The relationship between PM-related experiences on the PMI and general trauma symptoms on the TSCC-SF warrants discussion. The PMI demonstrated a significant relationship with the TSCC-SF general trauma subscale (Briere & Wherry, 2016). More than half (61%) of the variance on the PMI was shared with general trauma symptom scores, suggesting that higher rates of PM experiences may be associated with increased trauma-related symptoms. Relatedly, previous researchers have found that adverse childhood experiences and signs of trauma-related symptoms lead to serious mental health diagnoses, early mortality, and/or significant biological health risks in children (Tyrka et al., 2013; Vachon et al., 2015; Zimmerman & Mercy, 2010). Further exploration of if and how PM influences other trauma-related symptoms across the life span would expand upon the results of this study.

Suicidal Ideation
     Finally, our data revealed a significant effect of endorsed suicidal ideation on PMI total scores. Suicidal ideation status accounted for 23% of the variance in PMI scores; children who reported thoughts of suicide (41%, n = 68) endorsed more PM experiences than those who did not (59%, n = 98). This finding is consistent with prior research exploring children’s experiences with maltreatment and suicidal thoughts (Thompson et al., 2005; Wherry et al., 2013).

Limitations
     This study has several limitations. First, because the PMI was developed using national definitions, some regional and localized nuances were not considered. Second, data collected for this study were from a single Midwest CAC; thus, the data are limited in geographic generalizability. Third, the majority of respondents were White, and a more diverse sample would have been more representative of the region in which data were collected. Fourth, 99% of respondents identified as either male or female, so the sample may underrepresent non-binary or gender-fluid youth. Fifth, this study relied heavily on quantitative data, which limited the ability to analyze each individual’s experiences with PM as they might describe them from their unique perspectives.

Implications for Research and Practice
     The results of this study provide several areas for future research. While the PMI demonstrated good internal consistency across all items (α = .91), more research with diverse populations across the United States is needed. Research from other geographical locations may demonstrate how reporting patterns for PM interact with ethnicity, culture, and elements of social expectations (Spinazzola et al., 2014).

The initial results of this study indicate the PMI may be a useful tool for children to report PM experiences in CAC settings; however, future research at other CACs and similar treatment facilities is needed to determine the PMI’s true utility and scalability. Future analysis (i.e., exploratory factor analysis and confirmatory factor analysis) of the PMI may also identify factors and help refine the instrument.
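
As one possible starting point for that factor-analytic work, the sketch below uses the open-source factor_analyzer Python package to run an exploratory factor analysis on a hypothetical file of the 16 PMI item responses. The file name and the trial three-factor solution are illustrative assumptions, not properties of the PMI.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

items = pd.read_csv("pmi_item_responses.csv")  # hypothetical 16-item data

_, kmo_total = calculate_kmo(items)  # sampling adequacy; > .70 is desirable
print(kmo_total)

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")  # trial solution
fa.fit(items)
print(fa.loadings_)          # item-factor loading pattern
print(fa.get_eigenvalues())  # eigenvalues for scree/Kaiser checks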

More research with the PMI can expand researchers’ knowledge of PM and services needed to help children. Working with other CACs, child protection professionals, and the NCA may help bridge current gaps in interdisciplinary assessment and care and establish a stable and comprehensive understanding of PM (López et al., 2015). Furthermore, understanding how CACs are equipped to identify and handle PM cases may provide useful insights to help improve services for children in need. Although some CACs may have a variety of professionals working in specific roles, some CACs may be understaffed, causing staff to take on multiple and overlapping roles. It is important to understand if and how different combinations of trained professionals influence children reporting PM (Hart & Glaser, 2011; NCA, 2016).

More research with the PMI is needed for refinement and to ensure the instrument is not misused. Releasing the PMI at this stage to clinicians and researchers without a fully developed assessment manual may lead to inappropriate or ineffective administration of the PMI and potentially unethical practice that could place children at risk. Future research and refinement of the PMI may provide clinicians and researchers a reliable and valid tool that is grounded in consistent theory and practice.

Conclusion

The PMI was developed to assess child PM and offers researchers and clinicians useful findings. Consistent with supporting research (Arslan, 2017; Bernstein et al., 2013; Raparia et al., 2016), child PM is a serious and often harmful combination of experiences that requires professional intervention (APSAC, 2019). For children reporting PM experiences, the PMI may help mental health and other care providers determine which services are needed. Findings from this study suggest differences in PM across demographic variables are minimal. Overall PMI scores were correlated with the general trauma subscale on the TSCC-SF, and the PMI revealed higher rates of PM among children experiencing suicidal ideation. These findings represent the beginning of a measure designed to illustrate the depth and frequency of PM for children. With the PMI, early PM intervention becomes possible for a once invisible form of maltreatment.

Conflict of Interest and Funding Disclosure
Data collected and content shared in this study
were part of a dissertation study, which was
awarded the 2020 Dissertation Excellence Award
by the National Board for Certified Counselors.
The Psychological Maltreatment Inventory (PMI)
items were not released in this publication to protect
victims of child maltreatment and to ensure future
publications can address comprehensive revisions
made to the PMI.

 

References

Ahern, E. C., Hershkowitz, I., Lamb, M. E., Blasbalg, U., & Winstanley, A. (2014). Support and reluctance in the pre-substantive phase of alleged child abuse victim investigative interviews: Revised versus standard NICHD protocols. Behavioral Sciences & the Law, 32(6), 762–774. https://doi.org/10.1002/bsl.2149

Alaggia, R., & Mishna, F. (2014). Self psychology and male child sexual abuse: Healing relational betrayal. Clinical Social Work Journal, 42(1), 41–48. https://doi.org/10.1007/s10615-013-0453-2

Ali, S. A. (2012). Sample size calculation and sampling techniques. Journal of the Pakistan Medical Association, 62(6), 624–626. https://jpma.org.pk/PdfDownload/3482

American Professional Society on the Abuse of Children. (2019). APSAC practice guidelines: The investigation and determination of suspected psychological maltreatment of children and adolescents. https://bit.ly/3jI7AhJ

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.).

Arslan, G. (2017). Psychological maltreatment, coping strategies, and mental health problems: A brief and effective measure of psychological maltreatment in adolescents. Child Abuse & Neglect, 68, 96–106. https://doi.org/10.1016/j.chiabu.2017.03.023

Ayre, C., & Scally, A. J. (2014). Critical values for Lawshe’s content validity ratio: Revisiting the original methods of calculation. Measurement and Evaluation in Counseling and Development, 47(1), 79–86. https://doi.org/10.1177%2F0748175613513808

Bernard, C., & Harris, P. (2018). Serious case reviews: The lived experience of Black children. Child & Family Social Work, 24(2), 256–263. https://doi.org/10.1111/cfs.12610

Bernstein, R. E., Measelle, J. R., Laurent, H. K., Musser, E. D., & Ablow, J. C. (2013). Sticks and stones may break my bones but words relate to adult physiology? Child abuse experience and women’s sympathetic nervous system response while self-reporting trauma. Journal of Aggression, Maltreatment & Trauma, 22(10), 1117–1136. https://doi.org/10.1080/10926771.2013.850138

Bialer, P. A., & McIntosh, C. A. (2016). Discrimination, stigma, and hate: The impact on the mental health and well-being of LGBT people. Journal of Gay & Lesbian Mental Health, 20(4), 297–298. https://doi.org/10.1080/19359705.2016.1211887

Bos, H., de Haas, S., & Kuyper, L. (2019). Lesbian, gay, and bisexual adults: Childhood gender nonconformity, childhood trauma, and sexual victimization. Journal of Interpersonal Violence, 34(3), 496–515. https://doi.org/10.1177%2F0886260516641285

Briere, J. (1996). Trauma Symptom Checklist for Children (TSCC), professional manual. Psychological Assessment Resources.

Briere, J., & Wherry, J. (2016). Development and validation of the TSCC Screening Form (TSCC-SF) and TSCYC Screening Form (TSCYC-SF). Psychological Assessment Resources.

Brown, S. M., Bender, K., Orsi, R., McCrae, J. S., Phillips, J. D., & Rienks, S. (2019). Adverse childhood experiences and their relationship to complex health profiles among child welfare–involved children: A classification and regression tree analysis. Health Services Research, 54(4), 902–911. https://doi.org/10.1111/1475-6773.13166

Centers for Disease Control. (2012). Child abuse and neglect cost the United States $124 billion [Press release]. https://bit.ly/3jYbpAF

Dakil, S. R., Cox, M., Lin, H., & Flores, G. (2011). Racial and ethnic disparities in physical abuse reporting and Child Protective Services interventions in the United States. Journal of the National Medical Association, 103(9–10), 926–931. https://doi.org/10.1016/S0027-9684(15)30449-1

Easton, S. D. (2012). Disclosure of child sexual abuse among adult male survivors. Clinical Social Work Journal, 41, 344–355. https://doi.org/10.1007/s10615-012-0420-3

Fang, X., Brown, D. S., Florence, C. S., & Mercy, J. A. (2012). The economic burden of child maltreatment in the United States and implications for prevention. Child Abuse & Neglect, 36(2), 156–165. https://doi.org/10.1016/j.chiabu.2011.10.006

Felitti, V. J., Anda, R. F., Nordenberg, D., Williamson, D. F., Spitz, A. M., Edwards, V., Koss, M. P., & Marks, J. S. (1998). Relationship of childhood abuse and household dysfunction to many of the leading causes of death in adults: The Adverse Childhood Experiences (ACE) study. American Journal of Preventive Medicine, 14(4), 245–258. https://doi.org/10.1016/S0749-3797(98)00017-8

Field, T. A., Jones, L. K., & Russell-Chapin, L. A. (Eds.). (2017). Neurocounseling: Brain-based clinical approaches. American Counseling Association.

Font, S. A., & Maguire-Jack, K. (2015). Decision-making in Child Protective Services: Influences at multiple levels of the social ecology. Child Abuse & Neglect, 47, 70–82. https://doi.org/10.1016/j.chiabu.2015.02.005

Glaser, D. (2002). Emotional abuse and neglect (psychological maltreatment): A conceptual framework. Child Abuse & Neglect, 26(6–7), 697–714. https://doi.org/10.1016/S0145-2134(02)00342-3

Greenfield, E. A., & Marks, N. F. (2010). Identifying experiences of physical and psychological violence in childhood that jeopardize mental health in adulthood. Child Abuse & Neglect, 34(3), 161–171. https://doi.org/10.1016/j.chiabu.2009.08.012

Hart, S. N., & Glaser, D. (2011). Psychological maltreatment – Maltreatment of the mind: A catalyst for advancing child protection toward proactive primary prevention and promotion of personal well-being. Child Abuse & Neglect, 35(10), 758–766. https://doi.org/10.1016/j.chiabu.2011.06.002

Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., & Wang, K. T. (2016). Research design in counseling (4th ed.). Cengage.

Iwaniec, D. (2006). The emotionally abused and neglected child: Identification, assessment and intervention: A practice handbook (2nd ed.). Wiley.

Klika, J. B., & Conte, J. R. (Eds.). (2017). The APSAC handbook on child maltreatment (4th ed.). SAGE.

Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

López, M., Fluke, J. D., Benbenishty, R., & Knorth, E. J. (2015). Commentary on decision-making and judgments in child maltreatment prevention and response: An overview. Child Abuse & Neglect, 49, 1–11. https://doi.org/10.1016/j.chiabu.2015.08.013

Maguire, S. A., Williams, B., Naughton, A. M., Cowley, L. E., Tempest, V., Mann, M. K., Teague, M., & Kemp, A. M. (2015). A systematic review of the emotional, behavioural and cognitive features exhibited by school-aged children experiencing neglect or emotional abuse. Child: Care, Health and Development, 41(5), 641–653. https://doi.org/10.1111/cch.12227

Marshall, N. A. (2012). A clinician’s guide to recognizing and reporting parental psychological maltreatment of children. Professional Psychology: Research and Practice, 43(2), 73–79. https://doi.org/10.1037/a0026677

Mills, R., Scott, J., Alati, R., O’Callaghan, M., Najman, J. M., & Strathearn, L. (2013). Child maltreatment and adolescent mental health problems in a large birth cohort. Child Abuse & Neglect, 37(5), 292–302. https://doi.org/10.1016/j.chiabu.2012.11.008

National Children’s Alliance. (2016). Putting standards into practice: A guide to implementing the 2017 standards for accredited members (revised 2016). http://www.nationalchildrensalliance.org/wp-content/uploads/2015/06/NCA2017-StandardsIntoPractice-web.pdf

Neukrug, E. S., & Fawcett, R. C. (2015). The essentials of testing and assessment: A practical guide for counselors, social workers, and psychologists, enhanced (3rd ed.). Cengage.

Peterson, C., Florence, C., & Klevens, J. (2018). The economic burden of child maltreatment in the United States, 2015. Child Abuse & Neglect, 86, 178–183.
https://doi.org/10.1016/j.chiabu.2018.09.018

Porges, S. W. (2011). The polyvagal theory: Neurophysiological foundations of emotions, attachment, communication, and self-regulation. W. W. Norton.

Raosoft. (2004). Sample size calculator. http://www.raosoft.com/samplesize.html

Raparia, E., Coplan, J. D., Abdallah, C. G., Hof, P. R., Mao, X., Mathew, S. J., & Shungu, D. C. (2016). Impact of childhood emotional abuse on neocortical neurometabolites and complex emotional processing in patients with generalized anxiety disorder. Journal of Affective Disorders, 190, 414–423. https://doi.org/10.1016/j.jad.2015.09.019

Readable. (n.d.). https://readable.com

Siegle, R. (2017). Educational research basics: Excel spreadsheet to calculate instrument reliability estimates. https://researchbasics.education.uconn.edu/excel-spreadsheet-to-calculate-instrument-reliability-estimates

Sivagurunathan, M., Orchard, T., & Evans, M. (2019). Barriers to utilization of mental health services amongst male child sexual abuse survivors: Service providers’ perspective. Journal of Child Sexual Abuse, 28(7), 819–839. https://doi.org/10.1080/10538712.2019.1610823

Siyez, D. M., Esen, E., Seymenler, S., & Öztürk, B. (2020). Development of wellness scale for emerging adults: Validity and reliability study. Current Psychology.
https://doi.org/10.1007/s12144-020-00672-w

Slep, A. M. S., Heyman, R. E., & Foran, H. M. (2015). Child maltreatment in DSM-5 and ICD-11. Family Process, 54(1), 17–32. https://doi.org/10.1111/famp.12131

Spinazzola, J., Hodgdon, H., Liang, L.-J., Ford, J. D., Layne, C. M., Pynoos, R., Briggs, E. C., Stolbach, B., & Kisiel, C. (2014). Unseen wounds: The contribution of psychological maltreatment to child and adolescent mental health and risk outcomes. Psychological Trauma: Theory, Research, Practice, and Policy, 6(Suppl 1), S18–S28. https://doi.org/10.1037/a0037766

Sullivan, G. M., & Feinn, R. (2012). Using effect size—or why the p value is not enough. Journal of Graduate Medical Education, 4(3), 279–282. https://doi.org/10.4300/JGME-D-12-00156.1

Thompson, R., Briggs, E., English, D. J., Dubowitz, H., Lee, L.-C., Brody, K., Everson, M. D., & Hunter, W. M. (2005). Suicidal ideation among 8-year-olds who are maltreated and at risk: Findings from the LONGSCAN studies. Child Maltreatment, 10(1), 26–36.  https://doi.org/10.1177%2F1077559504271271

Tonmyr, L., Draca, J., Crain, J., & MacMillian, H. L. (2011). Measurement of emotional/psychological child maltreatment: A review. Child Abuse & Neglect, 35(10), 767–782.
https://doi.org/10.1016/j.chiabu.2011.04.011

Tyrka, A. R., Burgers, D. E., Philip, N. S., Price, L. H., & Carpenter, L. L. (2013). The neurobiological correlates of childhood adversity and implications for treatment. Acta Psychiatrica Scandinavica, 128(6), 434–447. https://doi.org/10.1111/acps.12143

U.S. Census Bureau. (2020). Quick facts. https://www.census.gov

U.S. Department of Health & Human Services. (2018). Child maltreatment 2016 (27th ed.). https://www.acf.hhs.gov/sites/default/files/documents/cb/cm2016.pdf

Vachon, D. D., Krueger, R. F., Rogosch, F. A., & Cicchetti, D. (2015). Assessment of the harmful psychiatric and behavioral effects of different forms of child maltreatment. JAMA Psychiatry, 72(11), 1135–1142. https://doi.org/10.1001/jamapsychiatry.2015.1792

van der Kolk, B. (2014). The body keeps the score: Brain, mind, and body in the healing of trauma. Penguin Books.

van Harmelen, A.-L., Elzinga, B. M., Kievit, R. A., & Spinhoven, P. (2011). Intrusions of autobiographical memories in individuals reporting childhood emotional maltreatment. European Journal of Psychotraumatology, 2(1), 7336. https://doi.org/10.3402/ejpt.v2i0.7336

van Harmelen, A.-L., van Tol, M.-J., van der Wee, N. J. A., Veltman, D. J., Aleman, A., Spinhoven, P., van Buchem, M. A., Zitman, F. G., Penninx, B. W. J. H., & Elzinga, B. M. (2010). Reduced medial prefrontal cortex volume in adults reporting childhood emotional maltreatment. Biological Psychiatry, 68(9), 832–838. https://doi.org/10.1016/j.biopsych.2010.06.011

Walsh, K., & Brandon, L. (2011). Their children’s first educators: Parents’ views about child sexual abuse prevention education. Journal of Child and Family Studies, 21, 734–746.
https://doi.org/10.1007/s10826-011-9526-4

Warner, R. M. (2013). Applied statistics: From bivariate through multivariate techniques (2nd ed.). SAGE.

Wherry, J. N., Baldwin, S., Junco, K., & Floyd, B. (2013). Suicidal thoughts/behaviors in sexually abused children. Journal of Child Sexual Abuse, 22(5), 534–551. https://doi.org/10.1080/10538712.2013.800938

Wherry, J. N., & Dunlop, C. E. (2018). TSCC and TSCYC screening forms in a clinical sample: Reliability, validity, and creating local clinical norms. Child Maltreatment, 23(1), 74–84.
https://doi.org/10.1177%2F1077559517725207

Zimman, L. (2009). ‘The other kind of coming out’: Transgender people and the coming out narrative genre. Gender and Language, 3(1), 53–80. https://doi.org/10.1558/genl.v3i1.53

Zimmerman, F., & Mercy, J. (2010). A better start: Child maltreatment prevention as a public
health priority. Zero to Three, 30(5), 4–10.

 

Alison M. Boughn, PhD, NCC, LIMHP (NE), LMHC (IA), LPC-MH (SD), ATR-BC, QMHP, TF-CBT, is an assistant professor and counseling department chair at Wayne State College. Daniel A. DeCino, PhD, NCC, LPC, is an assistant professor and Interim Program Coordinator at the University of South Dakota. Correspondence may be addressed to Alison M. Boughn, Wayne State College, 1111 Main Street, Wayne, NE 68787, albough1@wsc.edu.

Validation of the Adapted Response to Stressful Experiences Scale (RSES-4) Among First Responders

Warren N. Ponder, Elizabeth A. Prosek, Tempa Sherrill

 

First responders are continually exposed to trauma-related events. Resilience has been evidenced as a protective factor for mental health among first responders. However, there is a lack of assessments that measure the construct of resilience from a strength-based perspective. The present study used archival data from a treatment-seeking sample of 238 first responders to validate the 22-item Response to Stressful Experiences Scale (RSES-22) and its abbreviated version, the RSES-4, with two confirmatory factor analyses. Using a subsample of 190 first responders, correlational analyses were conducted of the RSES-22 and RSES-4 with measures of depressive symptoms, post-traumatic stress, anxiety, and suicidality, confirming convergent and criterion validity. The two confirmatory factor analyses revealed a poor model fit for the RSES-22; however, the RSES-4 demonstrated an acceptable model fit. Overall, the RSES-4 may be a reliable and valid measure of resilience for treatment-seeking first responder populations.

Keywords: first responders, resilience, assessment, mental health, confirmatory factor analysis

 

     First responder populations (i.e., law enforcement, emergency medical technicians, and fire rescue) are often repeatedly exposed to traumatic and life-threatening conditions (Greinacher et al., 2019). Researchers have concluded that such critical incidents can have a deleterious impact on first responders’ mental health, including the development of symptoms associated with post-traumatic stress, anxiety, depression, or other diagnosable mental health disorders (Donnelly & Bennett, 2014; Jetelina et al., 2020; Klimley et al., 2018; Weiss et al., 2010). In a systematic review, Wild et al. (2020) suggested the promise of resilience-based interventions for relieving trauma-related psychological disorders among first responders; however, they noted that the operationalization and measurement of resilience remain limitations of intervention research. Indeed, researchers hold conflicting viewpoints on how to define and assess resilience. For example, White et al. (2010) argued that popular measures of resilience rely on a deficit-based approach. Counselors operate from a strength-based lens (American Counseling Association [ACA], 2014) and may prefer measures with a similar perspective. Additionally, counselors are mandated to administer assessments with acceptable psychometric properties that are normed on populations representative of the client (ACA, 2014, E.6.a., E.7.d.). For counselors working with first responder populations, resilience may be a factor of importance; however, appropriately measuring the construct warrants exploration. Therefore, the focus of this study was to validate a strength-based measure of resilience among a sample of first responders.

Risk and Resilience Among First Responders

In a systematic review of the literature, Greinacher et al. (2019) described the incidents that first responders may experience as traumatic, including first-hand life-threatening events; secondary exposure and interaction with survivors of trauma; and frequent exposure to death, dead bodies, and injury. Law enforcement officers (LEOs) reported that the most severe critical incidents they encounter are making a mistake that injures or kills a colleague; having a colleague intentionally killed; and making a mistake that injures or kills a bystander (Weiss et al., 2010). Among emergency medical technicians (EMTs), critical incidents that evoked the most self-reported stress included responding to a scene involving family, friends, or others known to the crew and seeing someone dying (Donnelly & Bennett, 2014). Exposure to these critical incidents may have consequences for first responders. For example, researchers concluded that first responders may experience mental health symptoms as a result of the stress-related, repeated exposure (Jetelina et al., 2020; Klimley et al., 2018; Weiss et al., 2010). Moreover, considering the cumulative nature of exposure (Donnelly & Bennett, 2014), researchers concluded that first responders are at increased risk for post-traumatic stress disorder (PTSD), depression, and generalized anxiety symptoms (Jetelina et al., 2020; Klimley et al., 2018; Weiss et al., 2010). Symptoms commonly experienced among first responders include those associated with post-traumatic stress, anxiety, and depression.

In a collective review of first responders, Kleim and Westphal (2011) determined a prevalence rate for PTSD of 8%–32%, which is higher than the general population lifetime rate of 6.8%–7.8% (American Psychiatric Association [APA], 2013; National Institute of Mental Health [NIMH], 2017). Some researchers have explored rates of PTSD by specific first responder population. For example, Klimley et al. (2018) concluded that 7%–19% of LEOs and 17%–22% of firefighters experience PTSD. Similarly, in a sample of LEOs, Jetelina and colleagues (2020) reported that 20% of their participants met criteria for PTSD.

Generalized anxiety and depression are also prevalent mental health symptoms for first responders. Among a sample of firefighters and EMTs, 28% disclosed anxiety at moderate–severe and severe levels (Jones et al., 2018). Furthermore, the overall prevalence of generalized anxiety disorder among patrol LEOs was 17% (Jetelina et al., 2020). Additionally, first responders may be at higher risk for depression (Klimley et al., 2018), with estimated prevalence rates of 16%–26% (Kleim & Westphal, 2011). Comparatively, the past 12-month rate of major depressive disorder among the general population is 7% (APA, 2013). In a recent study, 16% of LEOs met criteria for major depressive disorder (Jetelina et al., 2020). Moreover, in a sample of firefighters and EMTs, 14% reported moderate–severe and severe depressive symptoms (Jones et al., 2018). Given these higher rates of distressful mental health symptoms, including post-traumatic stress, generalized anxiety, and depression, protective factors to reduce negative impacts are warranted.

Resilience
     Broadly defined, resilience is “the ability to adapt to and rebound from change (whether it is from stress or adversity) in a healthy, positive and growth-oriented manner” (Burnett, 2017, p. 2). White and colleagues (2010) promoted a positive psychology approach to researching resilience, relying on strength-based characteristics of individuals who adapt after a stressor event. Similarly, other researchers explored how individuals’ cognitive flexibility, meaning-making, and restoration offer protection that may be collectively defined as resilience (Johnson et al., 2011).

A key element among definitions of resilience is one’s exposure to stress. Given their exposure to trauma-related incidents, first responders require the ability to cope or adapt in stressful situations (Greinacher et al., 2019). Some researchers have defined resilience as a strength-based response to stressful events (Burnett, 2017), in which healthy coping behaviors and cognitions allow individuals to overcome adverse experiences (Johnson et al., 2011; White et al., 2010). When surveyed about positive coping strategies, first responders most frequently reported resilience as important to their well-being (Crowe et al., 2017).

Researchers corroborated the potential impact of resilience for the population. For example, in samples of LEOs, researchers confirmed resilience served as a protective factor for PTSD (Klimley et al., 2018) and as a mediator between social support and PTSD symptoms (McCanlies et al., 2017). In a sample of firefighters, individual resilience mediated the indirect path between traumatic events and global perceived stress of PTSD, along with the direct path between traumatic events and PTSD symptoms (Lee et al., 2014). Their model demonstrated that those with higher levels of resilience were more protected from traumatic stress. Similarly, among emergency dispatchers, resilience was positively correlated with positive affect and post-traumatic growth, and negatively correlated with job stress (Steinkopf et al., 2018). These consistent findings of resilience as a protective factor led researchers to develop resilience-based interventions. For example, researchers reported promising results from mindfulness-based resilience interventions for firefighters (Joyce et al., 2019) and LEOs (Christopher et al., 2018). Moreover, Antony and colleagues (2020) concluded that resilience training programs demonstrated potential to reduce occupational stress among first responders.

Assessment of Resilience
     Given the significance of resilience as a mediating factor for PTSD among first responders and as a promising basis for interventions with LEOs, counselors need a reliable means of measuring resilience with first responder clients. In a methodological review of resilience assessments, Windle and colleagues (2011) identified 19 different measures of resilience: 15 came from original development and validation studies, and four were subsequent validations of an original assessment. None were developed with military or first responder samples.

Subsequently, Johnson et al. (2011) developed the Response to Stressful Experiences Scale (RSES-22) to assess resilience among military populations. Unlike deficit-based assessments of resilience, they proposed a multidimensional construct representing how individuals respond to stressful experiences in adaptive or healthy ways. Cognitive flexibility, meaning-making, and restoration were identified as key elements when assessing individuals’ characteristics connected to resilience when overcoming hardships. Initially, they validated a five-factor structure for the RSES-22 with military active-duty and reserve components. Later, De La Rosa et al. (2016) re-examined the RSES-22, discovered a unidimensional factor structure, and validated a shorter 4-item subset of the instrument, the RSES-4, again among military populations.

It is currently unknown whether the performance of the RSES-4 generalizes to first responder populations. While there are some overlapping experiences between military populations and first responders in terms of exposure to trauma and high-risk occupations, the Substance Abuse and Mental Health Services Administration (SAMHSA; 2018) noted differences in training and types of risk. In the counseling profession, these populations are categorized together, as evidenced by the Military and Government Counseling Association, a division of ACA. There may also be dual identities within the populations; for example, Lewis and Pathak (2014) found that 22% of LEOs and 15% of firefighters identified as veterans. Although the similarities between the populations may be enough to theorize the use of the same resilience measure, validation of the RSES-22 and RSES-4 among first responders remains unexamined.

Purpose of the Study
     First responders are repeatedly exposed to traumatic and stressful events (Greinacher et al., 2019) and this exposure may impact their mental health, including symptoms of post-traumatic stress, anxiety, depression, and suicidality (Jetelina et al., 2020; Klimley et al., 2018). Though most measures of resilience are grounded in a deficit-based approach, researchers using a strength-based approach proposed resilience may be a protective factor for this population (Crowe et al., 2017; Wild et al., 2020). Consequently, counselors need a means to assess resilience in their clinical practice from a strength-based conceptualization of clients.

Johnson et al. (2011) offered a non-deficit approach to measuring resilience in response to stressful events associated with military service. Thus far, researchers have conducted analyses of the RSES-22 and RSES-4 with military populations (De La Rosa et al., 2016; Johnson et al., 2011; Prosek & Ponder, 2021), but not yet with first responders. While there are some overlapping characteristics between the populations, there are also unique differences that warrant research with discrete sampling (SAMHSA, 2018). In light of the importance of resilience as a protective factor for mental health among first responders, the purpose of the current study was to confirm the reliability and validity of the RSES-22 and RSES-4 when utilized with this population. We hypothesized that the measures would perform similarly among first responders; if so, the RSES-4 would offer counselors a brief, reliable, and valid assessment option for clinical practice.

Method

Participants
     Participants in the current study were a non-probability, purposive sample of first responders (N = 238) seeking clinical treatment at an outpatient, mental health nonprofit organization in the Southwestern United States. Participants’ mean age was 37.53 years (SD = 10.66). The majority of participants identified as men (75.2%; n = 179), with women representing 24.8% (n = 59) of the sample. In terms of race and ethnicity, participants identified as White (78.6%; n = 187), Latino/a (11.8%; n = 28), African American or Black (5.5%; n = 13), Native American (1.7%; n = 4), Asian American (1.3%; n = 3), and multiple ethnicities (1.3%; n = 3). The participants identified as first responders in three main categories: LEO (34.9%; n = 83), EMT (28.2%; n = 67), and fire rescue (25.2%; n = 60). Among the first responders, 26.9% reported previous military affiliation. As part of the secondary analysis, we utilized a subsample (n = 190) that was reflective of the larger sample (see Table 1).

Procedure
     The data for this study were collected between 2015 and 2020 as part of routine clinical assessment procedures at a nonprofit organization serving military service members, first responders, frontline health care workers, and their families. Agency representatives conduct clinical assessments with clients at intake, Session 6, Session 12, and Session 18 or when clinical services conclude. We consulted with the second author’s Institutional Review Board, which determined the research to be exempt given the de-identified, archival nature of the data. For inclusion in this analysis, data needed to represent first responders, ages 18 or older, with a completed RSES-22 at intake. The RSES-4 comprises four items embedded within the RSES-22; therefore, participants did not need to complete an additional measure. For the secondary analysis, we also included data from participants who completed the other mental health measures at intake (see Measures).

 

Table 1

Demographics of Sample

Characteristic                        Sample 1 (N = 238)   Sample 2 (n = 190)
Age (Years)
    Mean                                   37.53                37.12
    Median                                 35.50                35.00
    SD                                     10.66                10.30
    Range                                  46                   45
Time in Service (Years)
    Mean                                   11.62                11.65
    Median                                 10.00                10.00
    SD                                      9.33                 9.37
    Range                                  41                   39
First Responder Type, n (%)
    Emergency Medical Technicians          67 (28.2%)           54 (28.4%)
    Fire Rescue                            60 (25.2%)           45 (23.7%)
    Law Enforcement                        83 (34.9%)           72 (37.9%)
    Other                                   9 (3.8%)             5 (2.6%)
    Two or more                            10 (4.2%)             6 (3.2%)
    Not reported                            9 (3.8%)             8 (4.2%)
Gender, n (%)
    Women                                  59 (24.8%)           47 (24.7%)
    Men                                   179 (75.2%)          143 (75.3%)
Ethnicity, n (%)
    African American/Black                 13 (5.5%)             8 (4.2%)
    Asian American                          3 (1.3%)             3 (1.6%)
    Latino(a)/Hispanic                     28 (11.8%)           24 (12.6%)
    Multiple Ethnicities                    3 (1.3%)             3 (1.6%)
    Native American                         4 (1.7%)             3 (1.6%)
    White                                 187 (78.6%)          149 (78.4%)

Note. Sample 2 is a subset of Sample 1. Time in service for Sample 1, n = 225; time in service for Sample 2, n = 190.

 

Measures
Response to Stressful Experiences Scale
     The Response to Stressful Experiences Scale (RSES-22) is a 22-item measure that assesses dimensions of resilience, including meaning-making, active coping, cognitive flexibility, spirituality, and self-efficacy (Johnson et al., 2011). Participants respond to the prompt “During and after life’s most stressful events, I tend to” on a 5-point Likert scale from 0 (not at all like me) to 4 (exactly like me). Total scores range from 0 to 88, in which higher scores represent greater resilience. Example items include “see it as a challenge that will make me better,” “pray or meditate,” and “find strength in the meaning, purpose, or mission of my life.” Johnson et al. (2011) reported that the RSES-22 demonstrates good internal consistency (α = .92) and test-retest reliability (α = .87) among samples from military populations. Further, the developers confirmed convergent, discriminant, concurrent, and incremental criterion validity (see Johnson et al., 2011). In the current study, Cronbach’s alpha of the total score was .93.

Adapted Response to Stressful Experiences Scale
     The adapted Response to Stressful Experiences Scale (RSES-4) is a 4-item measure that assesses resilience as a unidimensional construct (De La Rosa et al., 2016). The prompt and Likert scale are consistent with the original RSES-22; however, it includes only four items: “find a way to do what’s necessary to carry on,” “know I will bounce back,” “learn important and useful life lessons,” and “practice ways to handle it better next time.” Total scores range from 0 to 16, with higher scores indicating greater resilience. De La Rosa et al. (2016) reported acceptable internal consistency (α = .76–.78) and test-retest reliability and demonstrated criterion validity among multiple military samples. In the current study, the Cronbach’s alpha of the total score was .74.
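To make the scoring concrete, the following is a minimal Python sketch of how totals for both scales might be computed. The file name and item column names are placeholders, and the four columns chosen for the RSES-4 are illustrative only, not the published item numbers.

    import pandas as pd

    # Hypothetical data file: one row per participant, 22 RSES items scored 0-4.
    df = pd.read_csv("rses_items.csv")

    # RSES-22 total: sum of all 22 items (possible range 0-88).
    rses22_cols = [f"rses_{i}" for i in range(1, 23)]
    df["rses22_total"] = df[rses22_cols].sum(axis=1)

    # RSES-4 total: sum of the four embedded items (possible range 0-16).
    # These four column names are placeholders, not the published item numbers.
    rses4_cols = ["rses_2", "rses_9", "rses_14", "rses_21"]
    df["rses4_total"] = df[rses4_cols].sum(axis=1)

Because the RSES-4 items sit inside the RSES-22, both totals come from a single administration, which is why participants in this study completed no additional measure.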

Patient Health Questionnaire-9
     The Patient Health Questionnaire-9 (PHQ-9) is a 9-item measure to assess depressive symptoms in the past 2 weeks (Kroenke et al., 2001). Respondents rate the frequency of their symptoms on a 4-point Likert scale ranging from 0 (not at all) to 3 (nearly every day). Total scores range from 0 to 27, in which higher scores indicate increased severity of depressive symptoms. Example items include “little interest or pleasure in doing things” and “feeling tired or having little energy.” Kroenke et al. (2001) reported good internal consistency (α = .89) and established criterion and construct validity. In this sample, Cronbach’s alpha of the total score was .88.

PTSD Checklist-5
     The PTSD Checklist-5 (PCL-5) is a 20-item measure for the presence of PTSD symptoms in the past month (Blevins et al., 2015). Participants respond on a 5-point Likert scale indicating frequency of PTSD-related symptoms from 0 (not at all) to 4 (extremely). Total scores range from 0 to 80, in which higher scores indicate more severity of PTSD-related symptoms. Example items include “repeated, disturbing dreams of the stressful experience” and “trouble remembering important parts of the stressful experience.” Blevins et al. (2015) reported good internal consistency (α = .94) and determined convergent and discriminant validity. In this sample, Cronbach’s alpha of the total score was .93.

Generalized Anxiety Disorder-7
     The Generalized Anxiety Disorder-7 (GAD-7) is a 7-item measure to assess for anxiety symptoms over the past 2 weeks (Spitzer et al., 2006). Participants rate the frequency of the symptoms on a 4-point Likert scale ranging from 0 (not at all) to 3 (nearly every day). Total scores range from 0 to 21, with higher scores indicating greater severity of anxiety symptoms. Example items include “not being able to stop or control worrying” and “becoming easily annoyed or irritable.” Among patients from primary care settings, Spitzer et al. (2006) determined good internal consistency (α = .92) and established criterion, construct, and factorial validity. In this sample, Cronbach’s alpha of the total score was .91.

Suicidal Behaviors Questionnaire-Revised
     The Suicidal Behaviors Questionnaire-Revised (SBQ-R) is a 4-item measure to assess suicidality (Osman et al., 2001). Each item assesses a different dimension of suicidality: lifetime ideation and attempts, frequency of ideation in the past 12 months, threat of suicidal behaviors, and likelihood of suicidal behaviors (Gutierrez et al., 2001). Total scores range from 3 to 18, with higher scores indicating more risk of suicide. Example items include “How often have you thought about killing yourself in the past year?” and “How likely is it that you will attempt suicide someday?” In a clinical sample, Osman et al. (2001) reported good internal consistency (α = .87) and established criterion validity. In this sample, Cronbach’s alpha of the total score was .85.

Data Analysis
     Statistical analyses were conducted using SPSS version 26.0 and SPSS Analysis of Moment Structures (AMOS) version 26.0. We examined the dataset for missing values, replacing 0.25% (32 of 12,836 values) of data with series means. We reviewed descriptive statistics of the RSES-22 and RSES-4 scales. We determined multivariate normality as evidenced by skewness less than 2.0 and kurtosis less than 7.0 (Dimitrov, 2012). We assessed reliability for the scales by interpreting Cronbach’s alphas and inter-item correlations to confirm internal consistency.
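As an illustration of this screening step, below is a minimal sketch re-created outside of SPSS; the file and column names are assumptions, and scipy reports excess kurtosis (normal = 0) rather than raw kurtosis.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("intake_assessments.csv")  # assumed file of item responses
    item_cols = [c for c in df.columns if c.startswith("rses_")]

    # Replace missing item responses with the item (series) mean,
    # mirroring SPSS's "replace with series mean" option.
    df[item_cols] = df[item_cols].fillna(df[item_cols].mean())

    # Screen scale totals against the benchmarks cited above
    # (skewness < 2.0, kurtosis < 7.0; Dimitrov, 2012).
    total = df[item_cols].sum(axis=1)
    print("skewness:", stats.skew(total))
    print("excess kurtosis:", stats.kurtosis(total))  # Fisher definition: normal = 0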

We conducted two separate confirmatory factor analyses to determine the model fit and factorial validity of the 22-item measure and the adapted 4-item measure. We used several indices to evaluate model fit: minimum discrepancy per degree of freedom (CMIN/DF) and p-values, root mean square residual (RMR), goodness-of-fit index (GFI), comparative fit index (CFI), Tucker-Lewis index (TLI), and root mean square error of approximation (RMSEA). According to Dimitrov (2012), values of CMIN/DF < 2.0, p > .05, RMR < .08, GFI > .90, CFI > .90, TLI > .90, and RMSEA < .10 provide evidence of a strong model fit. To determine criterion validity, we assessed a subsample of participants (n = 190) who had completed the RSES-22, RSES-4, and four other psychological measures (i.e., PHQ-9, PCL-5, GAD-7, and SBQ-R). We determined convergent validity by conducting bivariate correlations between the RSES-22 and RSES-4.
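The authors fit their models in AMOS; as a rough open-source analogue, the sketch below specifies the unidimensional RSES-4 model with the semopy package. The lavaan-style model string and item names are assumptions for illustration only.

    import pandas as pd
    import semopy

    df = pd.read_csv("rses_items.csv")  # assumed file of item responses

    # One latent resilience factor indicated by the four RSES-4 items
    # (hypothetical column names).
    desc = "resilience =~ rses_2 + rses_9 + rses_14 + rses_21"

    model = semopy.Model(desc)
    model.fit(df)

    print(model.inspect())           # parameter estimates, including loadings
    print(semopy.calc_stats(model))  # chi-square, CFI, TLI, RMSEA, GFI, and more

The resulting indices can then be compared against the benchmarks above (e.g., CFI > .90, RMSEA < .10) to judge model fit.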

Results

Descriptive Analyses
     We computed means, standard deviations, 95% confidence intervals (CI), and score ranges for the RSES-22 and RSES-4 (Table 2). Scores on the RSES-22 ranged from 19 to 88; scores on the RSES-4 ranged from 3 to 16. Previous researchers administering the RSES-22 to military samples reported mean scores of 57.64–70.74 with standard deviations of 8.15–15.42 (Johnson et al., 2011; Prosek & Ponder, 2021). In previous research of the RSES-4 with military samples, mean scores were 9.95–11.20 with standard deviations of 3.02–3.53 (De La Rosa et al., 2016; Prosek & Ponder, 2021).
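For readers reproducing Table 2, a t-based interval around the mean is one plausible reconstruction of the reported confidence intervals (the authors may have used a different procedure, such as bootstrapping):

    import numpy as np
    from scipy import stats

    def mean_ci(scores, level=0.95):
        """Return the mean and a t-based confidence interval for scale totals."""
        x = np.asarray(scores, dtype=float)
        n, m = x.size, x.mean()
        se = x.std(ddof=1) / np.sqrt(n)  # standard error of the mean
        half = stats.t.ppf((1 + level) / 2, df=n - 1) * se
        return m, (m - half, m + half)

    # Example: mean_ci(df["rses22_total"]) should land near (60.12, (58.4, 61.9))
    # for data matching Table 2.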

 

Table 2

Descriptive Statistics for RSES-22 and RSES-4

Variable          M       SD      95% CI            Score Range
RSES-22 scores    60.12   13.76   [58.52, 61.86]    19–88
RSES-4 scores     11.66    2.62   [11.33, 11.99]     3–16

Note. N = 238. RSES-22 = Response to Stressful Experiences Scale 22-item; RSES-4 = Response
to Stressful Experiences Scale 4-item adaptation.


Reliability Analyses
     To determine the internal consistency of the resilience measures, we computed Cronbach’s alphas. For the RSES-22, we found strong evidence of internal consistency (α = .93), consistent with the developers’ estimate (α = .93; Johnson et al., 2011). For the RSES-4, we found acceptable internal consistency (α = .74), slightly lower than previous estimates (α = .76–.78; De La Rosa et al., 2016). We also calculated the correlations between items and averaged the coefficients. The average inter-item correlation for the RSES-22 was .38, which falls within the acceptable range (.15–.50); the average inter-item correlation for the RSES-4 was .51, slightly above the acceptable range. Overall, evidence of internal consistency was confirmed for each scale.
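Both statistics in this paragraph are straightforward to compute from the raw item matrix; here is a minimal sketch assuming an (n respondents × k items) array of scores:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def mean_inter_item_r(items):
        """Average off-diagonal Pearson correlation among items."""
        r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
        return r[~np.eye(r.shape[0], dtype=bool)].mean()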

Factorial Validity Analyses
     We conducted two confirmatory factor analyses to assess the factor structure of the RSES-22 and RSES-4 for our sample of first responders receiving mental health services at a community clinic (Table 3). For the RSES-22, a proper solution converged in 10 iterations. Item loadings ranged from .31 to .79, with 15 of the 22 items loading above .60 on the latent variable. The model did not meet statistical criteria for good fit: χ2(209) = 825.17, p < .001, RMSEA = .112, 90% CI [0.104, 0.120]. For the RSES-4, a proper solution converged in eight iterations. Item loadings ranged from .47 to .80, with three of the four items loading above .60 on the latent variable. The model met statistical criteria for good fit: χ2(2) = 5.89, p = .053, RMSEA = .091, 90% CI [0.000, 0.179]. The CMIN/DF (2.94) was above the suggested < 2.0 benchmark; however, the other fit indices indicated an acceptable model fit.

 

Table 3

Confirmatory Factor Analysis Fit Indices for RSES-22 and RSES-4

Variable    df     χ2       p      CMIN/DF   RMR    GFI    CFI    TLI    RMSEA   RMSEA 90% CI
RSES-22    209   825.17   .000      3.95     .093   .749   .771   .747   .112    [0.104, 0.120]
RSES-4       2     5.89   .053      2.94     .020   .988   .981   .944   .091    [0.000, 0.179]

Note. N = 238. RSES-22 = Response to Stressful Experiences Scale 22-item; RSES-4 = Response to Stressful Experiences Scale 4-item adaptation; CMIN/DF = Minimum Discrepancy per Degree of Freedom; RMR = Root Mean Square Residual; GFI = Goodness-of-Fit Index; CFI = Comparative Fit Index; TLI = Tucker-Lewis Index; RMSEA = Root Mean Square Error of Approximation.

 

Criterion and Convergent Validity Analyses
     To assess the criterion validity of the RSES-22 and RSES-4, we conducted correlational analyses with four established psychological measures (Table 4), utilizing the subsample of participants (n = 190) who completed the PHQ-9, PCL-5, GAD-7, and SBQ-R at intake. Normality of the data was not a concern, as skewness and kurtosis fell within appropriate ranges (± 1.0). The internal consistency of the RSES-22 (α = .93) and RSES-4 (α = .77) in the subsample was comparable to the larger sample and previous studies. Both the RSES-22 and the RSES-4 correlated with the psychological distress measures in the expected direction: the relationships were significant and negative, indicating that higher resilience scores were associated with lower scores on symptoms associated with diagnosable mental health disorders (i.e., post-traumatic stress, anxiety, depression, and suicidal behavior). We verified convergent validity with a correlational analysis of the RSES-22 and RSES-4, which demonstrated a significant, positive relationship.
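Continuing the earlier sketches’ hypothetical df, this correlational step reduces to pairwise Pearson tests (the authors worked in SPSS; the column names here are assumptions):

    from scipy import stats

    distress_cols = ["phq9_total", "pcl5_total", "gad7_total", "sbqr_total"]
    for scale in ["rses22_total", "rses4_total"]:
        for m in distress_cols:
            r, p = stats.pearsonr(df[scale], df[m])  # criterion validity
            print(f"{scale} x {m}: r = {r:.3f}, p = {p:.4f}")

    # Convergent validity: correlation between the two resilience totals.
    r, p = stats.pearsonr(df["rses22_total"], df["rses4_total"])
    print(f"RSES-22 x RSES-4: r = {r:.3f}")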

 

Table 4

Criterion and Convergent Validity of RSES-22 and RSES-4

Variable    M (SD)          Cronbach’s α    RSES-22    PHQ-9     PCL-5     GAD-7     SBQ-R
RSES-22     60.16 (14.17)        .93           –       −.287*    −.331*    −.215*    −.346*
RSES-4      11.65 (2.68)         .77         .918      −.290*    −.345*    −.220*    −.327*

Note. n = 190. RSES-22 = Response to Stressful Experiences Scale 22-item; RSES-4 = Response to Stressful Experiences Scale 4-item adaptation; PHQ-9 = Patient Health Questionnaire-9;
PCL-5 = PTSD Checklist-5; GAD-7 = Generalized Anxiety Disorder-7; SBQ-R = Suicidal Behaviors Questionnaire-Revised.
*p < .01.

 

Discussion

The purpose of this study was to validate the factor structure of the RSES-22 and the abbreviated RSES-4 with a first responder sample. Aggregated means were similar to those in the articles that validated and normed the measures with military samples (De La Rosa et al., 2016; Johnson et al., 2011; Prosek & Ponder, 2021). Additionally, the internal consistency was similar to previous studies. In the original article, Johnson et al. (2011) proposed a five-factor structure for the RSES-22, which was later established as a unidimensional assessment after further exploratory factor analysis (De La Rosa et al., 2016). Subsequently, confirmatory factor analyses with a treatment-seeking veteran population revealed that the RSES-22 demonstrated unacceptable model fit, whereas the RSES-4 demonstrated a good model fit (Prosek & Ponder, 2021). In both the veteran sample and the current sample, the RSES-4 GFI, CFI, and TLI were all .944 or higher, whereas the RSES-22 GFI, CFI, and TLI were all .771 or lower; criterion and convergent validity coefficients with the PHQ-9, PCL-5, and GAD-7 were also highly similar across the two samples. Likewise, in this sample of treatment-seeking first responders, confirmatory factor analyses indicated an inadequate model fit for the RSES-22 and a good model fit for the RSES-4. Lastly, convergent and criterion validity were established with correlational analyses of the RSES-22 and RSES-4 with four other standardized assessment instruments (i.e., PHQ-9, PCL-5, GAD-7, SBQ-R). We concluded that among the first responder sample, the RSES-4 demonstrated acceptable psychometric properties as well as criterion and convergent validity with other mental health variables (i.e., post-traumatic stress, anxiety, depression, and suicidal behavior).

Implications for Clinical Practice
     First responders are a unique population and are regularly exposed to trauma (Donnelly & Bennett, 2014; Jetelina et al., 2020; Klimley et al., 2018; Weiss et al., 2010). Although first responders could potentially benefit from espousing resilience, they are often hesitant to seek mental health services (Crowe et al., 2017; Jones, 2017). The RSES-22 and RSES-4 were originally normed with military populations; the results of the current study indicated initial validity and reliability among a first responder population, suggesting that the RSES-4 could be useful for counselors in assessing resilience.

It is important to recognize that first responders have perceived coping with traumatic stress as an individual process (Crowe et al., 2017) and may believe that seeking mental health services is counter to the emotional and physical training expectations of the profession (Crowe et al., 2015). Therefore, when first responders seek mental health care, counselors need to be prepared to provide culturally responsive services, including population-specific assessment practices and resilience-oriented care.

Jones (2017) encouraged counselors to conduct a comprehensive intake interview and a battery of appropriate assessments with first responder clients. Counselors need to balance the number of intake questions while responsibly assessing for mental health comorbidities such as post-traumatic stress, anxiety, depression, and suicidality. The RSES-4 provides counselors with a brief yet targeted assessment of resilience.

Part of cultural competency entails assessing constructs (e.g., resilience) that have been shown to be protective factors against PTSD among first responders (Klimley et al., 2018). Because the RSES-4 items were developed to highlight positive characteristics of coping rather than deficits (Johnson et al., 2011), the measure aligns with the strength-based grounding of the counseling profession. It is also congruent with first responders’ perceptions of resilience. Indeed, in a content analysis of focus group interviews with first responders, participants defined resilience as a positive coping strategy that involves emotional regulation, perseverance, personal competence, and physical fitness (Crowe et al., 2017).

The RSES-4 is a brief, reliable, and valid measure of resilience with initial empirical support among a treatment-seeking first responder sample. In accordance with the ACA (2014) Code of Ethics, counselors are to administer assessments normed with the client population (E.8.). Thus, the results of the current study support counselors’ use of the measure in practice. First responder communities are facing unprecedented work tasks in response to COVID-19. Subsequently, their mental health might suffer (Centers for Disease Control and Prevention, 2020) and experts have recommended promoting resilience as a protective factor for combating the negative mental health consequences of COVID-19 (Chen & Bonanno, 2020). Therefore, the relevance of assessing resilience among first responder clients in the current context is evident.

Limitations and Future Research
     This study is not without limitations. The sample of first responders was homogeneous in terms of race, ethnicity, and gender. Subsamples of first responders (i.e., LEO, EMT, fire rescue) were too small to conduct within-group analyses to determine if the factor structure of the RSES-22 and RSES-4 would perform similarly. Also, our sample of first responders included two emergency dispatchers. Researchers reported that emergency dispatchers should not be overlooked, given an estimated 13% to 15% of emergency dispatchers experience post-traumatic symptomatology (Steinkopf et al., 2018). Future researchers may develop studies that further explore how, if at all, emergency dispatchers are represented in first responder research.

Furthermore, future researchers could account for first responders who have prior military service. In a study of LEOs, Jetelina et al. (2020) found that participants with military experience were 3.76 times more likely to report mental health concerns compared to LEOs without prior military affiliation. Although we reported the prevalence rate of prior military experience in our sample, the within-group sample size was not sufficient for additional analyses. Finally, our sample represented treatment-seeking first responders. Future researchers may replicate this study with non–treatment-seeking first responder populations.

Conclusion
     First responders are at risk for sustaining injuries, experiencing life-threatening events, and witnessing harm to others (Lanza et al., 2018). The nature of their exposure can be repeated and cumulative over time (Donnelly & Bennett, 2014), indicating an increased risk for post-traumatic stress, anxiety, and depressive symptoms, as well as suicidal behavior (Jones et al., 2018). Resilience is a promising protective factor that promotes wellness and healthy coping among first responders (Wild et al., 2020), and counselors may choose to routinely measure for resilience among first responder clients. The current investigation concluded that among a sample of treatment-seeking first responders, the original factor structure of the RSES-22 was unstable, although it demonstrated good reliability and validity. The adapted version, RSES-4, demonstrated good factor structure while also maintaining acceptable reliability and validity, consistent with studies of military populations (De La Rosa et al., 2016; Johnson et al., 2011; Prosek & Ponder, 2021). The RSES-4 provides counselors with a brief and strength-oriented option for measuring resilience with first responder clients.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

 

References

American Counseling Association. (2014). ACA code of ethics.

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.).

Antony, J., Brar, R., Khan, P. A., Ghassemi, M., Nincic, V., Sharpe, J. P., Straus, S. E., & Tricco, A. C. (2020). Interventions for the prevention and management of occupational stress injury in first responders: A rapid overview of reviews. Systematic Reviews, 9(121), 1–20. https://doi.org/10.1186/s13643-020-01367-w

Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of Traumatic Stress, 28(6), 489–498. https://doi.org/10.1002/jts.22059

Burnett, H. J., Jr. (2017). Revisiting the compassion fatigue, burnout, compassion satisfaction, and resilience connection among CISM responders. Journal of Police Emergency Response, 7(3), 1–10. https://doi.org/10.1177/2158244017730857

Centers for Disease Control and Prevention. (2020, June 30). Coping with stress. https://www.cdc.gov/coronavirus/2019-ncov/daily-life-coping/managing-stress-anxiety.html

Chen, S., & Bonanno, G. A. (2020). Psychological adjustment during the global outbreak of COVID-19: A resilience perspective. Psychological Trauma: Theory, Research, Practice, and Policy, 12(S1), S51–S54. https://doi.org/10.1037/tra0000685

Christopher, M. S., Hunsinger, M., Goerling, R. J., Bowen, S., Rogers, B. S., Gross, C. R., Dapolonia, E., & Pruessner, J. C. (2018). Mindfulness-based resilience training to reduce health risk, stress reactivity, and aggression among law enforcement officers: A feasibility and preliminary efficacy trial. Psychiatry Research, 264, 104–115. https://doi.org/10.1016/j.psychres.2018.03.059

Crowe, A., Glass, J. S., Lancaster, M. F., Raines, J. M., & Waggy, M. R. (2015). Mental illness stigma among first responders and the general population. Journal of Military and Government Counseling, 3(3), 132–149. http://mgcaonline.org/wp-content/uploads/2013/02/JMGC-Vol-3-Is-3.pdf

Crowe, A., Glass, J. S., Lancaster, M. F., Raines, J. M., & Waggy, M. R. (2017). A content analysis of psychological resilience among first responders. SAGE Open, 7(1), 1–9. https://doi.org/10.1177/2158244017698530

De La Rosa, G. M., Webb-Murphy, J. A., & Johnston, S. L. (2016). Development and validation of a brief measure of psychological resilience: An adaptation of the Response to Stressful Experiences Scale. Military Medicine, 181(3), 202–208. https://doi.org/10.7205/MILMED-D-15-00037

Dimitrov, D. M. (2012). Statistical methods for validation of assessment scale data in counseling and related fields. American Counseling Association.

Donnelly, E. A., & Bennett, M. (2014). Development of a critical incident stress inventory for the emergency medical services. Traumatology, 20(1), 1–8. https://doi.org/10.1177/1534765613496646

Greinacher, A., Derezza-Greeven, C., Herzog, W., & Nikendei, C. (2019). Secondary traumatization in first responders: A systematic review. European Journal of Psychotraumatology, 10(1), 1562840. https://doi.org/10.1080/20008198.2018.1562840

Gutierrez, P. M., Osman, A., Barrios, F. X., & Kopper, B. A. (2001). Development and initial validation of the Self-Harm Behavior Questionnaire. Journal of Personality Assessment, 77(3), 475–490. https://doi.org/10.1207/S15327752JPA7703_08

Jetelina, K. K., Mosberry, R. J., Gonzalez, J. R., Beauchamp, A. M., & Hall, T. (2020). Prevalence of mental illnesses and mental health care use among police officers. JAMA Network Open, 3(10), 1–12. https://doi.org/10.1001/jamanetworkopen.2020.19658

Johnson, D. C., Polusny, M. A., Erbes, C. R., King, D., King, L., Litz, B. T., Schnurr, P. P., Friedman, M., Pietrzak, R. H., & Southwick, S. M. (2011). Development and initial validation of the Response to Stressful Experiences Scale. Military Medicine, 176(2), 161–169. https://doi.org/10.7205/milmed-d-10-00258

Jones, S. (2017). Describing the mental health profile of first responders: A systematic review. Journal of the American Psychiatric Nurses Association, 23(3), 200–214. https://doi.org/10.1177/1078390317695266

Jones, S., Nagel, C., McSweeney, J., & Curran, G. (2018). Prevalence and correlates of psychiatric symptoms among first responders in a Southern state. Archives of Psychiatric Nursing, 32(6), 828–835. https://doi.org/10.1016/j.apnu.2018.06.007

Joyce, S., Tan, L., Shand, F., Bryant, R. A., & Harvey, S. B. (2019). Can resilience be measured and used to predict mental health symptomology among first responders exposed to repeated trauma? Journal of Occupational and Environmental Medicine, 61(4), 285–292. https://doi.org/10.1097/JOM.0000000000001526

Kleim, B., & Westphal, M. (2011). Mental health in first responders: A review and recommendation for prevention and intervention strategies. Traumatology, 17(4), 17–24. https://doi.org/10.1177/1534765611429079

Klimley, K. E., Van Hasselt, V. B., & Stripling, A. M. (2018). Posttraumatic stress disorder in police, firefighters, and emergency dispatchers. Aggression and Violent Behavior, 43, 33–44. https://doi.org/10.1016/j.avb.2018.08.005

Kroenke, K., Spitzer, R. L., & Williams, J. B. W. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16, 606–613. https://doi.org/10.1046/j.1525-1497.2001.016009606.x

Lanza, A., Roysircar, G., & Rodgers, S. (2018). First responder mental healthcare: Evidence-based prevention, postvention, and treatment. Professional Psychology: Research and Practice, 49(3), 193–204. https://doi.org/10.1037/pro0000192

Lee, J.-S., Ahn, Y.-S., Jeong, K.-S., Chae, J.-H., & Choi, K.-S. (2014). Resilience buffers the impact of traumatic events on the development of PTSD symptoms in firefighters. Journal of Affective Disorders, 162, 128–133. https://doi.org/10.1016/j.jad.2014.02.031

Lewis, G. B., & Pathak, R. (2014). The employment of veterans in state and local government service. State and Local Government Review, 46(2), 91–105. https://doi.org/10.1177/0160323X14537835

McCanlies, E. C., Gu, J. K., Andrew, M. E., Burchfiel, C. M., & Violanti, J. M. (2017). Resilience mediates the relationship between social support and post-traumatic stress symptoms in police officers. Journal of Emergency Management, 15(2), 107–116. https://doi.org/10.5055/jem.2017.0319

National Institute of Mental Health. (2017). Post-traumatic stress disorder. https://www.nimh.nih.gov/health/statistics/post-traumatic-stress-disorder-ptsd.shtml

Osman, A., Bagge, C. L., Gutierrez, P. M., Konick, L. C., Kopper, B. A., & Barrios, F. X. (2001). The Suicidal Behaviors Questionnaire–revised (SBQ-R): Validation with clinical and nonclinical samples. Assessment, 8(4), 443–454. https://doi.org/10.1177/107319110100800409

Prosek, E. A., & Ponder, W. N. (2021). Validation of the Adapted Response to Stressful Experiences Scale (RSES-4) among veterans [Manuscript submitted for publication].

Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder (The GAD-7). Archives of Internal Medicine, 166(10), 1092–1097. https://doi.org/10.1001/archinte.166.10.1092

Steinkopf, B., Reddin, R. A., Black, R. A., Van Hasselt, V. B., & Couwels, J. (2018). Assessment of stress and resiliency in emergency dispatchers. Journal of Police and Criminal Psychology, 33(4), 398–411. https://doi.org/10.1007/s11896-018-9255-3

Substance Abuse and Mental Health Services Administration. (2018, May). First responders: Behavioral health concerns, emergency response, and trauma. Disaster Technical Assistance Center Supplemental Research Bulletin. https://www.samhsa.gov/sites/default/files/dtac/supplementalresearchbulletin-firstresponders-may2018.pdf

Weiss, D. S., Brunet, A., Best, S. R., Metzler, T. J., Liberman, A., Pole, N., Fagan, J. A., & Marmar, C. R. (2010). Frequency and severity approaches to indexing exposure to trauma: The Critical Incident History Questionnaire for police officers. Journal of Traumatic Stress, 23(6), 734–743. https://doi.org/10.1002/jts.20576

White, B., Driver, S., & Warren, A. M. (2010). Resilience and indicators of adjustment during rehabilitation from a spinal cord injury. Rehabilitation Psychology, 55(1), 23–32. https://doi.org/10.1037/a0018451

Wild, J., El-Salahi, S., Degli Esposti, M., & Thew, G. R. (2020). Evaluating the effectiveness of a group-based resilience intervention versus psychoeducation for emergency responders in England: A randomised controlled trial. PLoS ONE, 15(11), e0241704. https://doi.org/10.1371/journal.pone.0241704

Windle, G., Bennett, K. M., & Noyes, J. (2011). A methodological review of resilience measurement scales. Health and Quality of Life Outcomes, 9, Article 8, 1–18. https://doi.org/10.1186/1477-7525-9-8

 

Warren N. Ponder, PhD, is Director of Outcomes and Evaluation at One Tribe Foundation. Elizabeth A. Prosek, PhD, NCC, LPC, is an associate professor at Penn State University. Tempa Sherrill, MS, LPC-S, is the founder of Stay the Course and a volunteer at One Tribe Foundation. Correspondence may be addressed to Warren N. Ponder, 855 Texas St., Suite 105, Fort Worth, TX 76102, warren@1tribefoundation.org.