Lifetime Achievement in Counseling Series: An Interview with Cherylene McClain Tucker

Joshua D. Smith, Neal D. Gray

Each year TPC presents an interview with an influential veteran in counseling as part of its Lifetime Achievement in Counseling series. This year I am honored to introduce Cherylene McClain Tucker, supervisor of a day treatment program and a lifelong learner and advocate. In this interview, she shares how her experiences in criminal justice, addictions counseling, and mental health counseling intersect to support the mental health and wellness of the whole person. I am grateful to Dr. Joshua Smith and Dr. Neal Gray for highlighting the ongoing contributions of leaders in the profession for the TPC readership. —Richelle Joe, Editor

     Cherylene McClain Tucker, NCC, MAC, LPC, LCDC, is a Program Supervisor with the Tarrant County Community Supervision and Corrections Department (CSCD) in Fort Worth, Texas. She holds a Bachelor of Science in criminal justice from St. John’s University, and Master of Arts degrees in both professional counseling and marriage and family therapy from Amberton University.

Tucker is an active member of several organizations. She is a board member of the Texas Certification Board of Addiction Professionals, and she serves on the Tarrant County College Mental Health Advisor Committee. Recently, she was selected as a mentor for the NBCC Foundation and the Association for Addiction Professionals (NAADAC) Minority Fellowship Program for Addiction Counselors, through which she will mentor future addiction counselors.

Tucker has also received several awards: the 2016 Counselor of the Year Award from the local chapter of the Texas Association of Addiction Professionals; the 2016 Elves Smith Counselor of the Year Award from the State Board of the Texas Association of Addiction Professionals; and the 2017 Lora Roe Memorial Addiction Counselor of the Year Award from NAADAC.

Prior to her current position, Tucker worked with the addicted population as a case manager, as an addiction counselor in a hospital setting, and as a parole officer in the criminal justice system.

In her current position, Tucker is the program supervisor of the day treatment program within an intensive treatment program in adult probation. She oversees eight different treatment modalities that address substance use disorders, mental health issues, and cognitive distortions. Tucker also collaborates with stakeholders in the community to assist probationers with gaining autonomy and becoming pro-social members of their community.

  1. What led you to pursue a degree in counseling compared to other helping professions?

      What initially led me to the helping professions was my academic interest in criminal justice. While pursuing my undergraduate degree at St. John’s University, I completed an internship with the Nassau County Probation Department, where I observed the DWI Unit. It was suggested that if I wanted to pursue a career in probation, I needed some work history in social services, and that the best place to gain it was in foster care. I took the suggestion and obtained a job at the Catholic Home Bureau. This is where my passion was awakened.

I began working with adults caught in the grips of addiction in 1987 as a caseworker in New York City for the Catholic Home Bureau Agency. This was the peak of the crack epidemic. This was also the era when HIV was still an unknown disease. Early on I saw how addiction impacted the lives of people and how their families were being destroyed. Working as a caseworker, I felt I was not doing enough to help and desired to help this population more. I returned to school to acquire my substance abuse training at Molloy College in 1991 as a Credentialed Alcoholism Counselor (CAC). In 1993, I began working in the therapeutic field of addiction as an addiction counselor at Kings County Hospital, in Brooklyn, New York. Here I was able to help those caught in the grips of addiction from various areas of life, not just foster care.

Many of my clients had lengthy histories of abuse, neglect, mental health issues, or involvement in the criminal justice or foster care setting. This encouraged me to want to learn more and pursue my graduate studies. In 2009, I returned to school and obtained my master’s degree in professional counseling, and I returned again in 2016 and obtained my master’s degree in marriage and family therapy. I became a Licensed Professional Counselor in 2018.

  2. Recently, you were awarded the Lora Roe Memorial Addiction Counselor of the Year Award from NAADAC, the Association for Addiction Professionals. What has been your experience working in both mental health and addiction settings? What challenges or barriers have you encountered as a counselor in this area?

As I mentioned before, working with addiction intrigued me. There were so many different facets of addiction. As I began to understand addiction and alcohol and substance use disorders in the DSM, I noticed clients coming into treatment for their addiction had endured long histories of untreated mental health issues. A lot of the referrals from social service agencies were of people who had endured untreated trauma histories. Those mandated to treatment by the criminal justice system many times had untreated and undiagnosed mental health issues.

One of the barriers I encountered early on was not being a dual-licensed counselor and not being able to address those co-occurring disorders because I was only an addiction counselor, licensed only to treat substance use disorders. I knew that in order to be effective, I needed to treat the whole person and not just the addiction. This gave me the drive to pursue higher-level education and licensure in order to treat the whole person. A challenge I recognized was that once a person left to pursue a higher level of education, they would pursue a higher level of pay, which many times is not offered in a substance use disorder treatment setting. I worked many years in a treatment setting, and because I did not have a master’s-level degree or license, my salary did not match my years of experience. This did not deter me from the field. My passion for helping people keeps me in this field. Counseling has given me the ability to help people find their hope and develop coping skills to manage their emotions. However, I know that many of my peers have left the field due to the low level of pay.

  3. In your view, what can be done, or needs to change, to address or overcome these challenges and barriers? Specifically, there has been a push in more recent years for addiction counseling to require graduate-level training. How does this help or hinder the profession and the clients we serve?

I want to start by saying that I am grateful for the formative years I had at Kings County Hospital. Working as an addiction counselor in the trenches gave me my foundation in addiction counseling. This is where I knew I was called to this career. What I think needs to happen is that there need to be more incentives for counselors who are working in addiction, especially those who transition from working as a non–master’s-level counselor to a master’s-level counselor. There is a significant difference in pay when working as a non–master’s-level counselor, as opposed to being a master’s-level counselor working in mental health. While in graduate school, there were not as many conversations about working in addiction as there were about working in mental health once you became a fully licensed counselor. I understand that when you complete graduate school, many students have debt and are eager to become recognizable therapists. Working in the trenches with people is very hard. However, if there were more emphasis on the rewards of working in addiction as opposed to the war stories, there might be more interest from clinicians coming into the field. The rewards of working in addiction are helping the families, not just the identified client, and creating safety in communities. When people get sober, they commit fewer crimes, and this reduces recidivism. It creates a better economy. When we diminish drug use in communities, those sober individuals return to the workforce. I think it would be great if there were more of an emphasis on addiction counseling in graduate-level training. A higher level of course work brings value. I believe this would allow the retention level of staff to be more consistent. Being a master’s-level counselor also allows insurance companies and consumers to invest in treatment that has higher-skilled professionals, and this increases the monetary value of the job, another component that supports staff retention.

  4. You also have a strong background in the field of criminal justice and corrections. In your opinion, how do drug reformation and policy changes to criminalization impact the criminal justice system and addiction counseling? Have you seen any advancements in care and rehabilitation as a result of these changes?

Drug reformation and policy changes for the use of marijuana and the continuing rise of opioids impact the criminal justice system greatly. Drug addiction impacts a myriad of things. It impacts the individual, their family, the community, the judicial system, and health care, just to name a few. The local criminal justice system is designed to protect and serve the community. In the past, professionals in law enforcement and the criminal justice system lacked education and knowledge about addiction and mental health, which caused many problems, especially in minority communities. I do believe today that many law enforcement and criminal justice agencies are improving. They are hiring more professionals with knowledge of addiction and mental health and establishing collaborative relationships. SAMHSA offers a training for the criminal justice community, “How Being Trauma-Informed Improves Criminal Justice System Responses.” Several community supervision and corrections departments are now training their staff to be trauma informed. At the local judicial level, because drug reformation has become an issue, many marijuana laws are being reviewed to determine how they will be managed legally. This continues to be an ongoing concern.

  5. As counseling professionals, we have a duty to promote social justice and advocate on behalf of our clients and profession. What has been your experience in this area and what shifts have you noticed within the profession and socially to illustrate this commitment?

As a Licensed Professional Counselor working in the criminal justice system for the past 17 years, I have had the opportunity on a regular basis to advocate for clients. In addition to my various duties as a program supervisor over the intensive treatment program at Tarrant County CSCD, I collaborate with two specific courts: FAIP (Felony Alcohol Intervention Project) Court and DWI Misdemeanor Court. In both courts, I am the therapist that offers input during court discussions with the judiciary, attorneys, and officers regarding substance use disorders and mental health as it pertains to clients/probationers. There are other courts within Tarrant County CSCD that collaborate with the judiciary, attorneys, officers, and counselors. What is most rewarding is that the judiciary welcomes the voice of the clinicians in the courtroom, and they value our feedback.

For example, there have been several clients who were experiencing a lot of anxiety. As a result, they were using illicit substances to manage their anxiety. During different court conferences, the judge asked me for my thoughts and feedback. We agreed that I would meet with these clients while they were in treatment in our Intensive Outpatient Program. Upon meeting with these clients, it was evident that they needed to meet with their medical doctor or psychiatrist. The clients were agreeable to this. Once the clients were seen by their primary doctor or psychiatrist, we were able to explore the origin of their anxiety and the things that triggered it. I was able to share the clients’ progress with the judge. The judge was very patient with these clients and allowed them to work through some of these issues. Clients were allowed to heal and improve their cognition, which helped them stop using illicit substances and be successful on their probation.

  6. What has been your experience when interacting with national and local organizations, such as ACA, NAADAC, NBCC, etc.? Do you feel supported by professional organizations or leaders, and has this changed throughout your career?

I am honored to say that I am a member of TCA (Texas Counselors Association) and NAADAC (the Association for Addiction Professionals) and its local branch (TAAP, the Texas Association of Addiction Professionals), and I hold certifications from NBCC (the National Board for Certified Counselors). Each of these organizations is diligently working on behalf of the counseling profession and for counselors. The organizations keep us abreast of legislative changes, create policies, and implement trainings that support counselors. I feel these organizations are key elements that help to better our profession.

  7. Throughout your years of practice, what has been your experience when collaborating with other mental health, addiction, and medical professionals? How would you describe coordination of care and treatment options currently as we continue to navigate COVID-19 pandemic–related concerns?

I believe that over the years, mental health, addiction, and medical professionals have become more collaborative. Here in North Texas, there are several collaborations working together to serve the client. Recently, I was selected to be a stakeholder and to serve on the Community Advisory Board for a research project at Texas Christian University (TCU) that is working with our local city hospital, the mental health community, and the criminal justice community to address issues for those within the criminal justice system who have been infected with HIV and have an opioid use disorder. The researchers are looking at creating seamless lines of service for this population. TCU has created a community medical mobile unit to offer services to people in lower socioeconomic communities who are involved in the criminal justice system and those who are receiving mental health services. Because I represent an agency within the criminal justice community that offers therapeutic services, they can offer services to our clients. There are other collaborative services being offered in the community, connecting the local city hospitals and the mental health community and bridging these gaps in services. The increase of teletherapy has allowed services to go on uninterrupted during the height of the COVID-19 pandemic.

  8. For future mental health and/or addiction counselors, what advice would you have regarding their involvement in advancement and future development of the profession?

My advice for future clinicians: once you identify your passion, continue to be a forever learner. Our field is ever evolving. In addictions work, new drugs are always on the rise. We must stay on top of things as changes come about. When I started in this profession in 1987 as a caseworker, the DSM-III was the clinical reference. By the time I became an addiction counselor in 1993, the DSM-III-R was the clinical reference. Here we are in 2023, and the new clinical reference is the DSM-5-TR. Participating with local or national associations allows us to be a part of transitions within and around our profession. Create a voice in our profession that helps to support future clinicians.

 

This concludes the eighth interview for the annual Lifetime Achievement in Counseling Series. TPC is grateful to Joshua D. Smith, PhD, NCC, LCMHC, and Neal D. Gray, PhD, LCMHC-S, for providing this interview. Joshua D. Smith is an assistant professor at the University of Mount Olive. Neal D. Gray is a professor at Lenoir-Rhyne University. Correspondence can be emailed to Joshua Smith at jsmith@umo.edu.

Book Review—College Counseling and Student Development: Theory, Practice, and Campus Collaboration

edited by Derrick A. Paladino, Laura M. Gonzalez, and Joshua C. Watson

College students today face unique complexity in their world, distinct from the experience of any prior generation—such is the premise of College Counseling and Student Development: Theory, Practice, and Campus Collaboration as it undertakes both to resource and orient today’s college professionals. The text leverages the collective expertise of a diverse group of authors to supply a range and depth of information pertinent to the topic.

Several core chapters set the tone by describing the three waves of student development theory. The contributors provide relevant research and critique, helping developing professionals to consider the multiple possible frameworks from which to conceptualize students. The book appears well-suited to an audience of student counselors who can relate its material to personal experience and their observations of peers’ learning processes in the class environment. The text offers a holistic presentation of college counseling—development of the field, theory, neurobiology, ethics, key diagnostic presentations, and treatment models—reinforcing master’s-level readers’ learning from other courses.

College Counseling and Student Development also ushers developing professionals into the myriad expressions of the college counselor role. Chapters detail university to community college distinctions for each topic; track variance in triage and referral procedures; and spotlight a range of campus initiatives, such as suicide prevention outreach and other population-specific needs. Frequent case examples and application questions enable readers to visualize the differentiation of potential professional roles, for instance, academic advisor vs. career counselor. However, the text also engages the audience of administrators of college counseling centers (CCCs) through targeted resources for effective design of center structure and an organized approach to topics such as crisis policy.

As part of the book’s conceptualization of students, it briefly references a family systems view. A few chapters identify families as key contextual influences on students during their transition into college and young adulthood and consider possible engagement of the parental relationship within student affairs. Even so, the majority of the work frames students individualistically rather than systemically through the highlighted theories and models of treatment. This approach may fall short of fully engaging readers who hold a family systems viewpoint.

However, College Counseling and Student Development appears comprehensive in its content as a whole. A strength of the text across topic areas is its diversity-focused lens, such as its examination of the distinct experiences of multiheritage students on campus. Similarly, in their presentation of research, the authors prioritize a social justice perspective. They acknowledge areas of possible bias within historic theories, describing how current models seek to fill the gap, including theories specific to college women’s development as learners.

The editors help readers navigate the extensive information innate to the topic through the text’s visual organization, multi-chapter student case studies, and explicit chapter goals. Early chapters on interlocking departments, roles, and resources within the college system establish clear definitions for various terms, though readers may still occasionally need to refer back to these initial descriptions for clarity. Although the text includes few visuals beyond occasional charts, chapters engage readers through self-reflection exercises, transforming a rote perusal of the book into one that integrates reader experience.

College Counseling and Student Development lends itself well to an initial reading, and then as ongoing reference material for new professionals who may review theory frameworks and treatment models as they engage in the hands-on application of their work. The wealth of practical resources includes templates for development of outreach programs, such as student education on eating disorders, hotline numbers for mental health crisis support, and links to articles and webinars for further professional development. Additionally, the book shares free as well as membership-based resources for a variety of CCC demographics and administrative team needs—from behavioral intervention team training and risk evaluation tools to mental health assessments and models of treatment.

The details the authors provide develop readers’ appreciation for the unique niche of, and resources available to, CCCs. This information may encourage professionals to consider what off-campus, outpatient centers can glean from the advances in this micro-community model. Moreover, the text invites counselors on and off campus to conceptualize the students sitting across from them within the college microcosm, considering their challenges, resources, and cultural experiences, distinct from the average community member.

The book underscores the increasing complexity and frequency of mental health issues for the college population. It subsequently challenges the disparity of priority for college counseling in mental health education programs, which lack adequate orientation of counselors for service to this population. The book’s perspective sets a precedent for counselors, administrators, and educators alike to evaluate their respective roles in responding to this discrepancy. As a whole, College Counseling and Student Development: Theory, Practice, and Campus Collaboration represents a comprehensive text, rich with information and resources, orienting and beckoning developing counselors and administrators to the college counseling milieu.

 

Paladino, D. A., Gonzalez, L. M., & Watson, J. C. (2020). College counseling and student development: Theory, practice, and campus collaboration. American Counseling Association.

Reviewed by: Ellie S. Karle, NCC

Reflections on Power From Feminist Women Counselor Educators

Melissa J. Fickling, Matthew Graden, Jodi L. Tangen

The purpose of this phenomenological study was to explore how feminist-identified counselor educators understand and experience power in counselor education. Thirteen feminist women were interviewed. We utilized a loosely structured interview protocol to elicit participant experiences with the phenomenon of power in the context of counselor education. From these data, we identified an essential theme of analysis of power. Within this theme, we identified five categories: (a) definitions and descriptions of power, (b) higher education context and culture, (c) uses and misuses of power, (d) personal development around power, and (e) considerations of potential backlash. These categories and their subcategories are illustrated through narrative synthesis and participant quotations. Findings point to a pressing need for more rigorous self-reflection among counselor educators and counseling leadership, as well as greater accountability for using power ethically.

Keywords: counselor education, power, phenomenological, feminist, women

     The American Counseling Association (ACA; 2014) defined counseling, in part, as “a professional relationship that empowers” (p. 20). Empowerment is a process that begins with awareness of power dynamics (McWhirter, 1994). Power is widely recognized in counseling’s professional standards, competencies, and best practices (ACA, 2014; Association for Counselor Education and Supervision [ACES], 2011; Council for the Accreditation of Counseling and Related Educational Programs [CACREP], 2015) as something about which counselors, supervisors, counselor educators, and researchers should be aware (Bernard & Goodyear, 2014). However, little is known about how power is perceived by counselor educators who, by necessity, operate in many different professional roles with their students (e.g., teacher, supervisor, mentor).

In public discourse, power may carry different meaning when associated with men or women. According to a Pew Research Center poll (K. Walker et al., 2018) of 4,573 Americans, people are much more likely to use the word “powerful” in a positive way to describe men (67% positive) than women (8% positive). It is possible that these associations are also present among counselors-in-training, professional counselors, and counselor educators.

Dickens and colleagues (2016) found that doctoral students in counselor education are aware of power dynamics and the role of power in their relationships with faculty. Marginalized counselor educators, too, experienced a lack of power in certain academic contexts and noted the salience of their intersecting identities as relevant to the experience of power (Thacker et al., 2021). Thus, faculty members in counselor education may have a large role to play in socializing new professional counselors in awareness of power and positive uses of power, and thus could benefit from openly exploring uses of power in their academic lives.

Feminist Theory and Power in Counseling and Counselor Education
     The concept of power is explored most consistently in feminist literature (Brown, 1994; Miller, 2008). Although power is understood differently in different feminist spaces and disciplinary contexts (Lloyd, 2013), it is prominent, particularly in intersectional feminist work (Davis, 2008). In addition to examining and challenging hegemonic power structures, feminist theory also centers egalitarianism in relationships, attends to privilege and oppression along multiple axes of identity and culture, and promotes engagement in activism for social justice (Evans et al., 2005).

Most research about power in the helping professions to date has been focused on its use in clinical supervision. Green and Dekkers (2010) found discrepancies between supervisors’ and supervisees’ perceptions of power and the degree to which supervisors attend to power in supervision. Similarly, Mangione and colleagues (2011) found another discrepancy in that power was discussed by all the supervisees they interviewed, but it was mentioned by only half of the supervisors. They noted that supervisors tended to minimize the significance of power or express discomfort with the existence of power in supervision.

Whereas most researchers of power and supervision have acknowledged the supervisor’s power, Murphy and Wright (2005) found that both supervisors and supervisees have power in supervision and that when it is used appropriately and positively, power contributed to clinical growth and enhanced the supervisory relationship. Later, in an examination of self-identified feminist multicultural supervisors, Arczynski and Morrow (2017) found that anticipating and managing power was the core organizing category of their participants’ practice. All other emergent categories in their study were different strategies by which supervisors anticipated and managed power, revealing the centrality of power in feminist supervision practice. Given the utility of these findings, it seems important to extend this line of research from clinical supervision to counselor education more broadly because counselor educators can serve as models to students regarding clinical and professional behavior. Thus, understanding the nuances of power could have implications for both pedagogy and clinical practice.

Purpose of the Present Study
     Given the gendered nature of perceptions of power (Rudman & Glick, 2021; K. Walker et al., 2018), and the centrality of power in feminist scholarship (Brown, 1994; Lloyd, 2013; Miller, 2008), we decided to utilize a feminist framework in the design and execution of the present study. Because power appears to be a construct that is widely acknowledged in the helping professions but rarely discussed, we hope to shed light on the meaning and experience of power for counselor educators who identify as feminist. We utilized feminist self-identification as an eligibility criterion with the intention of producing a somewhat homogenous sample of counselor educators who were likely to have thought critically about the construct of power because it figures prominently in feminist theories and models of counseling and pedagogy (Brown, 1994; Lloyd, 2013; Miller, 2008).

Method

We used a descriptive phenomenological methodology to help generate an understanding of feminist faculty members’ lived experiences of power in the context of counselor education (Moustakas, 1994; Padilla-Díaz, 2015). Phenomenological analysis examines the individual experiences of participants and derives from them, via phenomenological reduction, the most meaningful shared elements to paint a portrait of the phenomenon for a group of people (Moustakas, 1994; Starks & Trinidad, 2007). Thus, we share our findings by telling a cohesive narrative derived from the data via themes and subthemes identified by the researchers.

Sample
     After receiving IRB approval, we recruited counselor educators via the CESNET listserv who were full-time faculty members (e.g., visiting, clinical, instructor, tenure-track, tenured) in a graduate-level counseling program. We asked for participants of any gender who self-reported that they integrated a feminist framework into their roles as counselor educators. Thirteen full-time counselor educators who self-identified as feminist agreed to be interviewed on the topic of power. All participants were women. Two feminist-identified men expressed initial interest in participating but did not respond to multiple requests to schedule an interview. The researchers did not systematically collect demographic data, relying instead on voluntary participant self-disclosure of relevant demographics during the interviews. All participants were tenured or tenure-track faculty members. Most were at the assistant professor rank (n = 9), a few were associate professors (n = 3), and one was a full professor who also held various administrative roles during her academic career (e.g., department chair, dean). During the interviews, several participants expressed concern over the high potential for their identification by readers due to their unique identities, locations, and experiences. Thus, participants will be described only in aggregate and only with the demographic identifiers volunteered by them during the interviews. The participants who disclosed their race all shared they were White. Nearly all participants disclosed holding at least one marginalized identity along the axes of age, disability, religion, sexual orientation, or geography.

Procedure
     Once participants gave informed consent, phone interviews were scheduled. After consent to record was obtained, interviewers began the interviews, which lasted between 45 and 75 minutes. We utilized an unstructured interview format to avoid biasing the data collection toward specific domains of counselor education while also aiming to generate the most personal and nuanced understandings of power directly from the participants’ lived experiences (Englander, 2012). As experienced interviewers, we were confident in our ability to actively and meaningfully engage in discourse with participants via the following prompt: “We are interested in understanding power in counselor education. Specifically, please speak to your personal and/or professional development regarding how you think about and use power, and how you see power being used in counselor education.” After the interviews, we all shared the task of transcribing the recordings verbatim, each transcribing several interviews. All potentially identifying information (e.g., names, institutional affiliations) was excluded from the interview transcripts.

Data Analysis
     Data analysis began via horizontalization of two interview transcripts by each author (Moustakas, 1994; Starks & Trinidad, 2007). Next, we began clustering meaning units into potential categories (Moustakas, 1994). This initially revealed 21 potential categories, which we discussed in the first research team meeting. We kept research notes of our meetings, in which we summarized our ongoing data analysis processes (e.g., observations, wonderings, emerging themes). These notes helped us to revisit earlier thinking around thematic clustering and how categories interrelated. The notes did not themselves become raw data from which findings emerged. Through weekly discussions over the course of one year, the primary coders (Melissa Fickling and Matthew Graden) were able to refine the categories through dialoguing until consensus was reached, evidenced by verbal expression of mutual agreement. That is, the primary coders shared power in data analysis and sometimes tabled discussions when consensus was not reached so that each could reflect and rejoin the conversation later. As concepts were refined, early transcripts needed to be re-coded. Our attention was not on the quantification of participants or categories, but on understanding the essence of the experience of power (Englander, 2012; Moustakas, 1994). The themes and subthemes in the findings section below were a fit for all transcripts by the end of data analysis.

Researchers and Trustworthiness
     Fickling and Jodi Tangen are White, cis-hetero women, and at the time of data analysis were pre-tenured counselor educators in their thirties who claimed a feminist approach in their work. Graden was a master’s student and research assistant with scholarly interests in student experiences related to gender in counseling and education. We each possess privileged and marginalized identities, which facilitate certain perspectives and blind spots when it comes to recognizing power. Thus, regular meetings before, during, and after data collection and analysis were crucial to the epoche and phenomenological reduction processes (Moustakas, 1994) in which we shared our assumptions and potential biases. Fickling and Graden met weekly throughout data collection, transcription, and analysis. After the initial research design and data collection, Tangen served primarily as auditor to the coding process by comparing raw data to emergent themes at multiple time points, reviewing the research notes written by Fickling and Graden and contributing to consensus-building dialogues when needed.

Besides remaining cognizant of the strengths and limitations of our individual positionalities with the topic and data, we shared questions and concerns with each other as they arose during data analysis. Relevant to the topic of this study, Fickling served as an administrative supervisor to Graden. This required acknowledgement of power dynamics inherent in that relationship. Graden had been a doctoral student in another discipline prior to this study and thus had firsthand context for much of what was learned about power and its presence in academia. Fickling and Graden’s relationship had not extended into the classroom or clinical supervision, providing a sort of boundary around potential complexities related to any dual relationships. To add additional trustworthiness to the findings below, we utilized thick descriptions to describe the phenomenon of interest while staying close to the data via quotations from participants. Finally, we discuss the impact and importance of the findings by highlighting implications for counselor educators.

Findings

Through the analysis process, we concluded that the essence (Moustakas, 1994)—or core theme—of the experience of power for the participants in this study is engagement in a near constant analysis of power—that of their colleagues, peers, and students, as well as of their own power. Participants analyzed interactions of power within and between various contexts and roles. They shared many examples of uses of power—both observed and personally enacted—which influenced their development, as well as their teaching and supervision styles. Through the interviews, participants shared the following: (a) definitions and descriptions of power, (b) higher education context and culture, (c) uses and misuses of power, (d) personal development around power, and (e) considerations of potential backlash. These five categories comprised the overarching theme of analysis of power and are described below with corresponding subcategories where applicable, identified in italics.

Definitions and Descriptions of Power
     Participants spent much of their time defining and describing just what they meant when they discussed power. For the feminist counselor educators in this study, power is about helping. One participant, when describing power, captured this sentiment well when she said, “I think of the ability to affect change and the ability to have a meaningful impact.” Several participants shared this same idea by talking about power as the ability to have influence. Participants expressed a desire to use power to do good for others rather than to advance their personal aspirations or improve their positions. Use of power for self-promotion was referenced to a far lesser extent than using power to promote justice and equity, and any self-promotional use was generally in response to perceived personal injustice or exploitation. At times, participants described power by what it is not. One participant said, “I don’t see power as a negative. I think it can be used negatively.” Several others shared this sentiment and described power as a responsibility.

In describing power, participants identified feelings of empowerment/disempowerment (Table 1). Disempowerment was described with feeling words that captured a sense of separation and helplessness. Empowerment, on the other hand, was described as feeling energetic and connected. Not only was the language markedly different, but the shifts in vocal expression were also notable (nonverbals were not visible) when participants discussed empowerment versus disempowerment. Disempowerment sounded like defeat (e.g., breathy, monotone, low energy) whereas empowerment sounded like liveliness (e.g., resonant, full intonation, energetic).

Table 1

Empowered and Disempowered Descriptors

Empowered: Authentic, Free, Good, Heard, Congruent, Genuine, Selfless, Hopeful, Confident, Serene, Connected, Grounded, Energized

Disempowered: Isolated, Disenfranchised, Anxious, Separated Identity, Not Accepted, Disheartened, Helpless, Small, Weak, Invisible, Wasting Energy, Tired, Powered Down

 

Participants identified various types of power, including personal, positional, and institutional power. Personal power was seen as the source of the aspirational kinds of power these participants desired for themselves and others. It can exist regardless of positional or institutional power. Positional power provides the ability to influence decisions, and it is earned over time. The last type of power, institutional, is explored more through the next theme labeled higher education context and culture.

Higher Education Context and Culture
     Because the focus of the study was power within counselor educators’ roles, it was impossible for participants not to discuss the context of their work environments. Thus, higher education context and culture became a salient subtheme in our findings. Higher education culture was described as “the way things are done in institutions of higher learning.” Participants referred to written/spoken and unwritten/silent rules, traditions, expectations, norms, and practices of the academic context as barriers to empowerment, though not insurmountable ones. Power was seen as intimately intertwined with difficult departmental relationships as well as the roles of rank and seniority for nearly all participants. Most also acknowledged the influence of broader sociocultural norms (i.e., local, state, national) on higher education in general, noting that institutions themselves are impacted by power dynamics.

One participant who said that untenured professors have much more power than they realize also said that “power in academia comes with rank.” This contradiction highlights the tension inherent in power, at least among those who wish to use it for the “greater good” (as stated by multiple participants) rather than for personal gain, as these participants expressed.

More than one participant described power as a form of currency in higher education. This shared experience of power as currency, either through having it or not having it, demonstrated that to gain power to do good, as described above, one must be willing or able to be seen as acceptable within the system that assigns power. Boldness was seen by participants as something that can happen once power is gained. Among non-tenured participants, this quote captures the common sentiment: “Now, once I get tenure, that can be a different conversation. I think I would feel more emboldened, more safe, if you will, to confront a colleague in that way.” The discussion of context and boldness led to the emergence of a third theme, which we titled uses and misuses of power.

Uses and Misuses of Power
     Participants provided many examples of their perceptions of uses and misuses of power and linked these behaviors to their sense of ethics. Because many of the examples of uses of power were personal, unique, and potentially identifiable, participants asked that they not be shared individually in this manuscript. Ethical uses of power were described as specific ways in which participants remembered power being used for good such as intervention in unfair policies on behalf of students. Ethical uses of power shared the characteristics of being collaborative and aligned with the descriptors of “feeling empowered” (Table 1).

In contrast, misuses of power were described in terms of being unethical. These behaviors existed on a spectrum that ranged from a simple lack of awareness to a full-blown abuse of power on the most harmful end of the continuum. Lack of awareness of power, for these participants, was observed quite frequently among their counselor education colleagues and they noted that people can negatively affect others without realizing it. In some cases, they reported seeing colleagues lack cultural awareness, competence, or an awareness of privilege. Although many colleagues cognitively know about privilege and speak about it, the lack of awareness referred to here is in terms of the behavioral use of privilege to the detriment of those with less privilege. One example would be to call oneself an LGBTQ+ ally without actively demonstrating ally behavior like confronting homophobic or cis-sexist language in class. Moving along the spectrum, misuses of power were described as unfairly advantaging oneself, possibly at the expense or disadvantage of another. Misuses of power may or may not be directly or immediately harmful but still function to concentrate power rather than share it. An example shared was when faculty members insist that students behave in ways that are culturally inconsistent for that student. At the other end of the spectrum, abuses of power are those behaviors that directly cause harm. Even though abuses of power can be unintentional, participants emphasized that intentions matter less than effects. One participant described abuses of power she had observed as “people using power to make others feel small.” For example, a professor or instructor minimizing students’ knowledge or experiences serves to silence students and leads to a decreased likelihood the student shares, causing classmates to lose out on that connection and knowledge.

One participant described a culture of ongoing misuse of power by a colleague: “And then they’re [students] all coming to me crying, you know, surreptitiously coming to me in my office, like, ‘Can I talk to you?’ I’m like, ‘Yeah, shut the door. What’d he do now?’ I’m happy to be a safe person for them, it’s an honor, but this is ridiculous.” The irony of feeling powerless to stop another’s misuses of power was not lost on the participants. One participant expressed that she wished to see more colleagues ask questions about their use of power:

We have to ask the question, “What is the impact? What is happening, what are the patterns?” We have to ask questions about access and participation and equity. . . .
And from my perspective, we have to assume that things are jacked up because we know that any system is a microcosm of the outer world, and the outer world is jacked up. So, we have to ask these questions and understand if there’s an adverse impact. And a lot of time there is on marginalized or minoritized populations. So, what are we going to do about it? It’s all well and good to see it, but what are we doing about it, you know? . . . How are you using your power for good?

Personal Development Around Power
     Participants reflected deeply on their own development of their thinking about and use of power. All participants spoke early in the interviews about their training as counselors and counselor educators. Their early training was often where they first fully realized their feminist orientation and recognized a need for greater feminist multicultural dialogue and action in counseling. Participants were all cognizant of their inherent personal power but still not immune to real and perceived attempts to limit their expression of it. In general, participants felt that over time they became more able and willing to use their power in ethical ways. One participant shared the following about her change in understanding power over time:

I’ve never really been a power-focused person, and so I just don’t know that I saw it around me much before that. Which now I realize is a total construct of my privilege—that I’ve never had to see it. Then I started realizing that “Oh, there’s power all around me.” And people obsessed with power all around me. And then once I saw it, I kind of couldn’t un-see it. I think for a long time I went through a process of disillusionment, and I think I still lapse back into that sometimes where I’ll realize like, a lot of the people in positions of power around me are power-hungry or power-obsessed, and they’re using power in all the wrong ways. And maybe they don’t even have an awareness of it. You know, I don’t think everybody who’s obsessed with power knows that about themselves. It almost seems like a compulsion more than anything. And I think that’s super dangerous.

     Nearly all participants reflected on their experiences of powerlessness as students and how they now attempt to empower students as a result of their experiences. Working to build a sense of safety in the classroom was a major behavior that they endorsed, often because of their own feelings of a lack of safety in learning contexts at both master’s and doctoral levels. Vulnerability and risk-taking on the part of the counselor educator were seen as evidence that efforts to create safety in the classroom were successful. Speaking about this, one participant said:

I think it’s actually very unethical and irresponsible as a counselor educator to throw students in a situation where you expect them to take all these risks and not have worked to create community and environments that are conducive to that.

     Participant feelings toward power varied considerably. One said, “I think overall I feel fairly powerful. But I don’t want a lot of power. I don’t like it.” One participant shared, “I am not shy, I am not afraid to speak and so sometimes maybe I do take up too much space, and there are probably times for whatever reason I don’t take up as much space as I should,” showing both humility and a comfort with her own power. These quotes show the care with which the participants came to think about their own power as they gained it through education, position, and rank. No participants claimed to feel total ease in their relationship with their own power, though most acknowledged that with time, they had become more comfortable with acknowledging and using their power when necessary.

One participant said of her ideal expression of power: “Part of feeling powerful is being able to do what I do reasonably well, not perfect, just reasonably well. But also helping to foster the empowerment of other people is just excellent. That’s where it’s at.” This developmental place with her own power aligns with the aspirational definitions and descriptions of power shared above.

Along with their personal development around power, participants shared how their awareness of privileged and marginalized statuses raised their understanding of power. Gender and age were cited by nearly all participants as being relevant to their personal experiences with power. Namely, participants identified the intersection of their gender and young age as being used as grounds for having their contributions or critiques dismissed by their male colleagues. Older age seemed to afford some participants the confidence and power needed to speak up. One participant said:

 We are talking about a profession that is three-quarters women, and we are not socialized to grab power, to take power. And so, I think all of that sometimes is something we need to be mindful of and kind of keep stretching ourselves to address.

Yet when younger participants recalled finding the courage to address power imbalances with their colleagues, the outcome was almost always denial and continued disempowerment. To this point, one participant asked, “How do we get power to matter to people who are already in the positions where they hold power and aren’t interested in doing any self-examination or critical thinking about the subject?”

Finally, power was described as permeating every part of being an educator. To practice her use of power responsibly, one participant said, “I mean every decision I make has to, at some point, consider what my power is with them [the students].” Related to the educator role, in general, participants shared their personal development with gatekeeping, such as:

I think one of the areas that I often feel in my power is around gatekeeping. And I think that is also an area where power can be grossly abused. But I think it’s just such an important part of what we do. And I think one of the ways that I feel in my power around gatekeeping is because it’s something I don’t do alone. I make a point to consult a lot because I don’t want to misuse power, and I think gatekeeping—and, really, like any use of power I think—is stronger when it’s done with others.

     Again, this quote reflects the definition of power that emerged in this study as ideally being “done with others.” Gatekeeping is where participants seemed to be most aware of power and to initially have had the most anxiety around power, but also the area in which they held the most conviction about the intentional use of power. The potential cost of not responsibly using their power in gatekeeping was to future clients, so participants pushed through their discomfort to ensure competent and ethical client care. However, in many cases, participants had to seriously weigh the pros and cons of asserting their personal or positional power, as described in the next and final category.

Considerations of Potential Backlash
     Participants shared about the energy they spent in weighing the potential backlash to their expressions of power, or their calling out of unethical uses of power. Anticipated backlash often resulted in participants not doing or saying something for fear of “making waves” or being labeled a “troublemaker.” Participants described feeling a need to balance confrontations of perceived misuses of power with their desire not to be seen as combative. Those participants who felt most comfortable confronting problematic behaviors cited an open and respectful workplace and self-efficacy in their ability to influence change effectively. For those who did not describe their workplaces as safe and respectful, fear was a common emotion cited when considering whether to take action to challenge a student or colleague. Many described a lack of support from colleagues when they did speak up. Some described support behind the scenes but an unwillingness of peers to be more vocal and public in their opposition to a perceived wrong. Of this, one participant said, “And so getting those voices . . . to the table seems like an uphill battle. I feel like I’m stuck in middle management, in a way.”

Discussion

For the participants in this study, analysis of power is a process of productive tension and fluidity. Participants acknowledged that power exists and that a power differential in student–teacher and supervisee–supervisor relationships will almost certainly always be present. Power was described as an organizing principle in nearly all contexts—professionally, institutionally, departmentally, in the classroom, in supervision, and in personal relationships. Participants found power to be ever present but rarely named (Miller, 2008). Based on these data, it seems that noticing and naming power and its effects is key to facilitating personal and professional development in ways that are truly grounded in equity, multiculturalism, and social justice. Participants affirmed what is stated in guiding frameworks of counseling (ACA, 2014) and counselor education (ACES, 2011; CACREP, 2015) and went beyond a surface acknowledgement of power to a deeper and ongoing process of analysis, akin to Bernard and Goodyear’s (2014) treatment of power in the supervisory context.

Contemplating, reflecting on, and working with power are worthwhile efforts according to the participants in this study, a view supported by scholarly literature on the topic (Bernard & Goodyear, 2014). Participants’ personal and professional growth seemed to be catalyzed by their awareness of gender and power dynamics. Participants expressed a desire for greater recognition of the role of power and the ways in which it is distributed in our professional contexts. For example, although mentioned by only two participants, dissatisfaction with professional associations (national, regional, and state) was shared. Specifically, there was a desire to see counselor educators with positional power make deliberate and visible efforts to bring greater diversity into professional-level decisions and discussions in permanent, rather than tokenizing, ways.

The ongoing process of self-analysis that counselors and educators purport to practice seemed not to be enough to ensure that faculty will not misuse power. Though gender and age were highly salient aspects of perceptions of power for these women, neither was a clear predictor of their colleagues’ ethical or unethical use of power. Women and/or self-identified feminist counselor educators can and do use power in problematic ways at times. In fact, most participants expressed disappointment in women colleagues and leaders who were unwilling to question power or critically examine their role in status quo power relations. This is consistent with research indicating that as individual power and status are gained, awareness of power can diminish (Keltner, 2016).

These feminist counselor educators described feelings of empowerment as those that enhance connection and collaboration rather than positionality. In fact, participants’ reports of frustration with some uses of power seemed to be linked to people in leadership positions engaging in power-over moves (Miller, 2008). Participants reported spending a significant amount of energy in deciding whether and when to challenge perceived misuses of power. Confronting leaders seemed to be the riskiest possibility, but confronting peers was also a challenge for many participants. The acknowledgement of context emerged in these data, including a recognition that power works within and between multiple socioecological levels (e.g., microsystems, mesosystems, macrosystems; Bronfenbrenner, 1979). The culture of academia and higher education also contributed to unique considerations of power in the present study, which aligns with the findings of Thacker and colleagues (2021), who noted counselor educator experiences of entrenched power norms are resistant to change.

Contextualizing these findings in current literature is difficult given the lack of work on this topic in counselor education. However, our themes are similar to those found in the supervision literature (Arczynski & Morrow, 2017; Bernard & Goodyear, 2014). The participants in our study were acutely aware of power in their relationships; however, they appeared to feel it even more when in a power-down position. This finding is similar to research in the supervision context in which supervisees felt as though power was not being addressed by their supervisors (Green & Dekkers, 2010). Further, just as the supervisors researched in Mangione et al.’s (2011) study attended to power analysis, our participants strived to examine their power with students. The distinction between positive and negative uses of power was consistent with Murphy and Wright (2005). Participants conceptualized power on a continuum, attended to the power inherent in gatekeeping decisions, managed the tension between collaboration and direction, engaged in reflection around use and misuse of power, and sought transparency in discussions around power. More than anything, though, our participants seemed to continually wrestle with the inherent complexity of power, similarly to what Arczynski and Morrow (2017) found, and how to address, manage, and work with it in a respectful, ethical manner. As opposed to these studies, though, our research addresses a gap between the profession’s acknowledgement of power as a phenomenon and actual lived experiences of power by counselor educators who claim a feminist lens in their work.

Implications
     The implications of our findings are relevant across multiple roles (e.g., faculty, administration, supervision) and levels (e.g., institution, department, program) in counselor education. Power analysis at each level and within each role in which counselor educators find themselves could help uncover issues of power and its uses, both ethical and problematic. The considerable effort that participants described in weighing whether to challenge perceived misuses of power indicates the level of work needed to make power emotionally and professionally safe to address. Thus, those who hold positions of power, or who have earned power through tenure and seniority, are potentially better situated to invite discussions of power in relatively safe settings such as program meetings or one-on-one conversations with colleagues. Further, at each hierarchical level, individuals can engage in critical self-reflection while groups elicit external, independent feedback from people trained to observe and name unjust power structures. Counselor educators should not assume that, because they identify as feminist, social justice–oriented, or egalitarian, their professional behavior always reflects their aspirations. It is not enough to claim an identity; one must work to let one’s actions and words demonstrate one’s commitment to inclusion through sensitivity to and awareness of power.

Additionally, we encourage counselor educators to ask for feedback from people who will challenge them because self-identification of uses or misuses of power is likely not sufficient to create systemic or even individual change. It is important to acknowledge that power is differentially assigned but can be used well in a culture of collaboration and support. Just as we ask our students to be honest and compassionately critical of their own development, as individuals and as a profession, it seems we could be doing more to foster empowerment through support, collaboration, and honest feedback.

Limitations and Future Directions
     Although not all participants disclosed all of their demographic identifiers, one limitation of the current study is the relative homogeneity of the sample across racial and gender lines. The predominance of White women in the present study is of concern, and there are a few possible reasons for it. One is that White women are generally overrepresented in the counseling profession. Baggerly and colleagues (2017) found that women comprised 85% of the student body in CACREP-accredited programs but only 60% of the faculty. These numbers indicate both the high representation of women seeking counseling degrees and the degree to which men approach, but do not reach, parity with women in holding faculty positions. Further, in Baggerly et al.’s study, about 88% of faculty members in CACREP-accredited programs were White.

Another potential reason for the apparent racial homogeneity of the present sample is that people of color may not identify with a feminist orientation because of the racist history of feminist movements and so would not have volunteered to participate. Thus, findings must be considered in this context. Future researchers should be vocally inclusive of Black feminist thought (Collins, 2000) and Womanism (A. Walker, 1983) in their research design and recruitment processes to communicate to potential participants an awareness of the intersections of race and gender. Further, future research should explicitly invite those underrepresented here, namely women of color and men faculty members, to share their experiences with and conceptualizations of power. This will be extremely important as counselor educators work to continue to diversify the profession of counseling in ways that are affirming and supportive for all.

Another limitation is that participants may have given socially desirable responses when discussing power and their own behavior. Indeed, the participants identified a lack of self-awareness as common among those who misused power. At the same time, however, the participants in this study readily shared their own missteps, lending credibility to their self-assessments. Future research that asks participants to track their interactions with power in real time, via journals or repeated quantitative measures, could be useful in eliciting more embodied experiences of power as they arise. Likewise, examining students’ experiences of power in their interactions with counselor educators would be useful, particularly as those experiences relate to teaching or gatekeeping, because some research already examines power in the context of clinical supervision (Arczynski & Morrow, 2017; Green & Dekkers, 2010; Mangione et al., 2011; Murphy & Wright, 2005).

We initially embarked upon this study with a simple inquiry, wondering about others’ invisible experiences around what felt like a formidable topic. More than anything, our discussions with our participants seemed to indicate a critical need for further exploration of power across hierarchical levels and institutions. We are grateful for our participants’ willingness to share their stories, and we hope that this is just the beginning of a greater dialogue.

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

References

American Counseling Association. (2014). ACA code of ethics.

Arczynski, A. V., & Morrow, S. L. (2017). The complexities of power in feminist multicultural psychotherapy supervision. Journal of Counseling Psychology, 64(2), 192–205. https://doi.org/10.1037/cou0000179

Association for Counselor Education and Supervision Taskforce on Best Practices in Clinical Supervision. (2011, April). Best practices in clinical supervision. https://acesonline.net/wp-content/uploads/2018/11/ACES-Best-Practices-in-Clinical-Supervision-2011.pdf

Baggerly, J., Tan, T. X., Pichotta, D., & Warner, A. (2017). Race, ethnicity, and gender of faculty members in APA- and CACREP-accredited programs: Changes over five decades. Journal of Multicultural Counseling and Development, 45(4), 292–303. https://doi.org/10.1002/jmcd.12079

Bernard, J. M., & Goodyear, R. K. (2014). Fundamentals of clinical supervision (5th ed.). Pearson.

Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Harvard University Press.

Brown, L. S. (1994). Subversive dialogues: Theory in feminist therapy. Basic Books.

Collins, P. H. (2000). Black feminist thought: Knowledge, consciousness, and the politics of empowerment (2nd ed.). Routledge.

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards. https://www.cacrep.org/for-programs/2016-cacrep-standards

Davis, K. (2008). Intersectionality as buzzword: A sociology of science perspective on what makes a feminist theory successful. Feminist Theory, 9(1), 67–85. https://doi.org/10.1177/1464700108086364

Dickens, K. N., Ebrahim, C. H., & Herlihy, B. (2016). Counselor education doctoral students’ experiences with multiple roles and relationships. Counselor Education and Supervision, 55(4), 234–249. https://doi.org/10.1002/ceas.12051

Englander, M. (2012). The interview: Data collection in descriptive phenomenological human scientific research. Journal of Phenomenological Psychology, 43(1), 13–35. https://doi.org/10.1163/156916212X632943

Evans, K. M., Kincade, E. A., Marbley, A. F., & Seem, S. R. (2005). Feminism and feminist therapy: Lessons from the past and hopes for the future. Journal of Counseling & Development, 83(3), 269–277. https://doi.org/10.1002/j.1556-6678.2005.tb00342.x

Green, M. S., & Dekkers, T. D. (2010). Attending to power and diversity in supervision: An exploration of supervisee learning outcomes and satisfaction with supervision. Journal of Feminist Family Therapy, 22(4), 293–312. https://doi.org/10.1080/08952833.2010.528703

Keltner, D. (2016). The power paradox: How we gain and lose influence. Penguin.

Lloyd, M. (2013). Power, politics, domination, and oppression. In G. Waylen, K. Celis, J. Kantola, & S. Laurel Weldon (Eds.), The Oxford handbook of gender and politics (pp. 111–134). Oxford University Press.

Mangione, L., Mears, G., Vincent, W., & Hawes, S. (2011). The supervisory relationship when women supervise women: An exploratory study of power, reflexivity, collaboration, and authenticity. The Clinical Supervisor, 30(2), 141–171. https://doi.org/10.1080/07325223.2011.604272

McWhirter, E. H. (1994). Counseling for empowerment. American Counseling Association.

Miller, J. B. (2008). Telling the truth about power. Women & Therapy, 31(2–4), 145–161. https://doi.org/10.1080/02703140802146282

Moustakas, C. (1994). Phenomenological research methods. SAGE.

Murphy, M. J., & Wright, D. W. (2005). Supervisees’ perspectives of power use in supervision. Journal of Marital and Family Therapy, 31(3), 283–295. https://doi.org/10.1111/j.1752-0606.2005.tb01569.x

Padilla-Díaz, M. (2015). Phenomenology in educational qualitative research: Philosophy as science or philosophical science? International Journal of Educational Excellence, 1(2), 101–110. https://documento.uagm.edu/cupey/ijee/ijee_padilla_diaz_1_2_101-110.pdf

Rudman, L. A., & Glick, P. (2021). The social psychology of gender: How power and intimacy shape gender relations (2nd ed.). Guilford.

Starks, H., & Trinidad, S. B. (2007). Choose your method: A comparison of phenomenology, discourse analysis, and grounded theory. Qualitative Health Research, 17(10), 1372–1380. https://doi.org/10.1177/1049732307307031

Thacker, N. E., Barrio Minton, C. A., & Riley, K. B. (2021). Marginalized counselor educators’ experiences negotiating identity: A narrative inquiry. Counselor Education and Supervision, 60(2), 94–111. https://doi.org/10.1002/ceas.12198

Walker, A. (1983). In search of our mothers’ gardens: Womanist prose. Harcourt Brace.

Walker, K., Bialik, K., & van Kessel, P. (2018). Strong men, caring women: How Americans describe what society values (and doesn’t) in each gender. https://www.pewsocialtrends.org/interactives/strong-men-caring-women

Melissa J. Fickling, PhD, ACS, BC-TMH, LCPC, is an associate professor at Northern Illinois University. Matthew Graden, MSEd, is a professional school counselor. Jodi L. Tangen is an associate professor at North Dakota State University. Correspondence may be addressed to Melissa J. Fickling, 1425 W. Lincoln Hwy, Gabel 200, DeKalb, IL 60115, mfickling@niu.edu.

Evaluating the Impact of Solution-Focused Brief Therapy on Hope and Clinical Symptoms With Latine Clients

Krystle Himmelberger, James Ikonomopoulos, Javier Cavazos Vela

We implemented a single-case research design (SCRD) with a small sample (N = 2) to assess the effectiveness of solution-focused brief therapy (SFBT) for Latine clients experiencing mental health concerns. Analysis of participants’ scores on the Dispositional Hope Scale (DHS) and Outcome Questionnaire (OQ-45.2) using split-middle line of progress visual trend analysis, statistical process control charting, percentage of non-overlapping data points procedure, percent improvement, and Tau-U yielded treatment effects indicating that SFBT may be effective for improving hope and mental health symptoms for Latine clients. Based on these findings, we discuss implications for counselor educators, counselors-in-training, and practitioners, which include integrating SFBT principles into the counselor education curriculum, teaching counselors-in-training how to use SCRDs to evaluate counseling effectiveness, and using the DHS and OQ-45.2 to measure hope and clinical symptoms.

Keywords: solution-focused brief therapy, single-case research design, hope, counselor education, clinical symptoms

Solution-focused brief therapy (SFBT) is a strength-based, evidence-based intervention that helps clients focus on personal strengths, identify exceptions to problems, and highlight small successes (Berg, 1994; Gonzalez Suitt et al., 2016; Schmit et al., 2016). Schmit et al. (2016) conducted a meta-analysis of SFBT for treating symptoms of internalizing disorders and found that SFBT might be effective in creating short-term changes in clients’ functioning. Other researchers (e.g., Gonzalez Suitt et al., 2016; Novella et al., 2020) have also found that SFBT can be helpful with clients from various cultural backgrounds and with different presenting symptoms, such as anxiety. Yet there is scant research evaluating the efficacy of SFBT on subjective well-being with Latine (a gender-neutral term that is more consistent with Spanish language and grammar than Latinx) populations. Likewise, little research has investigated the effectiveness of counseling delivered by counselors-in-training (CITs) to culturally diverse clients at community counseling clinics. Although such services are relatively low-cost, the supervision, training, and feedback that CITs receive position them to provide community clients with effective counseling. However, only a few researchers (e.g., Schuermann et al., 2018) have explored the efficacy of counseling services within a community counseling training clinic. Therefore, empirical research is needed regarding the efficacy of SFBT with Latine populations in a counseling training clinic at a Hispanic-Serving Institution.

The Latine population is a fast-growing group in the United States and makes up approximately 19% of the U.S. population (U.S. Census Bureau, 2020). Despite this growth, members of this culturally diverse population continue to face individual, interpersonal, and institutional challenges (Ponciano et al., 2020; Vela, Lu, et al., 2014). Because Latine individuals experience discrimination and negative environments (Cheng & Mallinckrodt, 2015; Ponciano et al., 2020; Ramos et al., 2021), perceive a lack of support from counselors and teachers in K–12 school environments (A. G. Cavazos, 2009; Vela-Gude et al., 2009), and experience microaggressions (Sanchez, 2019), they are likely to experience greater mental health challenges. Researchers have identified numerous internalizing and externalizing symptoms that characterize Latine individuals’ mental health experiences, likely putting them at greater risk for mental health impairment and poor psychological functioning (Cheng et al., 2016). Researchers have also found that Latine youth had similar or higher prevalence rates of internalizing disorders (e.g., anxiety and depression) when compared with their White counterparts (Merikangas et al., 2010; Ramos et al., 2021). Given that Latine individuals might be at greater risk for psychopathology and that their mental health needs often go unaddressed because many do not seek mental health services (Mendoza et al., 2015; Sáenz et al., 2013), further evaluation of the effectiveness of counseling practices for this population is necessary.

Fundamental Principles of Solution-Focused Brief Therapy
     Developed from the clinical practice of Steve de Shazer and Insoo Kim Berg, SFBT is a future-focused, goal-directed approach that centers on searching for solutions and is grounded in the belief that clients have the knowledge and resources to resolve their problems (Kim, 2008). The counselor’s therapeutic task is to help clients imagine how they would like things to be different and what it will take to facilitate small changes. Counselors take an active role, asking questions that help clients look at their situation from different perspectives and using techniques to identify where solutions are already occurring (de Shazer, 1991; Proudlock & Wellman, 2011).

In SFBT, counselors amplify positive constructs and solutions by using specific strategies and techniques to build on positive factors (Tambling, 2012). Common techniques include the miracle question, scaling questions, exceptions, experiments, and compliments, which are designed to help clients identify personal strengths and cultivate what works (de Shazer, 1991; Proudlock & Wellman, 2011). We agree with Vela, Lerma, et al. (2014) that counselors can use postmodern and strength-based theories (e.g., SFBT) to develop positive psychology constructs such as hope, positive emotions, and subjective well-being. SFBT might be useful to help Latine clients identify strengths, build on what works, and reconstruct a positive future outcome.

Several researchers have indicated the efficacy of SFBT for treating various issues with different populations (Bavelas et al., 2013; Kim, 2008). Schmit et al. (2016) conducted a meta-analysis of 26 studies examining the effectiveness of SFBT for treating symptoms of depression and anxiety. They found that SFBT resulted in moderately successful treatment; however, treatment effects for adults were 5 times larger than those for youth and adolescents. One possible explanation is that SFBT may require a level of maturity from clients to integrate and understand its concepts and techniques. The researchers also concluded that SFBT may be effective in producing short-term changes that lead to further gains in symptom relief as well as psychological functioning (Schmit et al., 2016).

Brogan et al. (2020) commented that “there are limited studies that demonstrate the effectiveness of this method with the Latine . . . population” (p. 3). However, we postulate that SFBT principles are compatible with Latine cultural and family characteristics (Lerma et al., 2015; Oliver et al., 2011). Several features make SFBT an appropriate fit for working with the Latine population. For instance, researchers suggest that understanding family dynamics, or familismo, is important when evaluating mental health and overall well-being with the Latine population (Ayón et al., 2010). Familismo refers to the strong ties to immediate and extended family that characterize Latine culture.

In a study investigating Latine families, Priest and Denton (2012) found that family cohesion and family discord were associated with anxiety. Calzada et al. (2013) also highlighted that although family support can positively impact mental health, family can also become a source of conflict and stress, which might result in poor mental health. By using SFBT principles, counselors can help Latine clients identify how familismo is a source of strength through sense of loyalty and cooperation among family members (Oliver et al., 2011).

Another emphasis with SFBT that aligns with the Latine culture is the focus on personal and family resiliency. Because Latine individuals must navigate individual, interpersonal, and institutional challenges (Vela et al., 2015), they have natural resilience and coping skills that align with an SFBT approach. Counselors can use exceptions, scaling questions, and compliments to help Latine individuals discover their inherent resilience and continue to persevere through personal adversity.

Constructs: Hope and Clinical Symptoms
     Consistent with a dual-factor model of mental health (Suldo & Shaffer, 2008), we focused on two outcomes: hope and clinical symptoms. First, hope, which has been associated with subjective well-being among Latine populations (Vela, Lu, et al., 2014), refers to a pattern of thinking regarding goals (Snyder et al., 2002). Snyder et al. (1991) proposed Hope Theory with pathways thinking and agency thinking. Pathways thinking refers to individuals’ plans to pursue desired objectives (Feldman & Dreher, 2012), while agency thinking refers to perceptions of ability to make progress toward goals (Snyder et al., 1999). Researchers found that hope was positively related to meaning in life, grit, and subjective happiness among Latine populations (e.g., Vela, Lerma, et al., 2014; Vela et al., 2015). Other researchers (e.g., Vela, Ikonomopoulos, et al., 2016) have explored the impact of counseling interventions on hope among Latine adolescents and survivors of intimate partner violence. Given the association between hope and other positive developmental outcomes among Latine populations, examining this construct as an outcome in clinical mental health counseling services is important.

In addition to hope as an indicator of subjective well-being, we used the Outcome Questionnaire (OQ-45.2; Lambert et al., 1996) to measure clinical symptoms in the current study for several reasons, including its strong psychometric properties, its use in the counseling training clinic where this study took place, and its use in other studies that evaluate the efficacy of counseling or psychotherapy and show evidence based on relation to other variables such as depression and clinical symptoms (Ekroll & Rønnestad, 2017; Ikonomopoulos et al., 2017; Soares et al., 2018). The OQ-45.2 measures three areas that are central to individual psychological functioning: Symptom Distress, Interpersonal Relations, and Social Role Performance.

Purpose of Study and Rationale
     The purpose of this study was to evaluate the efficacy of SFBT for increasing hope and decreasing clinical symptoms among Latine clients. We implemented an SCRD (Lenz et al., 2012) to identify and explore changes in hope and clinical symptoms as a result of participation in SFBT. We evaluated the following research question: To what extent is SFBT effective for increasing hope and decreasing clinical symptoms among Latine clients who receive services at a community counseling clinic?

Methodology

We implemented a small-series (N = 2) AB SCRD with Latine clients admitted into treatment at an outpatient community counseling clinic to evaluate the treatment effect associated with SFBT for increasing hope and reducing clinical symptoms. The rationale for using an SCRD was to explore the impact of an intervention that might help Latine clients at a community counseling training clinic. We used criterion sampling to recruit participants who (a) sought counseling services at a community counseling clinic, (b) had internalizing symptoms related to anxiety and depression, and (c) worked with a CIT who was supervised by faculty in a clinical mental health counseling program.

Participants
     Participants in this study were two adults admitted into treatment at an outpatient community counseling clinic in the Southern region of the United States. Both participants identified as Hispanic; one identified as a female and the other identified as a male. During informed consent, we explained to participants that they would be assigned pseudonyms to protect their identity. The participants consented to both treatment and inclusion in the research study.

The two participants were selected because of their presenting internalizing symptoms (e.g., depression, anxiety) and their fit for SFBT principles. Because we wanted to increase hope among these Latine clients, we felt that SFBT was an appropriate approach. The fundamental principles of SFBT align with attempts to facilitate hope among clients with various symptoms because the approach helps clients view mental health challenges as opportunities to cultivate strengths, explore solutions, and identify new skills (Bannik, 2008; Joubert & Guse, 2021). SFBT practitioners also posit that clients can recreate their future, cultivate resilience, and construct solutions, which aligns well with tenets of the Latine culture (J. Cavazos et al., 2010). In the first session prior to treatment, both clients indicated that they believed they were in control of their future mental health and that they could construct solutions. We also informed them that SFBT focuses on future solutions rather than on problems and the past. Because these clients indicated a willingness to explore their future through co-constructing solutions, they were a good fit for SFBT principles in counseling.

Participant 1
     “Mary” was a 31-year-old Latine female with a history of receiving student mental health services at a university counseling clinic. Mary sought individual counseling services because of a recent separation from the father of her three children, who was emotionally abusive. Anxiety associated with this separation was compounded by traumatic experiences from 5 years prior. Mary stated that expectations within her Latine culture heightened her anxiety as she adjusted to her new role as a single mother. Mary’s therapeutic goals and focus of treatment were to reduce clinical symptoms of anxiety as well as to improve self-identity and self-esteem.

Participant 2
     “Joel” was a 20-year-old Latine male with a history of receiving mental health services for symptoms of depression. Joel’s therapeutic goals and focus of treatment were to reduce clinical symptoms of anxiety and associated anger as well as improve self-esteem. Joel reported being a victim of domestic violence and child abuse. Additionally, Joel expressed distress with revealing his sexual identity because of patriarchal roles in the Latine culture that may result in rejection.

Measurements
Outcome Questionnaire (OQ-45.2)
     The OQ-45.2 is a 45-item self-report outcome questionnaire (Lambert et al., 1996) for adults 18 years of age and older. Each item is rated on a 5-point Likert scale with responses ranging from never (0) to almost always (4). We used the OQ-45.2 total score, which is calculated by summing the three subscale scores, with a possible total ranging from 0–180. Higher scores reflect more severe distress and impairment. Sample items include “I feel worthless” and “I have trouble getting along with friends and close acquaintances.” The assessment was designed to include items relevant to three domains central to mental health: Symptom Distress, Interpersonal Relations, and Social Role Performance (Lambert et al., 1996).

Researchers have examined the structural validity and reliability of the OQ-45.2. Coco et al. (2008) used confirmatory factor analysis to test various models of its factorial structure and found support for a four-factor, bi-level model, meaning that each item relates to a subscale as well as to an overall maladjustment score. Amble et al. (2014) also examined psychometric properties using confirmatory factor analysis, concluding that “the total score of the OQ-45 is a reliable and valid measure for assessing therapy progress” (p. 511). Their findings are consistent with those of Boswell et al. (2013), who also found support for the validity of the total OQ-45 score. There is also evidence of validity based on relations to other clinical outcomes measured by the General Severity Index from the Symptom Checklist 90-Revised, the Beck Depression Inventory, and the Social Adjustment Scale (Lambert et al., 1996). Additionally, previous psychometric evaluations have provided evidence of reliability through indices such as Cronbach’s alpha (Ikonomopoulos et al., 2017; Kadera et al., 1996; Umphress et al., 1997). Internal consistency estimates range from .71 to .92 (Ikonomopoulos et al., 2017; Lambert et al., 1996).

Hope
     The Dispositional Hope Scale (DHS; Snyder et al., 1991) is a self-report inventory to measure participants’ attitudes toward goals and objectives. Participants responded to eight statements evaluated on an 8-point Likert scale ranging from definitely false (1) to definitely true (8). We used the total Hope score, which was obtained by summing scores for both Agency and Pathways subscales. Total scores range from 8–64, with higher scores indicating greater levels of hope. Sample response items include “I can think of many ways to get the things in life that are important to me” and “I can think of many ways to get out of a jam.”

Researchers have also examined the structural validity and reliability of the DHS. Galiana et al. (2015) used confirmatory factor analysis and found that a one-factor structure was the best fit. There is evidence of validity with other theoretically relevant constructs such as meaning in life (Vela et al., 2017), as well as evidence of concurrent and discriminant validity with measures of self-esteem, state hope, and state positive and negative affect (Snyder et al., 1996). There is also evidence of factorial invariance (Nel & Boshoff, 2014), suggesting that the factor structure is similar across gender and racial/ethnic groups. Additionally, there is evidence of reliability (e.g., internal consistency), with Cronbach’s alpha coefficients ranging from .85 to .86 (Snyder et al., 2002; Vela et al., 2015).
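
To make the scoring of the two instruments concrete, the following minimal Python sketch sums item or subscale responses into the total scores described above. The item values shown are hypothetical, and the function names are ours rather than part of either instrument's published scoring materials.

def dhs_total(item_scores):
    """Total Dispositional Hope Scale score: the sum of the eight scored items
    (each rated 1-8), yielding a possible range of 8-64."""
    assert len(item_scores) == 8 and all(1 <= s <= 8 for s in item_scores)
    return sum(item_scores)

def oq45_total(subscale_scores):
    """OQ-45.2 total score: the sum of the Symptom Distress, Interpersonal
    Relations, and Social Role Performance subscale scores (range 0-180)."""
    assert len(subscale_scores) == 3
    return sum(subscale_scores)

# Hypothetical responses, for illustration only:
print(dhs_total([7, 6, 8, 7, 6, 7, 8, 7]))  # 56
print(oq45_total([25, 12, 8]))              # 45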

Study Setting
     During the present study, each participant was involved in individual counseling at a community counseling clinic. The facility, located in the Southern region of the United States, provides free counseling services to community members. Individual and group sessions are free and last approximately 45 to 50 minutes. The community counseling clinic offers preventive and early treatment for developmental, emotional, and interpersonal difficulties for community members. CITs at the community counseling clinic are graduate counseling students enrolled in practicum or internship.

Interventionists
     Krystle Himmelberger, a CIT in a clinical mental health counseling program, served as the counselor in the current study. She adapted strength-based interventions designed to facilitate positive feelings by helping clients set goals, focus on the future, and find solutions rather than problems. Prior to the study, she selected and designed interventions and activities according to specific guidelines from SFBT manuals and related sources (Buchholz Holland, 2013; de Shazer et al., 2007; Trepper et al., 2010).

James Ikonomopoulos and Javier Cavazos Vela were the faculty counseling supervisors who monitored sessions and provided weekly supervision to maintain fidelity of the SFBT interventions. Bavelas et al. (2013) suggested that live supervision may provide a second set of clinical eyes to help CITs. Himmelberger received weekly supervision to ensure procedural and treatment adherence (Liu et al., 2020). Furthermore, videotaped sessions and transcriptions allowed her to communicate with her supervisors between sessions. These measures were used to enhance treatment fidelity by focusing on quality and competency.

SFBT Principles and Intervention
     Participants received six to nine sessions of individual SFBT using the descriptions of techniques and activities in the following resources: More Than Miracles: The State of the Art of Solution-Focused Brief Therapy (de Shazer et al., 2007), Solution-Focused Therapy Treatment Manual for Working With Individuals (Trepper et al., 2010), and “The Lifeline Activity With a ‘Solution-Focused Twist’” (Buchholz Holland, 2013). We used the following SFBT principles to guide the intervention: a focus on specific topics, a positive and co-constructed therapeutic relationship, and questioning techniques (Trepper et al., 2010). First, Himmelberger focused on specific topics such as the preferred future, strengths, confidence in finding solutions, and exceptions. She used future-specific and solution-focused language in each session to help clients focus on their preferred futures. Second, she developed a positive therapeutic relationship with clients through shared trust and co-construction of counseling experiences. She was positive and helpful, and she helped instill optimism and hope in her clients. A positive therapeutic relationship was evidenced by her report as well as by live supervision and reviews of session recordings. Finally, Himmelberger used questioning techniques that focused on clients’ strengths, exceptions, and coping skills. These questions helped clients focus on progress toward their preferred future and on future-oriented solutions.

The techniques she used included looking for previous solutions, exceptions, the miracle question, scaling questions, compliments, future-oriented questions, and “so what is better” questions. Himmelberger used looking for previous solutions to help clients identify strategies they had previously used to cope with the problem. Based on Himmelberger’s report in supervision sessions, both clients commented that they were surprised they had been successful in the past when the problem did not exist. She also used exceptions to help clients identify what was different when the problem did not exist. Additionally, she used present- and future-oriented questions to help clients focus on future solutions. This was an important technique because clients were accustomed to focusing on the problem rather than on solutions. When clients provided updates on their progress toward their goals, Himmelberger used compliments to validate what clients were doing well. Using compliments helped cultivate a positive therapeutic relationship with these clients.

Finally, with the miracle question, she asked clients to provide details about their preferred future and what that would look like. She followed up with a question about constructing solutions regarding what work it would take to make that preferred future happen. Then in each session, she conducted progress checks toward that preferred future by asking scaling questions (On a scale from 1–10, where are you now with progress toward your preferred future?) and questions about “what is better” (What is better now when compared to last week?). She complimented clients’ progress toward that preferred future.

Procedures
     We used an AB SCRD to determine the effectiveness of the SFBT treatment program (Lundervold & Belwood, 2000; Sharpley, 2007), using scores on the DHS and the OQ-45.2 total scale as outcome measures (Lambert et al., 1996). The two participants assigned to Himmelberger did not begin counseling until they consented to both treatment and the research study; in other words, they did not receive counseling services prior to participation in this study. The baseline phase of data collection was completed after 4 weeks, during which participants did not receive counseling services.

The treatment phase began after the fourth baseline measure. At the conclusion of each individual session, participants completed the DHS and OQ-45.2. Himmelberger collected and stored the measures in each participant’s folder in a locked cabinet in the clinic. After the 12th week of data collection, the treatment phase of data collection was completed, at which point the SFBT intervention was withdrawn.

A percentage of non-overlapping data (PND) procedure was used to analyze the quantitative data (Scruggs et al., 1987). Change over time is represented graphically with a split-middle line of progress visual trend analysis showing data points from each phase (Lenz, 2015). Statistical process control charting was used to determine whether characteristics of the treatment-phase data were beyond the realm of random occurrence with 99% confidence (Lenz, 2015). Finally, Tau-U was calculated as an estimate of effect size to complement the PND analysis (Lenz, 2015; Sharpley, 2007).
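
As one illustration of the trend analysis named above, the following Python sketch computes a split-middle line of progress for a single phase. It assumes equally spaced observations and a common version of the procedure in which the median session number and median score in each half of the phase define the line; the function and the example values are ours and are not drawn from Lenz (2015).

from statistics import median

def split_middle_line(scores):
    """Return the slope and intercept of the split-middle trend line for one
    phase of observations (observations indexed 1, 2, ..., n)."""
    n = len(scores)
    first_half = scores[: n // 2]
    second_half = scores[(n + 1) // 2 :]
    x1 = median(range(1, len(first_half) + 1))
    y1 = median(first_half)
    x2 = median(range(n - len(second_half) + 1, n + 1))
    y2 = median(second_half)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

# Hypothetical baseline scores trending downward (contra-therapeutic for Hope):
print(split_middle_line([60, 58, 55, 51]))  # negative slope, e.g., (-3.0, 63.5)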

Data Collection and Analysis
     We implemented the PND procedure (Scruggs et al., 1987) to analyze scores on the Hope and OQ-45.2 scales across phases of treatment. The PND procedure yields the proportion of treatment-phase data points that do not overlap with the most conservative (i.e., most extreme in the therapeutic direction) data point in the baseline phase. PND values are expressed in decimal format ranging from 0 to 1, with higher values representing greater treatment effects (Lenz, 2013).

After considering the percentage of data exceeding the median procedure (Ma, 2006), we selected the PND because it is considered a robust method of calculating treatment effectiveness (Lenz, 2013). This metric is conceptualized as the percentage of treatment-phase data that exceeds a single noteworthy point within the baseline phase. Because we aimed for an increase in DHS scores, the highest data point in the baseline phase was used; because we aimed for a decrease in OQ-45.2 total scale scores, the lowest baseline data point was used (Lenz, 2013). To calculate the PND statistic, the data points in the treatment phase that fall on the therapeutic side of that baseline point are counted and then divided by the total number of points in the treatment phase (Ikonomopoulos et al., 2016).
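
The calculation just described can be expressed in a few lines of Python. This is a minimal sketch using hypothetical scores; the function name and data are ours and are intended only to illustrate the arithmetic.

def pnd(baseline, treatment, goal="increase"):
    """Percentage of non-overlapping data: the proportion of treatment-phase
    points that fall on the therapeutic side of the most extreme baseline point."""
    if goal == "increase":
        threshold = max(baseline)
        nonoverlapping = [s for s in treatment if s > threshold]
    else:  # goal == "decrease"
        threshold = min(baseline)
        nonoverlapping = [s for s in treatment if s < threshold]
    return len(nonoverlapping) / len(treatment)

# Hypothetical DHS scores: 5 of 6 treatment points exceed the highest baseline point.
print(round(pnd([56, 58, 54, 53], [59, 60, 57, 61, 60, 59], goal="increase"), 2))  # 0.83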

Estimates of Effect Size and Clinical Significance
     PND values are typically interpreted using the estimation of treatment effect provided by Scruggs and Mastropieri (1998), wherein values of .90 and greater indicate very effective treatments, values from .70 to .89 represent moderate effectiveness, values from .50 to .69 are debatably effective, and values less than .50 are regarded as not effective (Ikonomopoulos et al., 2015, 2016). Tau-U values are typically interpreted using the estimation of treatment effect provided by Vannest and Ninci (2015), wherein Tau-U magnitudes can be interpreted as small (≤ .20), moderate (.20–.60), large (.60–.80), and very large (≥ .80). These procedures were completed for each participant’s scores on the Hope and OQ-45.2 scales.
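
For convenience, the interpretive bins above can be encoded directly. The short Python sketch below maps a PND or Tau-U value to the labels cited from Scruggs and Mastropieri (1998) and Vannest and Ninci (2015); where the published bins share a boundary, the cut points used here are our assumption.

def interpret_pnd(value):
    """Label a PND value using the Scruggs and Mastropieri (1998) bins."""
    if value >= 0.90:
        return "very effective"
    if value >= 0.70:
        return "moderately effective"
    if value >= 0.50:
        return "debatably effective"
    return "not effective"

def interpret_tau_u(value):
    """Label the magnitude of a Tau-U value using the Vannest and Ninci (2015) bins."""
    magnitude = abs(value)
    if magnitude >= 0.80:
        return "very large"
    if magnitude >= 0.60:
        return "large"
    if magnitude > 0.20:
        return "moderate"
    return "small"

print(interpret_pnd(0.83), interpret_tau_u(0.92))    # moderately effective, very large
print(interpret_pnd(1.00), interpret_tau_u(-1.00))   # very effective, very large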

Clinical significance was determined in accordance with Lenz’s (2020a, 2020b) calculations of percent improvement (PI) values. PI values greater than 50% were interpreted as representing clinically significant improvement with large effect sizes, values from 25% to 49% as slight improvement without clinical significance, and values less than 25% as no clinical significance. Lenz (2021) also recommended that researchers provide sufficient context and visual representation when interpreting and reporting clinical significance; for example, without context and visual representation, researchers might interpret a PI value of 49% as lacking clinical significance.
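
The PI values reported in Tables 1 and 2 can be reproduced from the phase means. The Python sketch below implements our reading of the Lenz (2020a, 2020b) calculation as the absolute change in phase means relative to the baseline mean, expressed as a percentage; the function name is ours.

def percent_improvement(baseline_mean, treatment_mean):
    """Percent improvement: absolute change in phase means relative to the baseline mean."""
    return abs(treatment_mean - baseline_mean) / baseline_mean * 100

# Reproducing values reported later in Tables 1 and 2:
print(round(percent_improvement(45.25, 17.67), 2))  # 60.95 (Mary, OQ-45.2)
print(round(percent_improvement(56.00, 63.50), 2))  # 13.39 (Mary, DHS)
print(round(percent_improvement(84.00, 47.10), 2))  # 43.93 (Joel, OQ-45.2)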

Results

A detailed description of participants’ experiences is provided below. Figure 1 depicts estimates of treatment effect on the DHS; Figure 2 depicts estimates of treatment effect on the OQ-45.2 total scale.

Figure 1
Ratings for Hope by Participants With Split-Middle Line of Progress


Note. PND = Percentage of Non-overlapping Data.

Figure 2
Ratings for Mental Health Symptoms on OQ-45.2 by Participants With Statistical Process Control Charting

Note. PND = Percentage of Non-overlapping Data.

Participant 1
     Data for Mary are represented in Figures 1 and 2 as well as Tables 1 and 2. A comparison of level of Hope across baseline (M = 56.00) and intervention phases (M = 63.50) indicated notable changes in participant scores evidenced by an increase in mean DHS scores over time. Variation between scores in baseline (SD = 3.50) and intervention (SD = 0.83) indicated a differential range in scores before and after the intervention. Data in the baseline phase trended downward toward a contra-therapeutic effect over time. Dissimilarly, data in the intervention phase trended upward toward a therapeutic effect over time. Comparison of baseline level and trend data with the first three observations in the intervention phase did suggest immediacy of treatment response for the participant. Data in the intervention phase moved into the desired range of effect for scores representing Hope. Overall, visual inspection of Mary’s ratings on the DHS (see Figure 1) indicates that most of her scores in the treatment phase were higher than her scores in the baseline phase.

Mary’s ratings on the DHS illustrate that SFBT was moderately effective for improving her DHS score. Evaluation of the PND statistic for the DHS measure (0.83) indicated that five out of six treatment-phase scores were on the therapeutic side, above the highest baseline point (a DHS score of 63). Mary successfully improved Hope during treatment as evidenced by improved scores on items such as “I can think of many ways to get out of a jam,” “I can think of ways to get the things in life that are important to me,” and “I meet the goals that I set for myself.” Scores above the PND line were within a 1-point range. Trend analysis depicted a consistent level of improvement following the first treatment measure. This finding is corroborated by the associated Tau-U value (τU = 0.92), which suggested a very large degree of change and allowed the null hypothesis about intervention efficacy for Mary to be rejected (p = .02). Interpretation of the clinical significance estimate, however, indicates that a 13.39% improvement is not clinically significant (Lenz, 2020a, 2020b). See Table 1 for information regarding PND, Tau-U, and PI. Although the PI value is considered not clinically significant, it is important to contextualize this finding through visual inspection of Mary’s Hope scores in Figure 1. Because Mary had moderately high levels of Hope in the baseline phase, a ceiling effect left her relatively little room for improvement. In other words, in the context of Mary’s treatment and a visual representation of her scores on the DHS (see Figure 1), the SFBT intervention helped Mary move from good to better and had some level of convincingness, meaning that some amount of change in Hope occurred for Mary (Kendall et al., 1999; Lenz, 2021).

Table 1
Ratings for Hope by Participants

Participant   Age   Ethnicity   Gender   Baseline M (SD)   Intervention M (SD)   PND   τU (p)       PI
Mary          31    Latina      Female   56.00 (3.50)      63.50 (0.83)          83%   0.92 (.02)   13.39%
Joel          20    Latino      Male     27.75 (2.87)      34.90 (5.26)          60%   0.70 (.05)   25.75%

Note. PND = Percentage of Non-overlapping Data.

Table 2
Ratings for OQ-45.2 Total Scale Score by Participants

Participant   Age   Ethnicity   Gender   Baseline M (SD)   Intervention M (SD)   PND    τU (p)         PI
Mary          31    Latina      Female   45.25 (16.25)     17.67 (7.44)          100%   −1.00 (.01)    60.95%
Joel          20    Latino      Male     84.00 (6.00)      47.10 (10.74)         100%   −1.00 (.004)   43.93%

Note. PND = Percentage of Non-overlapping Data.

Before treatment began, one of Mary’s baseline measurements was above the OQ-45.2 clinical cut score (a total scale score of 63), indicating clinically significant symptoms. Comparison of level of clinical symptoms across baseline (M = 45.25) and intervention phases (M = 17.67) indicated notable changes in participant scores evidenced by a decrease in mental health symptom scale scores over time. Variation between scores in baseline (SD = 16.25) and intervention (SD = 7.44) indicated a differential range in scores before and after the intervention. Data in the baseline phase trended upward toward a contra-therapeutic effect over time. Dissimilarly, data in the intervention phase trended downward toward a therapeutic effect over time. Comparison of baseline level and trend data with the first three observations in the intervention phase did suggest immediacy of treatment response for the participant. Data in the intervention phase moved into the desired range of effect for scores representing mental health symptoms.

Mary’s ratings on the OQ-45.2 illustrate that SFBT was very effective for decreasing her total scale score measuring mental health symptoms. Evaluation of the PND statistic for the OQ-45.2 total scale score (1.00) indicated that all six treatment-phase scores were on the therapeutic side, below the lowest baseline point (a total scale score of 26). Mary successfully reduced clinical symptoms during treatment as evidenced by improved scores on items such as “I am a happy person,” “I feel loved and wanted,” and “I find my work/school satisfying.” This contention became most apparent after the first treatment session, when Mary continuously scored lower on a majority of symptom dimensions such as Symptom Distress, Interpersonal Relations, and Social Role Performance. Scores below the PND line were within a 24-point range. Trend analysis depicted a consistent level of improvement following the first treatment measure. This finding is corroborated by the associated Tau-U value (τU = −1.0), which suggested a very large degree of change and allowed the null hypothesis about intervention efficacy for Mary to be rejected (p = .01). An analysis of statistical process control charting revealed that one data point in the treatment phase was beyond the realm of random occurrence with 99% confidence. This finding also corresponds with the PI estimate, which indicates that a 60.95% improvement is clinically significant (Lenz, 2020a, 2020b). See Table 2 for information regarding PND, Tau-U, and PI. In the context of Mary’s treatment and a visual representation of her scores on the OQ-45.2 (see Figure 2), the SFBT intervention had a high level of convincingness, which means that a considerable amount of change in clinical symptoms occurred for Mary (Kendall et al., 1999; Lenz, 2021).

Participant 2
     Data for Joel are represented in Figures 1 and 2 as well as Tables 1 and 2. Comparison of level of Hope across baseline (M = 27.75) and intervention phases (M = 34.90) indicated notable changes in participant scores evidenced by an increase in mean DHS scores over time. Variation between scores in baseline (SD = 2.87) and intervention (SD = 5.26) indicated a differential range in scores before and after the intervention. Data in the baseline phase trended downward toward a contra-therapeutic effect over time. Dissimilarly, data in the intervention phase trended upward toward a therapeutic effect over time. Comparison of baseline level and trend data with the first three observations in the intervention phase did suggest immediacy of treatment response for the participant. Data in the intervention phase moved into the desired range of effect for scores representing Hope.

Joel’s ratings on the DHS illustrate that SFBT was debatably effective for improving his DHS score. Evaluation of the PND statistic for the DHS measure (0.60) revealed that six out of ten treatment-phase scores were on the therapeutic side, above the highest baseline point (a DHS score of 31). Joel successfully improved his Hope during treatment as evidenced by improved scores on items such as “I can think of many ways to get out of a jam,” “I can think of ways to get the things in life that are important to me,” and “I meet the goals that I set for myself.” Scores above the PND line were within an 18-point range. Trend analysis depicted a steady level of scores following the first treatment measure, with scores vacillating around the baseline score until the eighth treatment measure. This finding is corroborated by the associated Tau-U value (τU = 0.70), which suggested a large degree of change and allowed the null hypothesis about intervention efficacy for Joel to be rejected (p = .047). This finding also corresponds with the PI estimate of 25.75%, which represents slight improvement without clinical significance (Lenz, 2020a, 2020b). One explanation for the lack of clinical significance and the more modest effect size is the limited length of the intervention. Based on the visual depiction of Joel’s levels of Hope across treatment (see Figure 1), we suspect that this trend would have continued if he had received additional sessions of an SFBT intervention. His treatment was trending in a positive trajectory. In the context of Joel’s treatment and a visual representation of his scores on the DHS (see Figure 1), the SFBT intervention had a moderate level of convincingness, which means that a considerable amount of change in Hope occurred for Joel (Kendall et al., 1999; Lenz, 2021).

Before treatment began, all four of Joel’s baseline measurements were above the OQ-45.2 clinical cut score (a total scale score of 63), indicating clinically significant symptoms. Comparison of level of clinical symptoms across baseline (M = 84.00) and intervention phases (M = 47.10) indicated notable changes in participant scores evidenced by a decrease in mental health symptom scale scores over time. Variation between scores in baseline (SD = 6.00) and intervention (SD = 10.74) indicated a differential range in scores before and after intervention. Data in the baseline phase trended upward toward a contra-therapeutic effect over time. Dissimilarly, data in the intervention phase trended downward toward a therapeutic effect over time. Comparison of baseline level and trend data with the first three observations in the intervention phase did suggest immediacy of treatment response for the participant. Data in the intervention phase moved into the desired range of effect for scores representing mental health symptoms.

Joel’s ratings on the OQ-45.2 illustrate that SFBT was very effective for decreasing his total scale score measuring clinical symptoms. Evaluation of the PND statistic for the total scale score (1.00) indicated that all 10 treatment-phase scores were on the therapeutic side, below the lowest baseline point (a total scale score of 77). Joel successfully reduced clinical symptoms during treatment as evidenced by improved scores on items such as “I am a happy person,” “I feel loved and wanted,” and “I find my work/school satisfying.” This contention became most apparent after the first treatment session, when Joel continuously scored lower on a majority of symptom dimensions such as Symptom Distress, Interpersonal Relations, and Social Role Performance. Scores below the PND line were within a 41-point range. Trend analysis depicted a consistent level of improvement following the first treatment measure. This finding is corroborated by the associated Tau-U value (τU = −1.0), which suggested a very large degree of change and allowed the null hypothesis about intervention efficacy for Joel to be rejected (p = .004). An analysis of statistical process control charting revealed that eight data points in the treatment phase were beyond the realm of random occurrence with 99% confidence. This finding also corresponds with the PI estimate of 43.93%, which represents slight improvement without clinical significance (Lenz, 2020a, 2020b). Considering contextual evidence from the intervention as well as the data visualization in Figure 2, it is clear that Joel experienced a downward trajectory in clinical symptoms, and we suspect that he would have continued to experience a reduction in clinical symptoms had he received additional SFBT sessions. In the context of Joel’s treatment and a visual representation of his scores on the OQ-45.2 (see Figure 2), the SFBT intervention had a high level of convincingness, which means that a considerable amount of change in clinical symptoms occurred for Joel (Kendall et al., 1999; Lenz, 2021).

Discussion

The purpose of this exploratory study was to examine the impact of SFBT on clinical symptoms and hope among Latine clients. The results yield promising, preliminary evidence for the efficacy of SFBT as an intervention for promoting positive change in two Latine clients’ clinical symptoms and hope. Scores varied for each outcome variable, which is likely related to the length and duration of the intervention as well as to each participant’s personal characteristics (Callender et al., 2021) and relationship with their counselor (Liu et al., 2020). Findings from the current study also lend further support for the efficacy of services delivered by CITs who aim to improve clients’ psychological functioning at a community counseling training clinic.

The findings for clinical symptoms showed a trend toward reduction in clinical symptoms across 8 weeks of SFBT. Both participants reported statistically significant reductions (p < .05) in clinical symptoms on the OQ-45.2. In both cases, the SFBT intervention fell within the range of very large treatment effectiveness for improving symptoms of psychopathology, with clinically significant improvement for Mary. Results from the PND and PI indicated that these participants experienced reduced clinical symptoms. There appears to have been a steady progression of improvement for these participants after their second treatment session. During this phase of treatment, Himmelberger used techniques such as exceptions to the problem and scaling questions to help participants recognize inner resources and personal strengths, analyze current levels of functioning, and visualize their preferred future (de Shazer, 1991).

In reviews of counseling session recordings and in supervision, Himmelberger noted that both Joel and Mary provided feedback throughout SFBT that they appreciated the opportunity to focus on small successes, personal strengths, and exceptions to their problems, as well as the use of scaling questions to assess and track their progress. They also commented that they appreciated being able to conceptualize family as a source of strength and an element of resiliency (J. Cavazos et al., 2010; Oliver et al., 2011). Researchers have found that using SFBT techniques such as miracle and exceptions questions can help clients reduce negative affect (Brogan et al., 2020; Neipp et al., 2021). Our findings are also consistent with those of Schmit et al. (2016), who found that SFBT may be effective for treating symptoms of internalizing disorders, and of Oliver et al. (2011), who commented that SFBT can help Mexican Americans cultivate familismo.

The findings for Hope showed a visual trend toward increased levels of Hope across 8 weeks of SFBT. Both participants reported statistically significant improvements (p < .05) in Hope on the DHS. Across the two cases, the SFBT intervention ranged from debatably to moderately effective for improving Hope, with slight improvement that did not reach clinical significance. Mary’s ratings on the DHS indicate that the treatment was moderately effective, although her PI was not clinically significant. When visualizing Mary’s ratings on the DHS, we see that she had high levels of Hope in the baseline phase, which means that she did not have much room to improve in the treatment phase. Contextualizing Mary’s treatment and using a visual representation of her scores on the DHS (see Figure 1), we infer that the SFBT intervention had some level of convincingness, which means that some amount of change in Hope occurred for Mary (Kendall et al., 1999; Lenz, 2021). Additionally, Joel’s ratings on the DHS indicate that the treatment effect was debatably effective, with a PI reflecting slight improvement without clinical significance. When looking closely at Joel’s scores, we see that they trended in a positive trajectory. In the context of his treatment and a visual representation of his scores on the DHS (see Figure 1), the SFBT intervention had a moderate level of convincingness, which means that a considerable amount of change in Hope occurred (Kendall et al., 1999; Lenz, 2021).

Suldo and Shaffer (2008) argued that using a dual-factor model of mental health with indicators of subjective well-being (e.g., hope) and illness (e.g., clinical symptoms) allows researchers and practitioners to measure and understand complete mental health. Although a client’s psychopathology might decrease, subjective well-being might not improve with the same effect. Findings from SFBT treatment with Joel and Mary support a dual-factor model that suggests indicators of personal wellness and psychopathology are different parts of mental health and are important to consider in treatment (Vela, Lu, et al., 2016). For Joel and Mary, SFBT appeared to be efficacious for slightly increasing and maintaining scores on the DHS. Our findings support Joubert and Guse (2021), who recommended SFBT to facilitate hope and subjective well-being among clients. When clients can think about solutions, identify exceptions to their problems, and think about their preferred future, they might be more likely to develop hope for their future as well as improve subjective well-being (Joubert & Guse, 2021).

The findings from this study lend further support for the effectiveness of counseling services at a community counseling training clinic. Our findings are consistent with those of Schuermann et al. (2018), which support the efficacy of counseling services in a Hispanic-serving counselor training clinic, and with Dorais et al.’s (2020) findings regarding counseling students’ motivational interviewing techniques at a university addiction training clinic. Faculty supervision, group supervision, and live supervision have all been associated with increases in counseling interns’ self-efficacy to provide quality counseling services. Himmelberger received weekly supervision and consultation on SFBT principles as well as SCRD principles, and it is possible that these forms of supervision helped her provide effective counseling services. Our findings also support the need to continue designing research studies that evaluate the impact of counseling services at community counseling training clinics with clients of different cultural backgrounds and presenting symptoms.

Implications for Counselor Educators and Counselors-in-Training
     Based on our findings, we propose a few recommendations for counselor educators, CITs, and practitioners. First, our study provides evidence that CITs at community counseling centers can provide effective treatment to culturally diverse clients with moderate internalizing symptoms such as depression and anxiety. As a result, SFBT can be taught and infused into counselor education curricula and can be delivered by future licensed professional counselors, school counselors, or counseling interns.

Community agencies working with this client population should also consider providing counselors with professional development and training related to SFBT. It is important to mention that when two of us were in graduate programs, we did not receive formal SFBT instruction. This might be due to a greater emphasis on humanistic and cognitive behavioral therapies in counseling curricula or among some counselor education faculty. As a result, counselor educators must make a concerted effort to promote and discuss postmodern theories such as SFBT. This is important because SFBT can be effective for treating internalizing disorders among clients (Schmit et al., 2016), including Latine clients (Gonzalez Suitt et al., 2016).

Another implication for counselor educators is to consider teaching CITs how to use SCRDs to monitor and assess treatment effectiveness. All counseling interns who work in a community counseling clinic need to demonstrate the effectiveness of their services with clients. Therefore, CITs can learn how to use SCRDs or a single-group pretest/posttest with clinical significance (Ikonomopoulos et al., 2021; Lenz, 2020b) to determine the impact of counseling on client outcomes. Finally, community counseling clinics can consider using the DHS and OQ-45.2 to measure indicators of subjective well-being and clinical symptoms. CITs can use these instruments, which have evidence of reliability and validity with culturally diverse populations, to document the impact of their counseling services on clients’ hope and clinical symptoms.

Implications for Practitioners
     There also are implications for practitioners. First, counselors can use SFBT principles and techniques to work with Latine clients. By using a positive and future-oriented framework, counselors can build a positive therapeutic relationship and help Latine clients construct a positive future. Counselors can use SFBT to help Latine clients identify how familismo is a source of strength (Oliver et al., 2011) and draw on their inner resiliency (Vela et al., 2015) to create their preferred future outcome. Practitioners can use SFBT techniques, including looking for previous solutions, exceptions, the miracle question, scaling questions, compliments, and future-oriented questions. SFBT principles and techniques can be used to facilitate hope by helping Latine clients view mental health challenges as opportunities to cultivate strengths and explore solutions (Bannik, 2008; Joubert & Guse, 2021).

Practitioners also can use SCRDs to evaluate the impact of their work with clients. Although most practitioners collect pre- and post-counseling intervention data, they typically use a single data point at pre-counseling and a single data point at post-counseling. Using an SCRD in which a baseline phase and weekly treatment-phase data points are collected can help practitioners analyze trends over time and identify clinical significance. Lenz (2015) described several features that make SCRDs useful for practitioners, including use of the self as control, flexibility and responsiveness, small sample size, ease of data analysis, and the type of data yielded from analyses. In other words, counseling practitioners can analyze data over time with a client and use the data collection and analysis methods in this study to evaluate the impact of their counseling services on client outcomes.
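
To illustrate, the minimal Python sketch below (with hypothetical weekly scores) computes one common nonoverlap index, the percentage of treatment-phase data points exceeding the baseline median (PEM; Ma, 2006). It is offered only as an example of how trend data might be summarized, not as the analysis used in the present study.

# Illustrative only (not this study's analysis script): percentage of
# treatment-phase data points exceeding the baseline median (PEM; Ma, 2006)
# for hypothetical AB single-case data on a measure where higher is better.
import statistics

baseline_scores = [38, 41, 40, 39]             # hypothetical baseline-phase scores
treatment_scores = [42, 45, 44, 47, 46, 48]    # hypothetical treatment-phase scores

baseline_median = statistics.median(baseline_scores)
pem = sum(score > baseline_median for score in treatment_scores) / len(treatment_scores)

print(f"Baseline median = {baseline_median}")
print(f"PEM = {pem:.0%}")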

Implications for Future Research and Limitations
     There are several implications for future research. First, researchers can evaluate the impact of SFBT on other indicators related to subjective well-being and clinical symptoms among culturally diverse populations, including subjective happiness, resilience, grit, meaning in life, anxiety, and depression (Karaman et al., 2019). More research needs to explore how SFBT might enhance indicators of subjective well-being and decrease clinical symptoms as well as the intersection between recovery and psychopathology. Although researchers have explored the impact of SFBT on internalizing symptoms (Schmit et al., 2016), more research needs to examine the impact on subjective well-being, particularly among Latine populations and Latine adolescents at a community counseling clinic.

Researchers also should consider using qualitative methods to discover which SFBT techniques are most effective. In-depth interviews and focus groups with SFBT participants would provide insight into their experiences with the miracle question, scaling questions, and other SFBT techniques. Counselors could also collect clients’ journal entries to capture the impact of specific techniques on psychopathology or subjective well-being.

Additionally, using between-group designs to compare SFBT interventions with other evidence-based approaches such as cognitive behavior therapy could prove fruitful. It is also possible to explore the impact of SFBT coupled with another approach such as positive psychology or cognitive behavior therapy with Latine populations. Finally, researchers can continue to explore the impact of CITs who work with clients in a community counseling clinic. Counseling interns can use SCRDs or single-group pretest/posttest designs to measure the impact of their counseling services.

The current study was exploratory in nature, and several limitations should be noted. First, although both participants demonstrated improvement in measures related to subjective well-being and psychopathology, generalization to a larger Latine population is not appropriate, and because of the exploratory nature of this study, we cannot draw causal inferences regarding the relationship between SFBT and Hope or clinical symptoms. Second, we did not include withdrawal measures following completion of the treatment phase (Ikonomopoulos et al., 2016, 2017). Although some researchers use AB and ABA SCRDs to measure counseling effectiveness (Callender et al., 2021), we did not use an ABA design, which would have provided stronger internal validity for evaluating the effects of SFBT (Lenz et al., 2012). Because Himmelberger completed the academic semester and graduated from the clinical mental health counseling program, collecting withdrawal measures was not possible; therefore, an AB SCRD was the more feasible approach.

Conclusion
     To the best of our knowledge, this is one of the first exploratory studies to examine the effectiveness of SFBT with Latine clients at a community counseling training clinic. This exploratory SCRD serves as a foundation for future data collection and evaluation of CITs who work with culturally diverse clients at community counseling training clinics. Results support the potential of SFBT as an intervention for promoting positive change in Latine clients’ hope and clinical symptoms.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

References

Amble, I., Gude, T., Stubdal, S., Oktedalen, T., Skjorten, A. M., Andersen, B. J., Solbakken, O. A., Brorson, H. H., Arnevik, E., Lambert, M. J., & Wampold, B. E. (2014). Psychometric properties of the Outcome Questionnaire-45.2: The Norwegian version in an international context. Psychotherapy Research, 24(4), 504–513. https://doi.org/10.1080/10503307.2013.849016

Ayón, C., Marsiglia, F. F., & Bermudez-Parsai, M. (2010). Latino family mental health: Exploring the role of discrimination and familismo. Journal of Community Psychology, 38(6), 742–756.
https://doi.org/10.1002/jcop.20392

Bannik, F. P. (2008). Posttraumatic success: Solution-focused brief therapy. Brief Treatment and Crisis Intervention, 8(3), 215–225. https://doi.org/10.1093/brief-treatment/mhn013

Bavelas, J., De Jong, P., Franklin, C., Froerer, A., Gingerich, W., Kim, J., Korman, H., Langer, S., Lee, M. Y., McCollum, E. E., Jordan, S. S., & Trepper, T. S. (2013). Solution focused therapy treatment manual for working with individuals. Solution Focused Brief Therapy Association, 1–42. https://bit.ly/solutionfocusedtherapy

Berg, I. K. (1994). Family-based services: A solution-focused approach. W. W. Norton.

Boswell, D. L., White, J. K., Sims, W. D., Harrist, R. S., & Romans, J. S. C. (2013). Reliability and validity of the Outcome Questionnaire–45.2. Psychological Reports, 112(3), 689–693.
https://doi.org/10.2466/02.08.PR0.112.3.689-693

Brogan, J., Contreras Bloomdahl, S., Rowlett, W. H., & Dunham, M. (2020). Using SFBC group techniques to increase Latino academic self-esteem. Journal of School Counseling, 18. https://files.eric.ed.gov/fulltext/EJ1251785.pdf

Buchholz Holland, C. E. (2013). The lifeline activity with a “solution-focused twist.” Journal of Family Psychotherapy, 24(4), 306–311. https://doi.org/10.1080/08975353.2013.849552

Callender, K. A., Trustey, C. E., Alton, L., & Hao, Y. (2021). Single case evaluation of a mindfulness-based mobile application with a substance abuse counselor. Counseling Outcome Research and Evaluation, 12(1), 16–29. https://doi.org/10.1080/21501378.2019.1686353

Calzada, E. J., Tamis-LeMonda, C. S., & Yoshikawa, H. (2013). Familismo in Mexican and Dominican families from low-income, urban communities. Journal of Family Issues, 34(12), 1696–1724.
https://doi.org/10.1177/0192513X12460218

Cavazos, A. G. (2009). Reflections of a Latina student-teacher: Refusing low expectations for Latina/o students. American Secondary Education, 37(3), 70–79.

Cavazos, J., Jr., Johnson, M. B., Fielding, C., Cavazos, A. G., Castro, V., & Vela, L. (2010). A qualitative study of resilient Latina/o college students. Journal of Latinos and Education, 9(3), 172–188.
https://doi.org/10.1080/153484431003761166

Cheng, H.-L., Hitter, T. L., Adams, E. M., & Williams, C. (2016). Minority stress and depressive symptoms: Familism, ethnic identity, and gender as moderators. The Counseling Psychologist, 44(6), 841–870.
https://doi.org/10.1177/0011000016660377

Cheng, H.-L., & Mallinckrodt, B. (2015). Racial/ethnic discrimination, posttraumatic stress symptoms, and alcohol problems in a longitudinal study of Hispanic/Latino college students. Journal of Counseling Psychology, 62(1), 38–49. https://doi.org/10.1037/cou0000052

Coco, G. L., Chiappelli, M., Bensi, L., Gullo, S., Prestano, C., & Lambert, M. J. (2008). The factorial structure of the Outcome Questionnaire-45: A study with an Italian sample. Clinical Psychology and Psychotherapy, 15(6), 418–423. https://doi.org/10.1002/cpp.601

de Shazer, S. (1991). Putting differences to work. W. W. Norton.

de Shazer, S., Dolan, Y., Korman, H., Trepper, T., McCollum, E., & Berg, I. K. (2007). More than miracles: The state of the art of solution-focused brief therapy. Haworth Press.

Dorais, S., Gutierrez, D., & Gressard, C. R. (2020). An evaluation of motivational interviewing based treatment in a university addiction counseling training clinic. Counseling Outcome Research and Evaluation, 11(1), 19–30. https://doi.org/10.1080/21501378.2019.1704175

Ekroll, V. B., & Rønnestad, M. H. (2017). Pathways towards different long-term outcomes after naturalistic psychotherapy. Clinical Psychology & Psychotherapy, 25(2), 292–301. https://doi.org/10.1002/cpp.2162

Feldman, D. B., & Dreher, D. E. (2012). Can hope be changed in 90 minutes? Testing the efficacy of a single-session goal-pursuit intervention for college students. Journal of Happiness Studies, 13, 745–759.
https://doi.org/10.1007/s10902-011-9292-4

Galiana, L., Oliver, A., Sancho, P., & Tomás, J. M. (2015). Dimensionality and validation of the Dispositional Hope Scale in a Spanish sample. Social Indicators Research, 120, 297–308. https://doi.org/10.1007/s11205-014-0582-1

Gonzalez Suitt, K., Franklin, C., & Kim, J. (2016). Solution-focused brief therapy with Latinos: A systematic review. Journal of Ethnic and Cultural Diversity in Social Work, 25, 50–67.
https://doi.org/10.1080/15313204.2015.1131651

Ikonomopoulos, J., Garza, K., Weiss, R., & Morales, A. (2021). Examination of treatment progress among college students in a university counseling program. Counseling Outcome Research and Evaluation, 12(1), 30–42. https://doi.org/10.1080/21501378.2020.1850175

Ikonomopoulos, J., Lenz, A. S., Guardiola, R., & Aguilar, A. (2017). Evaluation of the Outcome Questionnaire-45.2 with a Mexican-American population. Journal of Professional Counseling: Practice, Theory, & Research, 44(1), 17–32. https://doi.org/10.1080/15566382.2017.12033956

Ikonomopoulos, J., Smith, R. L., & Schmidt, C. (2015). Integrating narrative therapy within rehabilitative programming for incarcerated adolescents. Journal of Counseling & Development, 93(4), 460–470.
https://doi.org/10.1002/jcad.12044

Ikonomopoulos, J., Vela, J. C., Smith, W. D., & Dell’Aquila, J. (2016). Examining the practicum experience to increase counseling students’ self-efficacy. The Professional Counselor, 6(2), 161–173. https://doi.org/10.15241/ji.6.2.161

Joubert, J., & Guse, T. (2021). A solution-focused brief therapy (SFBT) intervention model to facilitate hope and subjective well-being among trauma survivors. Journal of Contemporary Psychotherapy, 51, 303–310. https://doi.org/10.1007/s10879-021-09511-w

Kadera, S. W., Lambert, M. J., & Andrews, A. A. (1996). How much therapy is really enough?: A session-by-session analysis of the psychotherapy dose-effect relationship. Journal of Psychotherapy Practice and Research, 5(2), 132–151.

Karaman, M. A., Vela, J. C., Aguilar, A. A., Saldana, K., & Montenegro, M. C. (2019). Psychometric properties of U.S.-Spanish versions of the grit and resilience scales with a Latinx population. International Journal for the Advancement of Counselling, 41, 125–136. https://doi.org/10.1007/s10447-018-9350-2

Kendall, P. C., Marrs-Garcia, A., Nath, S. R., & Sheldrick, R. C. (1999). Normative comparisons for the evaluation of clinical significance. Journal of Consulting and Clinical Psychology, 67(3), 285–299.
https://doi.org/10.1037/0022-006x.67.3.285

Kim, J. S. (2008). Examining the effectiveness of solution-focused brief therapy: A meta-analysis. Research on Social Work Practice, 18(2), 107–116. https://doi.org/10.1177/1049731507307807

Lambert, M. J., Burlingame, G. M., Umphress, V., Hansen, N. B., Vermeersch, D. A., Clouse, G. C., & Yanchar, S. C. (1996). The reliability and validity of the Outcome Questionnaire. Clinical Psychology & Psychotherapy, 3(4), 249–258. https://doi.org/10/bmwzbf

Lenz, A. S. (2013). Calculating effect size in single-case research: A comparison of nonoverlap methods. Measurement and Evaluation in Counseling and Development, 46(1), 64–73. https://doi.org/10.1177/0748175612456401

Lenz, A. S. (2015). Using single-case research designs to demonstrate evidence for counseling practices. Journal of Counseling & Development, 93(4), 387–393. https://doi.org/10.1002/jcad.12036

Lenz, A. S. (2020a). The future of Counseling Outcome Research and Evaluation. Counseling Outcome Research and Evaluation, 11(1), 1–3. https://doi.org/10.1080/21501378.2020.1712977

Lenz, A. S. (2020b). Estimating and reporting clinical significance in counseling research: Inferences based on percent improvement. Measurement and Evaluation in Counseling and Development, 53(4), 289–296.
https://doi.org/10.1080/07481756.2020.1784758

Lenz, A. S. (2021). Clinical significance in counseling outcome research and program evaluation. Counseling Outcome Research and Evaluation, 12(1), 1–3. https://doi.org/10.1080/21501378.2021.1877097

Lenz, A. S., Speciale, M., & Aguilar, J. V. (2012). Relational-cultural therapy intervention with incarcerated adolescents: A single-case effectiveness design. Counseling Outcome Research and Evaluation, 3(1), 17–29. https://doi.org/10.1177/2150137811435233

Lerma, E., Zamarripa, M. X., Oliver, M., & Vela, J. C. (2015). Making our way through: Voices of Hispanic counselor educators. Counselor Education and Supervision, 54(3), 162–175. https://doi.org/10.1002/ceas.12011

Liu, V. Y., La Guardia, A., & Sullivan, J. M. (2020). A single-case research evaluation of collaborative therapy treatment among adults. Counseling Outcome Research and Evaluation, 11(1), 45–58.
https://doi.org/10.1080/21501378.2018.15311238

Lundervold, D. A., & Belwood, M. F. (2000). The best kept secret in counseling: Single-case (N = 1) experimental designs. Journal of Counseling & Development, 78(1), 92–102. https://doi.org/10.1002/j.1556-6676.2000.tb02565.x

Ma, H.-H. (2006). An alternative method for quantitative synthesis of single-subject researches: Percentage of data points exceeding the median. Behavior Modification, 30(5), 598–617. https://doi.org/10.1177/0145445504272974

Mendoza, H., Masuda, A., & Swartout, K. M. (2015). Mental health stigma and self-concealment as predictors of help-seeking attitudes among Latina/o college students in the United States. International Journal for Advancement of Counselling, 37(3), 207–222. https://doi.org/10.1007/s10447-015-9237-4

Merikangas, K. R., He, J.-P., Burstein, M., Swanson, S. A., Avenevoli, S., Cui, L., Benjet, C., Georgiades, K., & Swendsen, J. (2010). Lifetime prevalence of mental disorders in U.S. adolescents: Results from the National Comorbidity Survey Replication—Adolescent Supplement (NCS-A). Journal of the American Academy of Child & Adolescent Psychiatry, 49(10), 980–989. https://doi.org/10.1016/j.jaac.2010.05.017

Neipp, M.-C., Beyebach, M., Sanchez-Prada, A., & Álvarez, M. D. C. D. (2021). Solution-focused versus problem-focused questions: Differential effects of miracles, exceptions and scales. Journal of Family Therapy, 43(4), 728–747. https://doi.org/10.1111/1467-6427.12345

Nel, P., & Boshoff, A. (2014). Factorial invariance of the Adult State Hope Scale. SA Journal of Industrial Psychology, 40(1), 1–8.

Novella, J. K., Ng, K.-M., & Samuolis, J. (2020). A comparison of online and in-person counseling outcomes using solution-focused brief therapy for college students with anxiety. Journal of American College Health, 70(4), 1161–1168. https://doi.org/10.1080/07448481.2020.1786101

Oliver, M., Flamez, B., & McNichols, C. (2011). Postmodern applications with Latino/a cultures. Journal of Professional Counseling: Practice, Theory & Research, 38(3), 33–48. https://doi.org/10.1080/15566382.2011.12033875

Ponciano, C., Semma, B., Ali, S. F., Console, K., & Castillo, L. G. (2020). Institutional context of perceived discrimination, acculturative stress, and depressive symptoms among Latina college students. Journal of Latinos and Education, 1–22. https://doi.org/10.1080/15348431.2020.1809418

Priest, J. B., & Denton, W. (2012). Anxiety disorders and Latinos: The role of family cohesion and family discord. Hispanic Journal of Behavioral Sciences, 34(4), 557–575. https://doi.org/10.1177/0739986312459258

Proudlock, S., & Wellman, N. (2011). Solution focused groups: The results look promising. Counselling Psychology Review, 26(3), 45–54.

Ramos, G., Delgadillo, D., Fossum, J., Montoya, A. K., Thamrin, H., Rapp, A., Escovar, E., & Chavira, D. A. (2021). Discrimination and internalizing symptoms in rural Latinx adolescents: An ecological model of etiology. Children and Youth Services Review, 130. https://doi.org/10.1016/j.childyouth.2021.106250

Sáenz, V. B., Lu, C., Bukoski, B. E., & Rodriguez, S. (2013). Latino males in Texas community colleges: A phenomenological study of masculinity constructs and their effect on college experiences. Journal of African American Males in Education, 4(2), 82–102.

Sanchez, M. E. (2019). Perceptions of campus climate and experiences of racial microaggressions for Latinos at Hispanic-Serving Institutions. Journal of Hispanic Higher Education, 18(3), 240–253.
https://doi.org/10.1177/1538192717739

Schmit, E. L., Schmit, M. K., & Lenz, A. S. (2016). Meta-analysis of solution-focused brief therapy for treating symptoms of internalizing disorders. Counseling Outcome Research and Evaluation, 7(1), 21–39.
https://doi.org/10.1177/2150137815623836

Schuermann, H., Borsuk, C., Wong, C., & Somody, C. (2018). Evaluating effectiveness in a Hispanic-serving counselor training clinic. Counseling Outcome Research and Evaluation, 9(2), 67–79.
https://doi.org/10.1080/21501378.2018.1442680

Scruggs, T. E., & Mastropieri, M. A. (1998). Summarizing single-subject research: Issues and applications. Behavior Modification, 22(3), 221–242. https://doi.org/10.1177/01454455980223001

Scruggs, T. E., Mastropieri, M. A, & Casto, G. (1987). The quantitative synthesis of single-subject research: Methodology and validation. Remedial and Special Education, 8(2), 24–33. https://doi.org/10.1177/074193258700800206

Sharpley, C. F. (2007). So why aren’t counselors reporting n = 1 research designs? Journal of Counseling & Development, 85(3), 349–356. https://doi.org/10.1002/j.1556-6678.2007.tb00483.x

Snyder, C. R., Harris, C., Anderson, J. R., Holleran, S. A., Irving, L. M., Sigmon, S. T., Yoshinobu, J., Gibb, J., Langelle, C., & Harney, P. (1991). The will and the ways: Development and validation of an individual-differences measure of hope. Journal of Personality and Social Psychology, 60(4), 570–585.
https://doi.org/10.1037/0022-3514.60.4.570

Snyder, C. R., Michael, S. T., & Cheavens, J. S. (1999). Hope as a psychotherapeutic foundation of common factors, placebos, and expectancies. In M. A. Hubble, B. L. Duncan, & S. D. Miller (Eds.), The heart and soul of change: What works in therapy (pp. 179–200). American Psychological Association.

Snyder, C. R., Shorey, H. S., Cheavens, J., Pulvers, K. M., Adams, V. H., III, & Wiklund, C. (2002). Hope and academic success in college. Journal of Educational Psychology, 94(4), 820–826.
https://doi.org/10.1037/0022-0663.94.4.820

Snyder, C. R., Sympson, S. C., Ybasco, F. C., Borders, T. F., Babyak, M. A., & Higgins, R. L. (1996). Development and validation of the State Hope Scale. Journal of Personality and Social Psychology, 70(2), 321–325.
https://doi.org/10.1037/0022-3514.70.2.321

Soares, M. C., Mondon, T. C., da Silva, G. D. G., Barbosa, L. P., Molina, M. L., Jansen, K., Souza, L. D. M., & Silva, R. A. (2018). Comparison of clinical significance of cognitive-behavioral therapy and psychodynamic therapy for major depressive disorder: A randomized clinical trial. The Journal of Nervous and Mental Disease, 206(9), 686–693. https://doi.org/10.1097/NMD.0000000000000872

Suldo, S. M., & Shaffer, E. J. (2008). Looking beyond psychopathology: The dual-factor model of mental health in youth. School Psychology Review, 37(1), 52–68. https://doi.org/10.1080/02796015.2008.12087908

Tambling, R. B. (2012). Solution-oriented therapy for survivors of sexual assault and their partners. Contemporary Family Therapy, 34, 391–401. https://doi.org/10.1007/s10591-012-9200-z

Trepper, T. S., McCollum, E. E., De Jong, P., Korman, H., Gingerich, W., & Franklin, C. (2010). Solution focused therapy treatment manual for working with individuals. Research Committee of the Solution Focused Brief Therapy Association. https://www.andrews.edu/sed/gpc/faculty-research/coffen-research/trepper_2010_solution.pdf

Umphress, V. J., Lambert, M. J., Smart, D. W., Barlow, S. H., & Glenn, C. (1997). Concurrent and construct validity of the Outcome Questionnaire. Journal of Psychoeducational Assessment, 15(1), 40–55.
https://doi.org/10.1177/073428299701500104

U.S. Census Bureau. (2020). Quick facts: United States. https://www.census.gov/quickfacts/fact/table/US/RH1725221

Vannest, K. J., & Ninci, J. (2015). Evaluating intervention effects in single-case research designs. Journal of Counseling & Development, 93(4), 403–411. https://doi.org/10.1002/jcad.12038

Vela, J. C., Ikonomopoulos, J., Dell’Aquila, J., & Vela, P. (2016). Evaluating the impact of creative journal arts therapy for survivors of intimate partner violence. Counseling Outcome Research and Evaluation, 7(2), 86–98. https://doi.org/10.1177/2150137816664781

Vela, J. C., Ikonomopoulos, J., Hinojosa, K., Gonzalez, S. L., Duque, O., & Calvillo, M. (2016). The impact of individual, interpersonal, and institutional factors on Latina/o college students’ life satisfaction. Journal of Hispanic Higher Education, 15(3), 260–276. https://doi.org/10.1177/1538192715592925

Vela, J. C., Ikonomopoulos, J., Lenz, A. S., Hinojosa, Y., & Saldana, K. (2017). Evaluation of the Meaning in Life Questionnaire and Dispositional Hope Scale with Latina/o students. Journal of Humanistic Counseling, 56(3), 166–179. https://doi.org/10.1002/johc.12051

Vela, J. C., Lerma, E., Lenz, A. S., Hinojosa, K., Hernandez-Duque, O., & Gonzalez, S. L. (2014). Positive psychology and familial factors as predictors of Latina/o students’ hope and college performance. Hispanic Journal of Behavioral Sciences, 36(4), 452–469. https://doi.org/10.1177/0739986314550790

Vela, J. C., Lu, M.-T. P., Lenz, A. S., & Hinojosa, K. (2015). Positive psychology and familial factors as predictors of Latina/o students’ psychological grit. Hispanic Journal of Behavioral Sciences, 37(3), 287–303.
https://doi.org/10.1177/0739986315588917

Vela, J. C., Lu, M.-T. P., Lenz, A. S., Savage, M. C., & Guardiola, R. (2016). Positive psychology and Mexican American college students’ subjective well-being and depression. Hispanic Journal of Behavioral Sciences, 38(3), 324–340. https://doi.org/10.1177/0739986316651618

Vela, J. C., Lu, M.-T. P., Veliz, L., Johnson, M. B., & Castro, V. (2014). Future school counselors’ perceptions of challenges that Latina/o students face: An exploratory study. In Ideas and Research You Can Use: VISTAS 2014, Article 39, 1–12. https://www.counseling.org/docs/default-source/vistas/article_39.pdf?sfvrsn=10

Vela-Gude, L., Cavazos, J., Jr., Johnson, M. B., Fielding, C., Cavazos, A. G., Campos, L., & Rodriguez, I. (2009). “My counselors were never there”: Perceptions from Latina college students. Professional School Counseling, 12(4), 272–279. https://doi.org/10.1177/2156759X0901200407

Krystle Himmelberger, MS, LPC, is a doctoral candidate at St. Mary’s University. James Ikonomopoulos, PhD, LPC-S, is an assistant professor at Texas A&M University–Corpus Christi. Javier Cavazos Vela, PhD, LPC, is a professor at the University of Texas Rio Grande Valley. Correspondence may be addressed to James Ikonomopoulos, 6300 Ocean Drive, Unit 5834, Corpus Christi, TX 78412,
james.ikonomopoulos1@tamucc.edu.

Guidelines and Recommendations for Writing a Rigorous Quantitative Methods Section in Counseling and Related Fields

Michael T. Kalkbrenner

Conducting and publishing rigorous empirical research based on original data is essential for advancing and sustaining high-quality counseling practice. The purpose of this article is to provide a one-stop-shop for writing a rigorous quantitative Methods section in counseling and related fields. The importance of judiciously planning, implementing, and writing quantitative research methods cannot be overstated, as methodological flaws can completely undermine the integrity of the results. This article includes an overview, considerations, guidelines, best practices, and recommendations for conducting and writing quantitative research designs. The author concludes with an exemplar Methods section to provide a sample of one way to apply the guidelines for writing or evaluating quantitative research methods that are detailed in this manuscript.

Keywords: empirical, quantitative, methods, counseling, writing

     The findings of rigorous empirical research based on original data are crucial for promoting and maintaining high-quality counseling practice (American Counseling Association [ACA], 2014; Giordano et al., 2021; Lutz & Hill, 2009; Wester et al., 2013). Peer-reviewed publication outlets play a crucial role in ensuring the rigor of counseling research and distributing the findings to counseling practitioners. The four major sections of an original empirical study usually include: (a) Introduction/Literature Review, (b) Methods, (c) Results, and (d) Discussion (American Psychological Association [APA], 2020; Heppner et al., 2016). Although every section of a research study must be carefully planned, executed, and reported (Giordano et al., 2021), scholars have engaged in commentary about the importance of a rigorous and clearly written Methods section for decades (Korn & Bram, 1988; Lutz & Hill, 2009). The Methods section is the “conceptual epicenter of a manuscript” (Smagorinsky, 2008, p. 390) and should include clear and specific details about how the study was conducted (Heppner et al., 2016). It is essential that producers and consumers of research are aware of key methodological standards, as the quality of quantitative methods in published research can vary notably, which has serious implications for the merit of research findings (Lutz & Hill, 2009; Wester et al., 2013).

Careful planning prior to launching data collection is especially important for conducting and writing a rigorous quantitative Methods section, as it is rarely appropriate to alter quantitative methods after data collection is complete for both practical and ethical reasons (ACA, 2014; Creswell & Creswell, 2018). A well-written Methods section is also crucial for publishing research in a peer-reviewed journal; any serious methodological flaws tend to automatically trigger a decision of rejection without revisions. Accordingly, the purpose of this article is to provide both producers and consumers of quantitative research with guidelines and recommendations for writing or evaluating the rigor of a Methods section in counseling and related fields. Specifically, this manuscript includes a general overview of major quantitative methodological subsections as well as an exemplar Methods section. The recommended subsections and guidelines for writing a rigorous Methods section in this manuscript (see Appendix) are based on a synthesis of (a) the extant literature (e.g., Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Giordano et al., 2021); (b) the Standards for Educational and Psychological Testing (American Educational Research Association [AERA] et al., 2014), (c) the ACA Code of Ethics (ACA, 2014), and (d) the Journal Article Reporting Standards (JARS) in the APA 7 (2020) manual.

Quantitative Methods: An Overview of the Major Sections

The Methods section is typically the second major section in a research manuscript and can begin with an overview of the theoretical framework and research paradigm that ground the study (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). Research paradigms and theoretical frameworks are more commonly reported in qualitative, conceptual, and dissertation studies than in quantitative studies. However, research paradigms and theoretical frameworks can be very applicable to quantitative research designs (see the exemplar Methods section below). Readers are encouraged to consult Creswell and Creswell (2018) for a clear and concise overview about the utility of a theoretical framework and a research paradigm in quantitative research.

Research Design
     The research design should be clearly specified at the beginning of the Methods section. Commonly employed quantitative research designs in counseling include but are not limited to group comparisons (e.g., experimental, quasi-experimental, ex-post-facto), correlational/predictive, meta-analysis, descriptive, and single-subject designs (Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Leedy & Ormrod, 2019). A well-written literature review and strong research question(s) will dictate the most appropriate research design. Readers can refer to Flinn and Kalkbrenner (2021) for free (open access) commentary on and examples of conducting a literature review, formulating research questions, and selecting the most appropriate corresponding research design.

Researcher Bias and Reflexivity
     Counseling researchers have an ethical responsibility to minimize their personal biases throughout the research process (ACA, 2014). A researcher’s personal beliefs, values, expectations, and attitudes create a lens or framework for how data will be collected and interpreted. Researcher reflexivity or positionality statements are well-established methodological standards in qualitative research (Hays & Singh, 2012; Heppner et al., 2016; Rovai et al., 2013). Researcher bias is rarely reported in quantitative research; however, researcher bias can be just as inherently present in quantitative as it is in qualitative studies. Being reflexive and transparent about one’s biases strengthens the rigor of the research design (Creswell & Creswell, 2018; Onwuegbuzie & Leech, 2005). Accordingly, quantitative researchers should consider reflecting on their biases in similar ways as qualitative researchers (Onwuegbuzie & Leech, 2005). For example, a researcher’s topical and methodological choices are, at least in part, based on their personal interests and experiences. To this end, quantitative researchers are encouraged to reflect on and consider reporting their beliefs, assumptions, and expectations throughout the research process.

Participants and Procedures
     The major aim in the Participants and Procedures subsection of the Methods section is to provide a clear description of the study’s participants and procedures in enough detail for replication (ACA, 2014; APA, 2020; Giordano et al., 2021; Heppner et al., 2016). When working with human subjects, authors should briefly discuss research ethics including but not limited to receiving institutional review board (IRB) approval (Giordano et al., 2021; Korn & Bram, 1988). Additional considerations for the Participants and Procedures section include details about the authors’ sampling procedure, inclusion and/or exclusion criteria for participation, sample size, participant background information, location/site, and protocol for interventions (APA, 2020).

Sampling Procedure and Sample Size
     Sampling procedures should be clearly stated in the Methods section. At a minimum, the description of the sampling procedure should include researcher access to prospective participants, recruitment procedures, data collection modality (e.g., online survey), and sample size considerations. Quantitative sampling approaches tend to be clustered into either probability or non-probability techniques (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). The key distinguishing feature of probability sampling is random selection, in which all prospective participants in the population have an equal chance of being randomly selected to participate in the study (Leedy & Ormrod, 2019). Examples of probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling, or cluster sampling (Leedy & Ormrod, 2019).

Non-probability sampling techniques lack random selection and there is no way of determining if every member of the population had a chance of being selected to participate in the study (Leedy & Ormrod, 2019). Examples of non-probability sampling procedures include volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, and matched sampling. In quantitative research, probability sampling procedures are more rigorous in terms of generalizability (i.e., the extent to which research findings based on sample data extend or generalize to the larger population from which the sample was drawn). However, probability sampling is not always possible and non-probability sampling procedures are rigorous in their own right. Readers are encouraged to review Leedy and Ormrod’s (2019) commentary on probability and non-probability sampling procedures. Ultimately, the selection of a sampling technique should be made based on the population parameters, available resources, and the purpose and goals of the study.

     A Priori Statistical Power Analysis. It is essential that quantitative researchers determine the minimum necessary sample size for computing statistical analyses before launching data collection (Balkin & Sheperis, 2011; Sink & Mvududu, 2010). An insufficient sample size substantially increases the probability of committing a Type II error, which occurs when statistical testing reveals non–statistically significant findings even though, unbeknownst to the researcher, significant findings do exist. Computing an a priori statistical power analysis (i.e., one computed before starting data collection) reduces the chances of a Type II error by determining the smallest sample size necessary for detecting statistical significance, if statistical significance exists (Balkin & Sheperis, 2011). Readers can consult Balkin and Sheperis (2011) as well as Sink and Mvududu (2010) for an overview of statistical significance, effect size, and statistical power. A number of statistical power analysis programs are available to researchers. For example, G*Power (Faul et al., 2009) is a free software program for computing a priori statistical power analyses.
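
As a rough illustration, the Python sketch below uses the statsmodels library to solve for the minimum total sample size for a one-way, three-group ANOVA with a moderate effect (Cohen's f = 0.25), α = .05, and 80% power. The design, effect size, and library are illustrative assumptions; required sample sizes will differ for other designs (e.g., repeated-measures models computed in G*Power), so the power analysis should always match the planned analysis.

# Illustrative a priori power analysis (statsmodels); assumes a simple
# one-way, three-group ANOVA, so the result will differ from power analyses
# for other designs computed in programs such as G*Power.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f (moderate effect)
    alpha=0.05,        # Type I error rate
    power=0.80,        # desired statistical power (1 - Type II error rate)
    k_groups=3,        # number of independent groups
)
print(f"Minimum total sample size: {round(n_total)}")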

Sampling Frame and Location
     Counselors should report their sampling frame (total number of potential participants), response rate, raw sample (total number of participants who engaged with the study at any level, including missing and incomplete data), and the size of the final usable sample. It is also important to report the breakdown of the sample by demographic and other important participant background characteristics, for example, “XX.X% (n = XXX) of participants were first-generation college students, XX.X% (n = XXX) were second-generation . . .” The selection of demographic variables as well as inclusion and exclusion criteria should be justified in the literature review. Readers are encouraged to consult Creswell and Creswell (2018) for commentary on writing a strong literature review.
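
For example, a brief Python sketch such as the following can generate that style of demographic breakdown; the DataFrame, column name, and values are hypothetical placeholders.

# Hypothetical example of producing an "XX.X% (n = XXX)" demographic breakdown.
import pandas as pd

sample = pd.DataFrame({
    "generation_status": ["first", "first", "second", "third", "second",
                          "first", "third", "first", "second", "first"],
})

counts = sample["generation_status"].value_counts()
percents = sample["generation_status"].value_counts(normalize=True) * 100
for category in counts.index:
    print(f"{percents[category]:.1f}% (n = {counts[category]}) were "
          f"{category}-generation college students")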

The timeframe, setting, and location during which data were collected are important methodological considerations (APA, 2020). Specific names of institutions and agencies should be masked to protect their privacy and confidentiality; however, authors can give descriptions of the setting and location (e.g., “Data were collected between April 2021 and February 2022 from clients seeking treatment for addictive disorders at an outpatient, integrated behavioral health care clinic located in the Northeastern United States.”). Authors should also report details about any interventions, curriculum, qualifications and background information for research assistants, experimental design protocol(s), and any other procedural design issues that would be necessary for replication. In instances in which describing a treatment or conditions becomes too lengthy to report in full (e.g., step-by-step manualized therapy, programs, or interventions), researchers can include footnotes, appendices, and/or references to refer the reader to more information about the intervention protocol.

Missing Data
     Procedures for handling missing values (incomplete survey responses) are important considerations in quantitative data analysis. Perhaps the most straightforward option for handling missing data is to simply delete missing responses. However, depending on the percentage of data that are missing and how the data are missing (e.g., missing completely at random, missing at random, or not missing at random), data imputation techniques can be employed to recover missing values (Cook, 2021; Myers, 2011). Quantitative researchers should provide a clear rationale behind their decisions around the deletion of missing values or when using a data imputation method. Readers are encouraged to review Cook’s (2021) commentary on procedures for handling missing data in quantitative research.
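
The hypothetical Python sketch below contrasts two of the options described above, listwise deletion and mean imputation with scikit-learn. The item names and values are invented, and the appropriate choice depends on the amount and mechanism of missingness (Cook, 2021).

# Hypothetical comparison of listwise deletion versus mean imputation for
# missing item responses; the choice should rest on how much data are missing
# and the missingness mechanism (Cook, 2021).
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

items = pd.DataFrame({
    "item_1": [4, 3, np.nan, 5, 2],
    "item_2": [5, np.nan, 3, 4, 3],
    "item_3": [4, 4, 3, np.nan, 2],
})

complete_cases = items.dropna()                       # listwise deletion
imputer = SimpleImputer(strategy="mean")              # mean imputation
imputed = pd.DataFrame(imputer.fit_transform(items), columns=items.columns)

print(f"Respondents retained after listwise deletion: {len(complete_cases)}")
print(imputed.round(2))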

Measures
     Counseling and other social science researchers oftentimes use instruments and screening tools to appraise latent traits, which can be defined as variables that are inferred rather than observed (AERA et al., 2014). The purpose of the Measures (aka Instrumentation) section is to operationalize the construct(s) of measurement (Heppner et al., 2016). Specifically, the Measures subsection of the Methods in a quantitative manuscript tends to include a presentation of (a) the instrument and construct(s) of measurement, (b) reliability and validity evidence of test scores, and (c) cross-cultural fairness and norming. The Measures section might also include a Materials subsection for studies that employed data-gathering techniques or equipment besides or in addition to instruments (Heppner et al., 2016); for instance, if a research study involved the use of a biofeedback device to collect data on changes in participants’ body functions.

Instrument and Construct of Measurement
     Begin the Measures section by introducing the questionnaire or screening tool, its construct(s) of measurement, number of test items, example test items, and scale points. If applicable, the Measures section can also include information on scoring procedures and cutoff criterion; for example, total score benchmarks for low, medium, and high levels of the trait. Authors might also include commentary about how test scores will be operationalized to constitute the variables in the upcoming Data Analysis section.

Reliability and Validity Evidence of Test Scores
     Reliability evidence involves the degree to which test scores are stable or consistent and validity evidence refers to the extent to which scores on a test succeed in measuring what the test was designed to measure (AERA et al., 2014; Bardhoshi & Erford, 2017). Researchers should report both reliability and validity evidence of scores for each instrument they use (Wester et al., 2013). A number of forms of reliability evidence exist (e.g., internal consistency, test-retest, interrater, and alternate/parallel/equivalent forms) and the AERA standards (2014) outline five forms of validity evidence. For the purposes of this article, I will focus on internal consistency reliability, as it is the most popular and most commonly misused reliability estimate in social sciences research (Kalkbrenner, 2021a; McNeish, 2018), as well as construct validity. The psychometric properties of a test (including reliability and validity evidence) are contingent upon the scores from which they were derived. As such, no test is inherently valid or reliable; test scores are only reliable and valid for a certain purpose, at a particular time, for use with a specific sample. Accordingly, authors should discuss reliability and validity evidence in terms of scores, for example, “Stamm (2010) found reliability and validity evidence of scores on the Professional Quality of Life (ProQOL 5) with a sample of . . . ”

Internal Consistency Reliability Evidence. Internal consistency estimates are derived from associations between the test items based on one administration (Kalkbrenner, 2021a). Cronbach’s coefficient alpha (α) is indisputably the most popular internal consistency reliability estimate in counseling and throughout social sciences research in general (Kalkbrenner, 2021a; McNeish, 2018). The appropriate use of coefficient alpha is reliant on the data meeting the following statistical assumptions: (a) essential tau equivalence, (b) continuous level scale of measurement, (c) normally distributed data, (d) uncorrelated error, (e) unidimensional scale, and (f) unit-weighted scaling (Kalkbrenner, 2021a). For decades, coefficient alpha has been passed down in the instructional practice of counselor training programs. Coefficient alpha has appeared as the dominant reliability index in national counseling and psychology journals without most authors computing and reporting the necessary statistical assumption checking (Kalkbrenner, 2021a; McNeish, 2018). The psychometrically dubious practice of using alpha without assumption checking poses a threat to the veracity of counseling research, as the accuracy of coefficient alpha is threatened if the data violate one or more of the required assumptions.

Internal Consistency Reliability Indices and Their Appropriate Use. Composite reliability (CR) internal consistency estimates are derived in similar ways as coefficient alpha; however, the proper computation of CRs is not reliant on the data meeting many of alpha’s statistical assumptions (Kalkbrenner, 2021a; McNeish, 2018). For example, McDonald’s coefficient omega (ω or ωt) is a CR estimate that is not dependent on the data meeting most of alpha’s assumptions (Kalkbrenner, 2021a). In addition, omega hierarchical (ωh) and coefficient H are CR estimates that can be more advantageous than alpha. Despite the utility of CRs, their underuse in research practice is historically, in part, because of the complex nature of computation. However, recent versions of SPSS include a breakthrough point-and-click feature for computing coefficient omega as easily as coefficient alpha. Readers can refer to the SPSS user guide for steps to compute omega.

Guidelines for Reporting Internal Consistency Reliability. In the Measures subsection of the Methods section, researchers should report existing reliability evidence of scores for their instruments. This can be done briefly by reporting the results of multiple studies in the same sentence, as in: “A number of past investigators found internal consistency reliability evidence for scores on the [name of test] with a number of different samples, including college students (α = .XX, ω = .XX; Authors et al., 20XX), clients living with chronic back pain (α = .XX, ω = .XX; Authors et al., 20XX), and adults in the United States (α = .XX, ω = .XX; Authors et al., 20XX) . . .”

Researchers should also compute and report reliability estimates of test scores with their data set in the Measures section. If a researcher is using coefficient alpha, they have a duty to complete and report assumption checking to demonstrate that the properties of their sample data were suitable for alpha (Kalkbrenner, 2021a; McNeish, 2018). Another option is to compute a CR (e.g., ω or H) instead of alpha. However, Kalkbrenner (2021a) recommended that researchers report both coefficient alpha (because of its popularity) and coefficient omega (because of the robustness of the estimate). The proper interpretation of reliability estimates of test scores is done on a case-by-case basis, as the meaning of reliability coefficients is contingent upon the construct of measurement and the stakes or consequences of the results for test takers (Kalkbrenner, 2021a). The following tentative interpretative guidelines for adults’ scores on attitudinal measures were offered by Kalkbrenner (2021b) for coefficient alpha: α < .70 = poor, α > .70 to .84 = acceptable, α > .85 = strong; and for coefficient omega: ω < .65 = poor, ω > .65 to .80 = acceptable, ω > .80 = strong. It is important to note that these thresholds are for adults’ scores on attitudinal measures; acceptable internal consistency reliability estimates of scores should be much stronger for high-stakes testing.
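
As one illustration of how these estimates can be computed outside of SPSS, the following Python sketch calculates coefficient alpha from the classic variance-ratio formula and a one-factor McDonald's omega from maximum-likelihood factor loadings on simulated item responses. The data, sample size, and use of scikit-learn are assumptions for demonstration only; dedicated psychometric packages are reasonable alternatives.

# Simulated demonstration of coefficient alpha (classic variance-ratio formula)
# and a one-factor McDonald's omega computed from maximum-likelihood loadings.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=300)
items = pd.DataFrame(
    {f"item_{i}": latent + rng.normal(scale=1.0, size=300) for i in range(1, 6)}
)

# Coefficient alpha: k/(k - 1) * (1 - sum of item variances / variance of total score)
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var().sum() / items.sum(axis=1).var())

# One-factor omega: (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
fa = FactorAnalysis(n_components=1).fit(items)
loadings = fa.components_.ravel()
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + fa.noise_variance_.sum())

print(f"alpha = {alpha:.2f}, omega = {omega:.2f}")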

     Construct Validity Evidence of Test Scores. Construct validity involves the test’s ability to accurately capture a theoretical or latent construct (AERA et al., 2014). Construct validity considerations are particularly important for counseling researchers who tend to investigate latent traits as outcome variables. At a minimum, counseling researchers should report construct validity evidence for both internal structure and relations with theoretically relevant constructs. Internal structure (aka factorial validity) is a source of construct validity that represents the degree to which “the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based” (AERA et al., 2014, p. 16). Readers can refer to Kalkbrenner (2021b) for a free (open access publishing) overview of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) that is written in layperson’s terms. Relations with theoretically relevant constructs (e.g., convergent and divergent validity) are another source of construct validity evidence that involves comparing scores on the test in question with scores on other reputable tests (AERA et al., 2014; Strauss & Smith, 2009).

     Guidelines for Reporting Validity Evidence. Counseling researchers should report existing evidence of at least internal structure and relations with theoretically relevant constructs (e.g., convergent or divergent validity) for each instrument they use. EFA results alone are inadequate for demonstrating internal structure validity evidence of scores, as EFA is a much less rigorous test of internal structure than CFA (Kalkbrenner, 2021b). In addition, EFA results can reveal multiple retainable factor solutions, which need to be tested/confirmed via CFA before even initial internal structure validity evidence of scores can be established. Thus, both EFA and CFA are necessary for reporting/demonstrating initial evidence of internal structure of test scores. In an extension of internal structure, counselors should also report existing convergent and/or divergent validity of scores. High correlations (r > .50) demonstrate evidence of convergent validity and moderate-to-low correlations (r < .30, preferably r < .10) support divergent validity evidence of scores (Sink & Stroh, 2006; Swank & Mullen, 2017).
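
A simple check of relations with theoretically relevant constructs can be as straightforward as correlating total scores, as in the hypothetical Python sketch below; the simulated scores are invented, and the r > .50 and r < .30 heuristics come from the sources cited above.

# Hypothetical convergent/divergent validity check: correlate total scores on
# the focal scale with totals from a theoretically similar and a theoretically
# dissimilar measure, then compare against the heuristics cited above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
focal_total = rng.normal(size=200)
related_total = 0.7 * focal_total + rng.normal(scale=0.7, size=200)   # similar construct
unrelated_total = rng.normal(size=200)                                # dissimilar construct

r_convergent, _ = pearsonr(focal_total, related_total)
r_divergent, _ = pearsonr(focal_total, unrelated_total)
print(f"Convergent r = {r_convergent:.2f} (evidence if r > .50)")
print(f"Divergent r = {r_divergent:.2f} (evidence if r < .30)")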

In an ideal situation, a researcher will have the resources to test and report the internal structure (e.g., compute CFA firsthand) of scores on the instrumentation with their sample. However, CFA requires large sample sizes (Kalkbrenner, 2021b), which oftentimes is not feasible. It might be more practical for researchers to test and report relations with theoretically relevant constructs, though adding one or more questionnaire(s) to data collection efforts can come with the cost of increasing respondent fatigue. In these instances, researchers might consider reporting other forms of validity evidence (e.g., evidence based on test content, criterion validity, or response processes; AERA et al., 2014). In instances when computing firsthand validity evidence of scores is not logistically viable, researchers should be transparent about this limitation and pay especially careful attention to presenting evidence for cross-cultural fairness and norming.

Cross-Cultural Fairness and Norming
     In a psychometric context, fairness (sometimes referred to as cross-cultural fairness) is a fundamental validity issue and a complex construct to define (AERA et al., 2014; Kane, 2010; Neukrug & Fawcett, 2015). I offer the following composite definition of cross-cultural fairness for the purposes of a quantitative Measures section: the degree to which test construction, administration procedures, interpretations, and uses of results are equitable and represent an accurate depiction of a diverse group of test takers’ abilities, achievement, attitudes, perceptions, values, and/or experiences (AERA et al., 2014; Educational Testing Service [ETS], 2016; Kane, 2010; Kane & Bridgeman, 2017). Counseling researchers should consider the following central fairness issues when selecting or developing instrumentation: measurement bias, accessibility, universal design, equivalent meaning (invariance), test content, opportunity to learn, test adaptations, and comparability (AERA et al., 2014; Kane & Bridgeman, 2017). Providing a comprehensive overview of fairness is beyond the scope of this article; however, readers are encouraged to read Chapter 3 in the AERA standards (2014) on Fairness in Testing.

In the Measures section, counseling researchers should include commentary on how and in what ways cross-cultural fairness guided their selection, administration, and interpretation of procedures and test results (AERA et al., 2014; Kalkbrenner, 2021b). Cross-cultural fairness and construct validity are related constructs (AERA et al., 2014). Accordingly, citing construct validity of test scores (see the previous section) with normative samples similar to the researcher’s target population is one way to provide evidence of cross-cultural fairness. However, construct validity evidence alone might not be a sufficient indication of cross-cultural fairness, as the latent meaning of test scores is a function of test takers’ cultural context (Kalkbrenner, 2021b). To this end, when selecting instrumentation, researchers should review original psychometric studies and consider the normative sample(s) from which test scores were derived.

Commentary on the Danger of Using Self-Developed and Untested Scales
     Counseling researchers have an ethical duty to “carefully consider the validity, reliability, psychometric limitations, and appropriateness of instruments when selecting assessments” (ACA, 2014, p. 11). Quantitative researchers might encounter instances in which a scale is not available to measure their desired construct of measurement (latent/inferred variable). In these cases, the first step in the line of research is oftentimes to conduct an instrument development and score validation study (AERA et al., 2014; Kalkbrenner, 2021b). Detailing the protocol for conducting psychometric research is outside the scope of this article; however, readers can refer to the MEASURE Approach to Instrument Development (Kalkbrenner, 2021c) for a free (open access publishing) overview of the steps in an instrument development and score validation study. Adapting an existing scale can be an option in lieu of instrument development; however, according to the AERA standards (2014), “an index that is constructed by manipulating and combining test scores should be subjected to the same validity, reliability, and fairness investigations that are expected for the test scores that underlie the index” (p. 210). Although it is not necessary that all quantitative researchers become psychometricians and conduct full-fledged psychometric studies to validate scores on instrumentation, researchers do have a responsibility to report evidence of the reliability, validity, and cross-cultural fairness of test scores for each instrument they used. Without at least initial construct validity testing of scores (calibration), researchers cannot determine what, if anything at all, an untested instrument actually measures.

Data Analysis
     Counseling researchers should report and explain the selection of their data analytic procedures (e.g., statistical analyses) in a Data Analysis (or Statistical Analysis) subsection of the Methods or Results section (Giordano et al., 2021; Leedy & Ormrod, 2019). The placement of the Data Analysis section in either the Methods or Results section can vary between publication outlets; however, this section tends to include commentary on variables, statistical models and analyses, and statistical assumption checking procedures.

Operationalizing Variables and Corresponding Statistical Analyses
     Clearly outlining each variable is an important first step in selecting the most appropriate statistical analysis for answering each research question (Creswell & Creswell, 2018). Researchers should specify the independent variable(s) and corresponding levels as well as the dependent variable(s); for example, “The first independent variable, time, was composed of the three following levels: pre, middle, and post. The dependent variables were participants’ scores on the burnout and compassion satisfaction subscales of the ProQOL 5.” After articulating the variables, counseling researchers are tasked with identifying each variable’s scale of measurement (Creswell & Creswell, 2018; Field, 2018; Flinn & Kalkbrenner, 2021). Researchers can select the most appropriate statistical test(s) for answering their research question(s) based on the scale of measurement for each variable and referring to Table 8.3 on page 159 in Creswell and Creswell (2018), Figure 1 in Flinn and Kalkbrenner (2021), or the chart on page 1072 in Field (2018).
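
For instance, with one categorical independent variable (three treatment conditions) and one continuous dependent variable (posttest anxiety scores), a one-way ANOVA is a common choice; the Python sketch below runs such a test on simulated data. The condition names mirror the exemplar that follows, but the scores are invented, and a pretest/posttest design would ordinarily call for a repeated-measures or mixed model instead.

# Simulated one-way ANOVA comparing posttest anxiety scores across three
# hypothetical treatment conditions (the scores are invented for illustration).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
breathing_pmr = rng.normal(loc=50, scale=10, size=40)   # deep breathing + progressive muscle relaxation
group_exercise = rng.normal(loc=48, scale=10, size=40)  # group exercise
combined = rng.normal(loc=44, scale=10, size=40)        # both conditions

f_stat, p_value = f_oneway(breathing_pmr, group_exercise, combined)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")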

Assumption Checking
     Statistical analyses used in quantitative research are derived based on a set of underlying assumptions (Field, 2018; Giordano et al., 2021). Accordingly, it is essential that quantitative researchers outline their protocol for testing their sample data for the appropriate statistical assumptions. Assumptions of common statistical tests in counseling research include normality, absence of outliers (multivariate and/or univariate), homogeneity of covariance, homogeneity of regression slopes, homoscedasticity, independence, linearity, and absence of multicollinearity (Flinn & Kalkbrenner, 2021; Giordano et al., 2021). Readers can refer to Figure 2 in Flinn and Kalkbrenner (2021) for an overview of statistical assumptions for the major statistical analyses in counseling research.
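
The following Python sketch illustrates a few of these checks on simulated data: the Shapiro-Wilk test for normality, Levene's test for homogeneity of variance, and variance inflation factors for multicollinearity among predictors. The variables are hypothetical, and the appropriate set of assumption checks depends on the statistical analysis selected.

# Simulated examples of three common assumption checks: Shapiro-Wilk
# (normality), Levene's test (homogeneity of variance), and variance inflation
# factors (multicollinearity among predictors).
import numpy as np
import pandas as pd
from scipy.stats import shapiro, levene
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
group_a = rng.normal(size=40)
group_b = rng.normal(size=40)
print("Shapiro-Wilk (group A):", shapiro(group_a))
print("Levene's test:", levene(group_a, group_b))

predictors = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
predictors["x3"] = 0.9 * predictors["x1"] + rng.normal(scale=0.3, size=100)  # nearly collinear with x1

X = sm.add_constant(predictors)
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print({col: round(v, 2) for col, v in vifs.items() if col != "const"})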

Exemplar Quantitative Methods Section

The following section includes an exemplar quantitative Methods section based on a hypothetical example and a practice data set. Producers and consumers of quantitative research can refer to it as a model for writing their own Methods section or for evaluating the rigor of an existing one. As stated previously, a well-written literature review and research question(s) are essential for grounding the study and the Methods section (Flinn & Kalkbrenner, 2021). The final piece of a literature review section is typically the research question(s). Accordingly, the following research question guided the exemplar Methods section: To what extent are there differences in anxiety severity between college students who participate in deep breathing exercises with progressive muscle relaxation, a group exercise program, or both group exercise and deep breathing with progressive muscle relaxation?

——-Exemplar——-

Methods

A quantitative group comparison research design was employed based on a post-positivist philosophy of science (Creswell & Creswell, 2018). Specifically, I implemented a quasi-experimental, control group pretest/posttest design to answer the research question (Leedy & Ormrod, 2019). Consistent with a post-positivist philosophy of science, I reflected on pursuing a probabilistic objective answer that is situated within the context of imperfect and fallible evidence. The rationale for the present study was grounded in Dr. David Servan-Schreiber’s (2009) theory of lifestyle practices for integrated mental and physical health. According to Servan-Schreiber, simultaneously focusing on improving one’s mental and physical health is more effective than focusing on either physical health or mental wellness in isolation. Consistent with Servan-Schreiber’s theory, the aim of the present study was to compare the utility of three different approaches for anxiety reduction: a behavioral approach alone, a physiological approach alone, and a combined behavioral approach and physiological approach.

I am in my late 30s and identify as a White man. I have a PhD in counselor education as well as an MS in clinical mental health counseling. I have a deep belief in and an active line of research on the utility of total wellness (combined mental and physical health). My research and clinical experience have informed my passion and interest in studying the utility of integrated physical and psychological health services. More specifically, my personal beliefs, values, and interest in total wellness influenced my decision to conduct the present study. I carefully followed the procedures outlined below to reduce the chances that my personal values biased the research design.

Participants and Procedures
     Data collection began following approval from the IRB. Data were collected during the fall 2022 semester from undergraduate students who were at least 18 years old and enrolled in at least one class at a land-grant, research-intensive university located in the Southwestern United States. An a priori statistical power analysis was computed using G*Power (Faul et al., 2009). Results indicated that a sample size of at least 42 would provide 80% power at α = .05 to detect a moderate effect size, f = 0.25.
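
For readers who prefer a scripted alternative to G*Power, the sketch below shows the same kind of a priori calculation using the statsmodels package. It approximates the design as a one-way between-subjects ANOVA with three groups, so the required N it returns will be larger than the 42 reported above, which came from G*Power’s mixed-design (within-between interaction) F test.

```python
# Illustrative a priori power analysis with statsmodels; this one-way
# between-subjects ANOVA approximation will not reproduce the G*Power
# mixed-design estimate of 42 because it ignores the repeated-measures
# component of the design.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f, a moderate effect
    alpha=0.05,
    power=0.80,
    k_groups=3,
)
print(f"Required total N (one-way ANOVA approximation): {round(n_total)}")
```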

I obtained an email list from the registrar’s office of all students enrolled in a section of a Career Excellence course, which was selected to recruit students from a variety of academic majors because all undergraduate students in the College of Education are required to take this course. The focus of this study (mental and physical wellness) was also consistent with the purpose of the course (success in college). A non-probability, convenience sampling procedure was employed by sending a recruitment message to students’ email addresses via the Qualtrics online survey platform. The response rate was approximately 15%, with a total of 222 prospective participants indicating their interest in the study by clicking on the electronic recruitment link, which automatically sent them an invitation to attend an information session about the study. One hundred forty-four students attended the information session, 129 of whom provided their voluntary informed consent to enroll in the study. Participants were given a confidential identification number to track their pretest/posttest responses, and then they completed the pretest (see the Measures section below). Respondents were randomly assigned in equal groups to (a) a deep breathing with progressive muscle relaxation condition, (b) a group exercise condition, or (c) a combined exercise and deep breathing with progressive muscle relaxation condition.

A missing values analysis showed that less than 5% of the data were missing for each case. Expectation maximization was used to impute missing values, as Little’s Missing Completely at Random (MCAR) test indicated that the data could be treated as MCAR (p = .367). Data from five participants who did not return to complete the posttest at the end of the semester were removed, yielding a final usable sample of N = 124. Participants (N = 124) ranged in age from 18 to 33 years (M = 21.64, SD = 3.70). In terms of gender identity, 64.5% (n = 80) self-identified as female, 32.3% (n = 40) as male, 0.8% (n = 1) as transgender, and 2.4% (n = 3) did not specify their gender identity. For ethnic identity, 50.0% (n = 62) identified as White, 26.6% (n = 33) as Latinx, 12.1% (n = 15) as Asian, 9.7% (n = 12) as Black, 0.8% (n = 1) as Alaskan Native, and 0.8% (n = 1) did not specify their ethnic identity. In terms of generational status, 36.3% (n = 45) of participants were first-generation college students and 63.7% (n = 79) were second-generation or beyond.
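
The sketch below illustrates one way the item-level missingness described above might be handled in Python. Little’s MCAR test is typically computed in SPSS or R rather than Python, so it is omitted here; scikit-learn’s IterativeImputer is used as a model-based stand-in for expectation maximization (it is not the identical algorithm), and the data and column names are simulated assumptions for illustration.

```python
# Illustrative model-based imputation of item-level missingness on simulated
# data; IterativeImputer serves as a stand-in for expectation maximization
# and is not the identical algorithm reported in the exemplar.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)
items = pd.DataFrame(rng.integers(0, 4, size=(124, 7)).astype(float),
                     columns=[f"anx_item{i}" for i in range(1, 8)])

# Inject roughly 3% missing values at random for the demonstration
items = items.mask(rng.random(items.shape) < 0.03)

imputed = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(items),
                       columns=items.columns)
print("Missing before:", int(items.isna().sum().sum()),
      "| Missing after:", int(imputed.isna().sum().sum()))
```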

Group Exercise and Deep Breathing Programs
     I was awarded a small grant to offer on-campus deep breathing with progressive muscle relaxation and group exercise programs. The structure of the group exercise program was based on Patterson et al. (2021) and consisted of more than 50 available exercise classes each week (e.g., cycling, yoga, swimming, dance). There was no limit to the number of classes that participants could attend; however, attending at least one class each week was required for participation in the study. Readers can refer to Patterson et al. for more information about the group exercise programming.

Neeru et al.’s (2015) deep breathing and progressive muscle relaxation programming was used in the present study. Participants completed daily deep breathing and Jacobson Progressive Muscle Relaxation (JPMR). JPMR was selected because of its documented success with treating anxiety disorders (Neeru et al., 2015). Specifically, the program consisted of four deep breathing steps completed five times and JPMR for approximately 25 minutes daily. Participants attended a weekly deep breathing and JPMR session facilitated by a licensed professional counselor. Participants also practiced deep breathing and JPMR on their own daily and kept a log to document their practice sessions. Readers can refer to Neeru et al. for more information about JPMR and the deep breathing exercises.

Measures
     Prospective participants read an informed consent statement and indicated their voluntary informed consent by clicking on a checkbox. Next, participants confirmed that they met the following inclusion criteria: (a) at least 18 years old and (b) currently enrolled in at least one undergraduate college class. The instrumentation began with demographic items regarding participants’ gender identity, ethnic identity, age, and confidential identification number to track their pretest and posttest scores. Lastly, participants completed a convergent validity measure, the Mental Health Inventory-5 (MHI-5), and the Generalized Anxiety Disorder-7 (GAD-7) to measure the outcome variable (anxiety severity).

Reliability and Validity Evidence of Test Scores
     Tests of internal consistency were computed to examine the reliability of scores on the screening tool for appraising anxiety severity with undergraduate students in the present sample. For internal consistency reliability of scores, coefficient alpha (α) and coefficient omega (ω) were computed with the following minimum thresholds for adults’ scores on attitudinal measures: α > .70 and ω > .65, based on the recommendations of Kalkbrenner (2021b).
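
The sketch below shows how coefficient alpha can be computed directly from an item-score matrix with NumPy. The function is a generic implementation of the standard alpha formula applied to simulated responses, not code from the exemplar study; coefficient omega additionally requires loadings from a one-factor model and is therefore only noted in a comment.

```python
# Minimal implementation of coefficient alpha from an item-score matrix
# (simulated data). Coefficient omega would additionally require the
# loadings and uniquenesses from a one-factor model and is not computed here.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated 7-item responses (0-3 ratings) driven by a common true score
rng = np.random.default_rng(0)
true_score = rng.normal(size=(124, 1))
responses = np.clip(np.round(1.5 + true_score + rng.normal(scale=0.8, size=(124, 7))), 0, 3)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # compare against the .70 threshold
```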

The Mental Health Inventory–5. Participants completed the Mental Health Inventory-5 (MHI-5) to test the convergent validity of the present sample’s scores on the GAD-7, which was used to measure the outcome variable in this study, anxiety severity. The MHI-5 is a 5-item measure for appraising overall mental health (Berwick et al., 1991). Higher MHI-5 scores reflect better mental health. Participants responded to test items (e.g., “How much of the time, during the past month, have you been a very nervous person?”) on the following Likert-type scale: 0 = none of the time, 1 = a little of the time, 2 = some of the time, 3 = a good bit of the time, 4 = most of the time, or 5 = all of the time. The MHI-5 has particular utility as a convergent validity measure because of its brevity (5 items) coupled with the extensive support for its psychometric properties (e.g., Berwick et al., 1991; Rivera-Riquelme et al., 2019; Thorsen et al., 2013). As just a few examples, Rivera-Riquelme et al. (2019) found acceptable internal consistency reliability evidence (α = .71, ω = .78) and internal structure validity evidence of MHI-5 scores. In addition, the findings of Thorsen et al. (2013) demonstrated convergent validity evidence of MHI-5 scores. Findings in the extant literature (e.g., Foster et al., 2016; Vijayan & Joseph, 2015) established an inverse relationship between anxiety and mental health. Thus, a strong negative correlation (i.e., |r| > .50; Sink & Stroh, 2006) between the MHI-5 and GAD-7 would support convergent validity evidence of scores.

     The Generalized Anxiety Disorder–7. The GAD-7 is a 7-item screening tool for appraising anxiety severity (Spitzer et al., 2006). Participants respond to test items based on the following prompt: “Over the last 2 weeks, how often have you been bothered by the following problems?” and anchor definitions: 0 = not at all, 1 = several days, 2 = more than half the days, or 3 = nearly every day (Spitzer et al., 2006, p. 1739). Sample test items include “being so restless that it’s hard to sit still” and “feeling afraid as if something awful might happen.” The GAD-7 items can be summed into an interval-level composite score, with higher scores indicating greater anxiety severity. GAD-7 scores can range from 0 to 21 and are commonly classified as minimal (0–4), mild (5–9), moderate (10–14), or severe (15–21) anxiety (Spitzer et al., 2006).
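
As an illustration of the scoring just described, the hypothetical helper below sums seven item responses into the composite score and applies the Spitzer et al. (2006) cut points. The function name and interface are assumptions of this example, not part of the GAD-7 itself.

```python
# Hypothetical GAD-7 scoring helper for illustration; cut points follow
# Spitzer et al. (2006), and item responses are assumed to be coded 0-3.
from typing import Sequence, Tuple

def score_gad7(responses: Sequence[int]) -> Tuple[int, str]:
    if len(responses) != 7 or not all(0 <= r <= 3 for r in responses):
        raise ValueError("The GAD-7 requires seven item responses coded 0-3.")
    total = sum(responses)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    else:
        severity = "severe"
    return total, severity

print(score_gad7([2, 1, 2, 1, 1, 2, 2]))  # -> (11, 'moderate')
```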

In the initial score validation study, Spitzer et al. (2006) found evidence for internal consistency (α = .92) and test-retest reliability (intraclass correlation = .83) of GAD-7 scores among adults in the United States who were receiving services in primary care clinics. In more recent years, a number of additional investigators found internal consistency reliability evidence for GAD-7 scores, including samples of undergraduate college students in the southern United States (α = .91; Sriken et al., 2022), Black and Latinx adults in the United States (α = .93, ω = .93; Kalkbrenner, 2022), and English-speaking college students living in Ethiopia (ω = .77; Manzar et al., 2021). Similarly, the data set in the present study displayed acceptable internal consistency reliability evidence for GAD-7 scores (α = .82, ω = .81).

Spitzer et al. (2006) established internal structure validity evidence via factor analysis, convergent validity evidence via correlations with established screening tools, and criterion validity evidence by demonstrating the capacity of GAD-7 scores to detect likely cases of generalized anxiety disorder. A number of subsequent investigators found internal structure validity evidence of GAD-7 scores via CFA and multiple-group CFA (Kalkbrenner, 2022; Sriken et al., 2022). In addition, the findings of Sriken et al. (2022) supported both the convergent and divergent validity of GAD-7 scores with other established tests. The data set in the present study (N = 124) was not large enough for internal structure validity testing. However, a strong negative correlation (r = −.78) between the GAD-7 and MHI-5 provided convergent validity evidence of GAD-7 scores with the present sample of undergraduate students.

In terms of norming and cross-cultural fairness, there were qualitative differences between the normative GAD-7 sample in the original score validation study (adults in the United States receiving services in primary care clinics) and the non-clinical sample of young adult college students in the present study. However, the demographic profile of the present sample is consistent with Sriken et al. (2022), who validated GAD-7 scores with a large sample (N = 414) of undergraduate college students. For example, the gender identity composition of the current sample closely resembled that of Sriken et al.’s sample, which included 66.7% women, 33.1% men, and 0.2% transgender individuals. In terms of ethnic identity, the present sample was consistent with Sriken et al. for White and Black participants, although it included a smaller proportion of Asian students (12.1% vs. 19.6% in Sriken et al.) and a greater proportion of Latinx students (26.6% vs. 5.3%).

Data Analysis and Assumption Checking
     The present study included two categorical-level independent variables and one continuous-level dependent variable. The first independent variable, program, consisted of three levels: (a) deep breathing with progressive muscle relaxation, (b) group exercise, or (c) both exercise and deep breathing with progressive muscle relaxation. The second independent variable, time, consisted of two levels: the beginning of the semester and the end of the semester. The dependent variable was participants’ interval-level score on the GAD-7. Accordingly, a 3 (program) × 2 (time) mixed-design analysis of variance (ANOVA) was the most appropriate statistical test for answering the research question (Field, 2018).
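
The sketch below illustrates how a 3 (program) × 2 (time) mixed-design ANOVA might be specified in Python with the pingouin package on long-format data. The data frame, column names, and simulated values are assumptions for demonstration and do not reproduce the exemplar’s analysis output.

```python
# Illustrative 3 (program) x 2 (time) mixed-design ANOVA using pingouin on
# simulated long-format data; column names and values are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
records = []
for program in ("breathing", "exercise", "combined"):
    for i in range(40):  # 40 simulated participants per program
        pre = rng.normal(10, 3)
        post = pre - rng.normal(2, 1)  # simulated reduction in anxiety severity
        subject = f"{program}_{i}"
        records.append({"subject": subject, "program": program, "time": "pre", "anxiety": pre})
        records.append({"subject": subject, "program": program, "time": "post", "anxiety": post})
df = pd.DataFrame(records)

aov = pg.mixed_anova(data=df, dv="anxiety", within="time", subject="subject", between="program")
print(aov[["Source", "F", "p-unc", "np2"]])
```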

The data were examined against the following statistical assumptions for a mixed-design ANOVA: absence of outliers, normality, homogeneity of variance, and sphericity of the covariance matrix, based on the recommendations of Field (2018). Standardized scores revealed an absence of univariate outliers (no values exceeded |z| = 3.29). A review of skewness and kurtosis values indicated that the data were highly consistent with a normal distribution, with the majority of values less than ±1.0. The results of Levene’s test demonstrated that the data met the assumption of homogeneity of variance, F(2, 121) = 0.73, p = .486. Testing the data for sphericity was not applicable in this case, as the within-subjects independent variable (time) comprised only two levels.

——-End Exemplar——-

Conclusion

The current article is a primer on guidelines, best practices, and recommendations for writing or evaluating the rigor of the Methods section of quantitative studies. Although the major elements of the Methods section summarized in this manuscript tend to be similar across the national peer-reviewed counseling journals, differences can exist between journals based on the content of the article and the editorial board members’ preferences. Accordingly, it can be advantageous for prospective authors to review recently published manuscripts in their target journal(s) to look for similarities in the structure of the Methods (and other sections). For instance, in one journal, participants and procedures might be reported in a single subsection, whereas in other journals they might be reported separately. In addition, most journals post a list of guidelines for prospective authors on their websites, which can include instructions for writing the Methods section. The Methods section might be the most important section in a quantitative study, as in all likelihood methodological flaws cannot be resolved once data collection is complete, and serious methodological flaws will compromise the integrity of the entire study, rendering it unpublishable. It is also essential that consumers of quantitative research can proficiently evaluate the quality of a Methods section, as poor methods can render the results meaningless. Accordingly, the significance of carefully planning, executing, and writing a quantitative research Methods section cannot be overstated.

Conflict of Interest and Funding Disclosure
The author reported no conflict of interest
or funding contributions for the development
of this manuscript.

References

American Counseling Association. (2014). ACA code of ethics.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing.
https://www.aera.net/Publications/Books/Standards-for-Educational-Psychological-Testing-2014-Edition

American Psychological Association. (2020). Publication manual of the American Psychological Association: The official guide to APA style (7th ed.).

Balkin, R. S., & Sheperis, C. J. (2011). Evaluating and reporting statistical power in counseling research. Journal of Counseling & Development, 89(3), 268–272. https://doi.org/10.1002/j.1556-6678.2011.tb00088.x

Bardhoshi, G., & Erford, B. T. (2017). Processes and procedures for estimating score reliability and precision. Measurement and Evaluation in Counseling and Development, 50(4), 256–263.
https://doi.org/10.1080/07481756.2017.1388680

Berwick, D. M., Murphy, J. M., Goldman, P. A., Ware, J. E., Jr., Barsky, A. J., & Weinstein, M. C. (1991). Performance of a five-item mental health screening test. Medical Care, 29(2), 169–176.
https://doi.org/10.1097/00005650-199102000-00008

Cook, R. M. (2021). Addressing missing data in quantitative counseling research. Counseling Outcome Research and Evaluation, 12(1), 43–53. https://doi.org/10.1080/21501378.2019.171103

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.

Educational Testing Service. (2016). ETS international principles for fairness review of assessments: A manual for developing locally appropriate fairness review guidelines for various countries. https://www.ets.org/content/dam/ets-org/pdfs/about/fairness-review-international.pdf

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160.
https://doi.org/10.3758/BRM.41.4.1149

Field, A. (2018). Discovering statistics using IBM SPSS Statistics (5th ed.). SAGE.

Flinn, R. E., & Kalkbrenner, M. T. (2021). Matching variables with the appropriate statistical tests in counseling research. Teaching and Supervision in Counseling, 3(3), Article 4. https://doi.org/10.7290/tsc030304

Foster, T., Steen, L., O’Ryan, L., & Nelson, J. (2016). Examining how the Adlerian life tasks predict anxiety in first-year counseling students. The Journal of Individual Psychology, 72(2), 104–120. https://doi.org/10.1353/jip.2016.0009

Giordano, A. L., Schmit, M. K., & Schmit, E. L. (2021). Best practice guidelines for publishing rigorous research in counseling. Journal of Counseling & Development, 99(2), 123–133. https://doi.org/10.1002/jcad.12360

Hays, D. G., & Singh, A. A. (2012). Qualitative inquiry in clinical and educational settings. Guilford.

Heppner, P. P., Wampold, B. E., Owen, J., Wang, K. T., & Thompson, M. N. (2016). Research design in counseling (4th ed.). Cengage.

Kalkbrenner, M. T. (2021a). Alpha, omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation.
https://doi.org/10.1080/21501378.2021.1940118

Kalkbrenner, M. T. (2021b). Enhancing assessment literacy in professional counseling: A practical overview of factor analysis. The Professional Counselor, 11(3), 267–284. https://doi.org/10.15241/mtk.11.3.267

Kalkbrenner, M. T. (2021c). A practical guide to instrument development and score validation in the social sciences: The MEASURE Approach. Practical Assessment, Research & Evaluation, 26(1), Article 1.
https://doi.org/10.7275/svg4-e671

Kalkbrenner, M. T. (2022). Validation of scores on the Lifestyle Practices and Health Consciousness Inventory with Black and Latinx adults in the United States: A three-dimensional model. Measurement and Evaluation in Counseling and Development, 55(2), 84–97. https://doi.org/10.1080/07481756.2021.1955214

Kane, M. (2010). Validity and fairness. Language Testing, 27(2), 177–182. https://doi.org/10.1177/0265532209349467

Kane, M., & Bridgeman, B. (2017). Research on validity theory and practice at ETS. In R. E. Bennett & M. von Davier (Eds.), Advancing human assessment: The methodological, psychological and policy contributions of ETS (pp. 489–552). Springer. https://doi.org/10.1007/978-3-319-58689-2_16

Korn, J. H., & Bram, D. R. (1988). What is missing in the Method section of APA journal articles? American Psychologist, 43(12), 1091–1092. https://doi.org/10.1037/0003-066X.43.12.1091

Leedy, P. D., & Ormrod, J. E. (2019). Practical research: Planning and design (12th ed.). Pearson.

Lutz, W., & Hill, C. E. (2009). Quantitative and qualitative methods for psychotherapy research: Introduction to special section. Psychotherapy Research, 19(4–5), 369–373. https://doi.org/10.1080/10503300902948053

Manzar, M. D., Alghadir, A. H., Anwer, S., Alqahtani, M., Salahuddin, M., Addo, H. A., Jifar, W. W., & Alasmee, N. A. (2021). Psychometric properties of the General Anxiety Disorders-7 Scale using categorical data methods: A study in a sample of university attending Ethiopian young adults. Neuropsychiatric Disease and Treatment, 17(1), 893–903. https://doi.org/10.2147/NDT.S295912

McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412–433. https://doi.org/10.1037/met0000144

Myers, T. A. (2011). Goodbye, listwise deletion: Presenting hot deck imputation as an easy and effective tool for handling missing data. Communication Methods and Measures, 5(4), 297–310.
https://doi.org/10.1080/19312458.2011.624490

Neeru, Khakha, D. C., Satapathy, S., & Dey, A. B. (2015). Impact of Jacobson Progressive Muscle Relaxation (JPMR) and deep breathing exercises on anxiety, psychological distress and quality of sleep of hospitalized older adults. Journal of Psychosocial Research, 10(2), 211–223.

Neukrug, E. S., & Fawcett, R. C. (2015). Essentials of testing and assessment: A practical guide for counselors, social workers, and psychologists (3rd ed.). Cengage.

Onwuegbuzie, A. J., & Leech, N. L. (2005). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. International Journal of Social Research Methodology, 8(5), 375–387. https://doi.org/10.1080/13645570500402447

Patterson, M. S., Gagnon, L. R., Vukelich, A., Brown, S. E., Nelon, J. L., & Prochnow, T. (2021). Social networks, group exercise, and anxiety among college students. Journal of American College Health, 69(4), 361–369. https://doi.org/10.1080/07448481.2019.1679150

Rivera-Riquelme, M., Piqueras, J. A., & Cuijpers, P. (2019). The Revised Mental Health Inventory-5 (MHI-5) as an ultra-brief screening measure of bidimensional mental health in children and adolescents. Psychiatry Research, 247(1), 247–253. https://doi.org/10.1016/j.psychres.2019.02.045

Rovai, A. P., Baker, J. D., & Ponton, M. K. (2013). Social science research design and statistics: A practitioner’s guide to research methods and SPSS analysis. Watertree Press.

Servan-Schreiber, D. (2009). Anticancer: A new way of life (3rd ed.). Viking Publishing.

Sink, C. A., & Mvududu, N. H. (2010). Statistical power, sampling, and effect sizes: Three keys to research relevancy. Counseling Outcome Research and Evaluation, 1(2), 1–18. https://doi.org/10.1177/2150137810373613

Sink, C. A., & Stroh, H. R. (2006). Practical significance: The use of effect sizes in school counseling research. Professional School Counseling, 9(5), 401–411. https://doi.org/10.1177/2156759X0500900406

Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication, 25(3), 389–411. https://doi.org/10.1177/0741088308317815

Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing Generalized Anxiety Disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092–1097.
https://doi.org/10.1001/archinte.166.10.1092

Sriken, J., Johnsen, S. T., Smith, H., Sherman, M. F., & Erford, B. T. (2022). Testing the factorial validity and measurement invariance of college student scores on the Generalized Anxiety Disorder (GAD-7) Scale across gender and race. Measurement and Evaluation in Counseling and Development, 55(1), 1–16.
https://doi.org/10.1080/07481756.2021.1902239

Stamm, B. H. (2010). The concise ProQOL manual (2nd ed.). bit.ly/StammProQOL

Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1–25. https://doi.org/10.1146/annurev.clinpsy.032408.153639

Swank, J. M., & Mullen, P. R. (2017). Evaluating evidence for conceptually related constructs using bivariate correlations. Measurement and Evaluation in Counseling and Development, 50(4), 270–274.
https://doi.org/10.1080/07481756.2017.1339562

Thorsen, S. V., Rugulies, R., Hjarsbech, P. U., & Bjorner, J. B. (2013). The predictive value of mental health for long-term sickness absence: The Major Depression Inventory (MDI) and the Mental Health Inventory (MHI-5) compared. BMC Medical Research Methodology, 13(1), Article 115. https://doi.org/10.1186/1471-2288-13-115

Vijayan, P., & Joseph, M. I. (2015). Wellness and social interaction anxiety among adolescents. Indian Journal of Health and Wellbeing, 6(6), 637–639.

Wester, K. L., Borders, L. D., Boul, S., & Horton, E. (2013). Research quality: Critique of quantitative articles in the Journal of Counseling & Development. Journal of Counseling & Development, 91(3), 280–290.
https://doi.org/10.1002/j.1556-6676.2013.00096.x

Appendix
Outline and Brief Overview of a Quantitative Methods Section

Methods

  • Research design (e.g., group comparison [experimental, quasi-experimental, ex-post-facto], correlational/predictive) and conceptual framework
  • Researcher bias and reflexivity statement

Participants and Procedures

  • Recruitment procedures for data collection in enough detail for replication
  • Research ethics including but not limited to receiving institutional review board (IRB) approval
  • Sampling procedure: Researcher access to prospective participants, recruitment procedures, and data collection modality (e.g., online survey)
  • Sampling technique: Probability sampling (e.g., simple random sampling, systematic random sampling, stratified random sampling, cluster sampling) or non-probability sampling (e.g., volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, matched sampling)
  • A priori statistical power analysis
  • Sampling frame, response rate, raw sample, missing data, and the size of the final useable sample
  • Demographic breakdown for participants
  • Timeframe, setting, and location where data were collected

Measures

  • Introduction of the instrument and construct(s) of measurement (include sample test items)
  • Reliability and validity evidence of test scores (for each instrument):
    • Existing reliability (e.g., internal consistency [coefficient alpha, coefficient omega, or coefficient H], test/retest) and validity (e.g., internal structure, convergent/divergent, criterion) evidence of scores
      • *Note: At a minimum, internal structure validity evidence of scores should include both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).
    • Reliability and validity evidence of test scores with the data set in the present study
      • *Note: Reporting coefficient alpha alone, without checking its statistical assumptions, is insufficient; compute both coefficient alpha and coefficient omega, or report alpha with proper assumption checking.
    • Cross-cultural fairness and norming: Commentary on how and in what ways cross-cultural fairness guided the selection, administration, and interpretation of procedures and test results
      • Review and citations of original psychometric studies and normative samples

Data Analysis

  • Operationalized variables and scales of measurement
  • Procedures for matching variables with appropriate statistical analyses
  • Assumption checking procedures

Note. This appendix is a brief summary and not a substitute for the narrative in the text of this article.


Michael T. Kalkbrenner, PhD, NCC, is an associate professor at New Mexico State University. Correspondence may be addressed to Michael T. Kalkbrenner, 1780 E. University Ave., Las Cruces, NM 88003, mkalk001@nmsu.edu.