Nov 9, 2022 | Volume 12 - Issue 3
Michael T. Kalkbrenner
Conducting and publishing rigorous empirical research based on original data is essential for advancing and sustaining high-quality counseling practice. The purpose of this article is to provide a one-stop shop for writing a rigorous quantitative Methods section in counseling and related fields. The importance of judiciously planning, implementing, and writing quantitative research methods cannot be overstated, as methodological flaws can completely undermine the integrity of the results. This article includes an overview, considerations, guidelines, best practices, and recommendations for conducting and writing quantitative research designs. The author concludes with an exemplar Methods section to provide a sample of one way to apply the guidelines for writing or evaluating quantitative research methods that are detailed in this manuscript.
Keywords: empirical, quantitative, methods, counseling, writing
The findings of rigorous empirical research based on original data are crucial for promoting and maintaining high-quality counseling practice (American Counseling Association [ACA], 2014; Giordano et al., 2021; Lutz & Hill, 2009; Wester et al., 2013). Peer-reviewed publication outlets play a central role in ensuring the rigor of counseling research and distributing the findings to counseling practitioners. The four major sections of an original empirical study usually include: (a) Introduction/Literature Review, (b) Methods, (c) Results, and (d) Discussion (American Psychological Association [APA], 2020; Heppner et al., 2016). Although every section of a research study must be carefully planned, executed, and reported (Giordano et al., 2021), scholars have engaged in commentary about the importance of a rigorous and clearly written Methods section for decades (Korn & Bram, 1988; Lutz & Hill, 2009). The Methods section is the “conceptual epicenter of a manuscript” (Smagorinsky, 2008, p. 390) and should include clear and specific details about how the study was conducted (Heppner et al., 2016). It is essential that producers and consumers of research are aware of key methodological standards, as the quality of quantitative methods in published research can vary notably, which has serious implications for the merit of research findings (Lutz & Hill, 2009; Wester et al., 2013).
Careful planning prior to launching data collection is especially important for conducting and writing a rigorous quantitative Methods section, as it is rarely appropriate to alter quantitative methods after data collection is complete, for both practical and ethical reasons (ACA, 2014; Creswell & Creswell, 2018). A well-written Methods section is also crucial for publishing research in a peer-reviewed journal; serious methodological flaws tend to trigger an automatic decision of rejection without an invitation to revise. Accordingly, the purpose of this article is to provide both producers and consumers of quantitative research with guidelines and recommendations for writing or evaluating the rigor of a Methods section in counseling and related fields. Specifically, this manuscript includes a general overview of major quantitative methodological subsections as well as an exemplar Methods section. The recommended subsections and guidelines for writing a rigorous Methods section in this manuscript (see Appendix) are based on a synthesis of (a) the extant literature (e.g., Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Giordano et al., 2021); (b) the Standards for Educational and Psychological Testing (American Educational Research Association [AERA] et al., 2014); (c) the ACA Code of Ethics (ACA, 2014); and (d) the Journal Article Reporting Standards (JARS) in the APA (2020) Publication Manual.
Quantitative Methods: An Overview of the Major Sections
The Methods section is typically the second major section in a research manuscript and can begin with an overview of the theoretical framework and research paradigm that ground the study (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). Research paradigms and theoretical frameworks are more commonly reported in qualitative, conceptual, and dissertation studies than in quantitative studies. However, research paradigms and theoretical frameworks can be very applicable to quantitative research designs (see the exemplar Methods section below). Readers are encouraged to consult Creswell and Creswell (2018) for a clear and concise overview about the utility of a theoretical framework and a research paradigm in quantitative research.
Research Design
The research design should be clearly specified at the beginning of the Methods section. Commonly employed quantitative research designs in counseling include but are not limited to group comparisons (e.g., experimental, quasi-experimental, ex-post-facto), correlational/predictive, meta-analysis, descriptive, and single-subject designs (Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Leedy & Ormrod, 2019). A well-written literature review and strong research question(s) will dictate the most appropriate research design. Readers can refer to Flinn and Kalkbrenner (2021) for free (open access) commentary on and examples of conducting a literature review, formulating research questions, and selecting the most appropriate corresponding research design.
Researcher Bias and Reflexivity
Counseling researchers have an ethical responsibility to minimize their personal biases throughout the research process (ACA, 2014). A researcher’s personal beliefs, values, expectations, and attitudes create a lens or framework for how data will be collected and interpreted. Researcher reflexivity or positionality statements are well-established methodological standards in qualitative research (Hays & Singh, 2012; Heppner et al., 2016; Rovai et al., 2013). Researcher bias is rarely reported in quantitative research; however, researcher bias can be just as inherently present in quantitative as it is in qualitative studies. Being reflexive and transparent about one’s biases strengthens the rigor of the research design (Creswell & Creswell, 2018; Onwuegbuzie & Leech, 2005). Accordingly, quantitative researchers should consider reflecting on their biases in similar ways as qualitative researchers (Onwuegbuzie & Leech, 2005). For example, a researcher’s topical and methodological choices are, at least in part, based on their personal interests and experiences. To this end, quantitative researchers are encouraged to reflect on and consider reporting their beliefs, assumptions, and expectations throughout the research process.
Participants and Procedures
The major aim in the Participants and Procedures subsection of the Methods section is to provide a clear description of the study’s participants and procedures in enough detail for replication (ACA, 2014; APA, 2020; Giordano et al., 2021; Heppner et al., 2016). When working with human subjects, authors should briefly discuss research ethics including but not limited to receiving institutional review board (IRB) approval (Giordano et al., 2021; Korn & Bram, 1988). Additional considerations for the Participants and Procedures section include details about the authors’ sampling procedure, inclusion and/or exclusion criteria for participation, sample size, participant background information, location/site, and protocol for interventions (APA, 2020).
Sampling Procedure and Sample Size
Sampling procedures should be clearly stated in the Methods section. At a minimum, the description of the sampling procedure should include researcher access to prospective participants, recruitment procedures, data collection modality (e.g., online survey), and sample size considerations. Quantitative sampling approaches tend to be clustered into either probability or non-probability techniques (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). The key distinguishing feature of probability sampling is random selection, in which all prospective participants in the population have an equal chance of being randomly selected to participate in the study (Leedy & Ormrod, 2019). Examples of probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling, and cluster sampling (Leedy & Ormrod, 2019).
Non-probability sampling techniques lack random selection and there is no way of determining if every member of the population had a chance of being selected to participate in the study (Leedy & Ormrod, 2019). Examples of non-probability sampling procedures include volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, and matched sampling. In quantitative research, probability sampling procedures are more rigorous in terms of generalizability (i.e., the extent to which research findings based on sample data extend or generalize to the larger population from which the sample was drawn). However, probability sampling is not always possible and non-probability sampling procedures are rigorous in their own right. Readers are encouraged to review Leedy and Ormrod’s (2019) commentary on probability and non-probability sampling procedures. Ultimately, the selection of a sampling technique should be made based on the population parameters, available resources, and the purpose and goals of the study.
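To make the distinction concrete, the following minimal Python sketch contrasts two probability techniques, simple random sampling and systematic random sampling, drawn from a hypothetical sampling frame; the frame, seed, and sample size are illustrative assumptions rather than a recommended protocol.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical sampling frame of 5,000 student email addresses.
frame = [f"student_{i}@university.edu" for i in range(5000)]

# Simple random sampling (probability): every member of the frame
# has an equal chance of selection.
simple_random = rng.choice(frame, size=200, replace=False)

# Systematic random sampling (probability): a random start,
# then every k-th member of the frame.
k = len(frame) // 200
start = int(rng.integers(0, k))
systematic = frame[start::k][:200]
```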
A Priori Statistical Power Analysis. It is essential that quantitative researchers determine the minimum necessary sample size for computing statistical analyses before launching data collection (Balkin & Sheperis, 2011; Sink & Mvududu, 2010). An insufficient sample size substantially increases the probability of committing a Type II error, which occurs when statistical testing yields non–statistically significant results even though, unbeknownst to the researcher, significant findings do exist in the population. Computing an a priori (computed before starting data collection) statistical power analysis reduces the chances of a Type II error by determining the smallest sample size that is necessary for finding statistical significance, if statistical significance exists (Balkin & Sheperis, 2011). Readers can consult Balkin and Sheperis (2011) as well as Sink and Mvududu (2010) for an overview of statistical significance, effect size, and statistical power. A number of statistical power analysis programs are available to researchers. For example, G*Power (Faul et al., 2009) is a free software program for computing a priori statistical power analyses.
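For researchers who prefer a scriptable alternative to G*Power, the sketch below shows one way to solve for a minimum sample size in Python using the statsmodels library; the one-way, three-group design and the effect size, alpha, and power values are illustrative assumptions.

```python
# pip install statsmodels
from statsmodels.stats.power import FTestAnovaPower

# Solve for the minimum total N for a one-way ANOVA with three groups,
# assuming a moderate effect size (f = 0.25), alpha = .05, and power = .80.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, k_groups=3
)
print(f"Minimum total sample size: {int(round(n_total))}")
```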
Sampling Frame and Location
Counselors should report their sampling frame (total number of potential participants), response rate, raw sample (total number of participants that engaged with the study at any level, including missing and incomplete data), and the size of the final useable sample. It is also important to report the breakdown of the sample by demographic and other important participant background characteristics, for example, “XX.X% (n = XXX) of participants were first-generation college students, XX.X% (n = XXX) were second-generation . . .” The selection of demographic variables as well as inclusion and exclusion criteria should be justified in the literature review. Readers are encouraged to consult Creswell and Creswell (2018) for commentary on writing a strong literature review.
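As a brief illustration, a demographic breakdown in the “XX.X% (n = XXX)” format can be generated directly from a data set; the following Python sketch assumes a hypothetical pandas column named generation with counts matching no particular study.

```python
import pandas as pd

# Hypothetical demographic variable from a cleaned data set.
df = pd.DataFrame({"generation": ["first-generation"] * 45
                   + ["second-generation or beyond"] * 79})

counts = df["generation"].value_counts()
percents = counts / len(df) * 100
for level, n in counts.items():
    print(f"{percents[level]:.1f}% (n = {n}) were {level} students")
```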
The timeframe, setting, and location during which data were collected are important methodological considerations (APA, 2020). Specific names of institutions and agencies should be masked to protect their privacy and confidentiality; however, authors can give descriptions of the setting and location (e.g., “Data were collected between April 2021 and February 2022 from clients seeking treatment for addictive disorders at an outpatient, integrated behavioral health care clinic located in the Northeastern United States.”). Authors should also report details about any interventions, curriculum, qualifications and background information for research assistants, experimental design protocol(s), and any other procedural design issues that would be necessary for replication. In instances in which describing a treatment or conditions would become excessively long (e.g., step-by-step manualized therapy, programs, or interventions), researchers can include footnotes, appendices, and/or references to refer the reader to more information about the intervention protocol.
Missing Data
Procedures for handling missing values (incomplete survey responses) are important considerations in quantitative data analysis. Perhaps the most straightforward option for handling missing data is to simply delete missing responses. However, depending on the percentage of data that are missing and how the data are missing (e.g., missing completely at random, missing at random, or not missing at random), data imputation techniques can be employed to recover missing values (Cook, 2021; Myers, 2011). Quantitative researchers should provide a clear rationale behind their decisions around the deletion of missing values or when using a data imputation method. Readers are encouraged to review Cook’s (2021) commentary on procedures for handling missing data in quantitative research.
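The sketch below illustrates, with hypothetical data, the broad options discussed above: quantifying missingness, listwise deletion, and a simple imputation. The item names and values are assumptions for illustration, and mean imputation merely stands in for the model-based methods (e.g., expectation maximization, multiple imputation) a researcher might actually justify.

```python
import numpy as np
import pandas as pd

# Hypothetical item-level survey responses with some missing values.
df = pd.DataFrame({
    "item1": [3, 2, np.nan, 4, 1],
    "item2": [1, np.nan, 2, 3, 2],
    "item3": [4, 4, 3, np.nan, 2],
})

# Proportion of missing values per item and per case.
print(df.isna().mean())        # by column (item)
print(df.isna().mean(axis=1))  # by row (participant)

# Option 1: listwise deletion (drop any case with a missing value).
complete_cases = df.dropna()

# Option 2: a simple mean imputation; model-based methods are
# generally preferred when data are MCAR or MAR.
imputed = df.fillna(df.mean())
```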
Measures
Counseling and other social science researchers oftentimes use instruments and screening tools to appraise latent traits, which can be defined as variables that are inferred rather than observed (AERA et al., 2014). The purpose of the Measures (aka Instrumentation) section is to operationalize the construct(s) of measurement (Heppner et al., 2016). Specifically, the Measures subsection of the Methods in a quantitative manuscript tends to include a presentation of (a) the instrument and construct(s) of measurement, (b) reliability and validity evidence of test scores, and (c) cross-cultural fairness and norming. The Measures section might also include a Materials subsection for studies that employed data-gathering techniques or equipment besides or in addition to instruments (Heppner et al., 2016); for instance, if a research study involved the use of a biofeedback device to collect data on changes in participants’ body functions.
Instrument and Construct of Measurement
Begin the Measures section by introducing the questionnaire or screening tool, its construct(s) of measurement, number of test items, example test items, and scale points. If applicable, the Measures section can also include information on scoring procedures and cutoff criteria; for example, total score benchmarks for low, medium, and high levels of the trait. Authors might also include commentary about how test scores will be operationalized to constitute the variables in the upcoming Data Analysis section.
Reliability and Validity Evidence of Test Scores
Reliability evidence involves the degree to which test scores are stable or consistent, and validity evidence refers to the extent to which scores on a test succeed in measuring what the test was designed to measure (AERA et al., 2014; Bardhoshi & Erford, 2017). Researchers should report both reliability and validity evidence of scores for each instrument they use (Wester et al., 2013). A number of forms of reliability evidence exist (e.g., internal consistency, test-retest, interrater, and alternate/parallel/equivalent forms), and the AERA standards (2014) outline five forms of validity evidence. For the purposes of this article, I will focus on internal consistency reliability, as it is the most popular and most commonly misused reliability estimate in social sciences research (Kalkbrenner, 2021a; McNeish, 2018), as well as construct validity. The psychometric properties of a test (including reliability and validity evidence) are contingent upon the scores from which they were derived. As such, no test is inherently valid or reliable; test scores are only reliable and valid for a certain purpose, at a particular time, for use with a specific sample. Accordingly, authors should discuss reliability and validity evidence in terms of scores, for example, “Stamm (2010) found reliability and validity evidence of scores on the Professional Quality of Life (ProQOL 5) with a sample of . . . ”
Internal Consistency Reliability Evidence. Internal consistency estimates are derived from associations between the test items based on one administration (Kalkbrenner, 2021a). Cronbach’s coefficient alpha (α) is indisputably the most popular internal consistency reliability estimate in counseling and throughout social sciences research in general (Kalkbrenner, 2021a; McNeish, 2018). The appropriate use of coefficient alpha is reliant on the data meeting the following statistical assumptions: (a) essential tau equivalence, (b) continuous level scale of measurement, (c) normally distributed data, (d) uncorrelated error, (e) unidimensional scale, and (f) unit-weighted scaling (Kalkbrenner, 2021a). For decades, coefficient alpha has been passed down through the instructional practice of counselor training programs and has appeared as the dominant reliability index in national counseling and psychology journals, often without authors computing and reporting the necessary statistical assumption checks (Kalkbrenner, 2021a; McNeish, 2018). This psychometrically dubious practice poses a threat to the veracity of counseling research, as the accuracy of coefficient alpha is compromised if the data violate one or more of the required assumptions.
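For illustration, coefficient alpha can be computed directly from an item-response matrix using its standard formula; the following Python sketch uses hypothetical responses (the data and function name are assumptions), and computing alpha this way does not excuse the assumption checking described above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an items matrix (rows = respondents,
    columns = test items): alpha = k/(k-1) * (1 - sum(item variances)
    / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items.
x = np.array([[3, 2, 3, 4],
              [1, 1, 2, 1],
              [4, 4, 3, 4],
              [2, 3, 2, 2],
              [3, 3, 4, 3],
              [2, 2, 1, 2]])
print(round(cronbach_alpha(x), 2))
```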
Internal Consistency Reliability Indices and Their Appropriate Use. Composite reliability (CR) internal consistency estimates are derived in similar ways as coefficient alpha; however, the proper computation of CRs is not reliant on the data meeting many of alpha’s statistical assumptions (Kalkbrenner, 2021a; McNeish, 2018). For example, McDonald’s coefficient omega (ω or ωt) is a CR estimate that is not dependent on the data meeting most of alpha’s assumptions (Kalkbrenner, 2021a). In addition, omega hierarchical (ωh) and coefficient H are CR estimates that can be more advantageous than alpha. Despite the utility of CRs, their underuse in research practice is historically, in part, because of the complex nature of computation. However, recent versions of SPSS include a breakthrough point-and-click feature for computing coefficient omega as easily as coefficient alpha. Readers can refer to the SPSS user guide for steps to compute omega.
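Outside of SPSS, omega can also be estimated from a one-factor model’s standardized loadings. This Python sketch uses the factor_analyzer package and simulated unidimensional data; both the package choice and the data are assumptions for illustration, not a prescribed workflow.

```python
# pip install factor_analyzer
import numpy as np
from factor_analyzer import FactorAnalyzer

def mcdonalds_omega(items: np.ndarray) -> float:
    """McDonald's omega (total) from a one-factor model:
    (sum of loadings)^2 / [(sum of loadings)^2 + sum of uniquenesses]."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = fa.loadings_.flatten()
    uniquenesses = fa.get_uniquenesses()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())

# Simulated responses to a hypothetical 4-item unidimensional scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
items = np.column_stack([latent + rng.normal(scale=1.0, size=300)
                         for _ in range(4)])
print(round(mcdonalds_omega(items), 2))
```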
Guidelines for Reporting Internal Consistency Reliability. In the Measures subsection of the Methods section, researchers should report existing reliability evidence of scores for their instruments. This can be done briefly by reporting the results of multiple studies in the same sentence, as in: “A number of past investigators found internal consistency reliability evidence for scores on the [name of test] with a number of different samples, including college students (α = .XX, ω = .XX; Authors et al., 20XX), clients living with chronic back pain (α = .XX, ω = .XX; Authors et al., 20XX), and adults in the United States (α = .XX, ω = .XX; Authors et al., 20XX) . . .”
Researchers should also compute and report reliability estimates of test scores with their data set in the Measures section. If a researcher is using coefficient alpha, they have a duty to complete and report assumption checking to demonstrate that the properties of their sample data were suitable for alpha (Kalkbrenner, 2021a; McNeish, 2018). Another option is to compute a CR (e.g., ω or H) instead of alpha. However, Kalkbrenner (2021a) recommended that researchers report both coefficient alpha (because of its popularity) and coefficient omega (because of the robustness of the estimate). The proper interpretation of reliability estimates of test scores is done on a case-by-case basis, as the meaning of reliability coefficients is contingent upon the construct of measurement and the stakes or consequences of the results for test takers (Kalkbrenner, 2021a). Kalkbrenner (2021b) offered the following tentative interpretive guidelines for adults’ scores on attitudinal measures: for coefficient alpha, α < .70 = poor, α = .70 to .84 = acceptable, and α ≥ .85 = strong; for coefficient omega, ω < .65 = poor, ω = .65 to .80 = acceptable, and ω > .80 = strong. It is important to note that these thresholds are for adults’ scores on attitudinal measures; acceptable internal consistency reliability estimates of scores should be much stronger for high-stakes testing.
Construct Validity Evidence of Test Scores. Construct validity involves the test’s ability to accurately capture a theoretical or latent construct (AERA et al., 2014). Construct validity considerations are particularly important for counseling researchers who tend to investigate latent traits as outcome variables. At a minimum, counseling researchers should report construct validity evidence for both internal structure and relations with theoretically relevant constructs. Internal structure (aka factorial validity) is a source of construct validity that represents the degree to which “the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based” (AERA et al., 2014, p. 16). Readers can refer to Kalkbrenner (2021b) for a free (open access publishing) overview of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) that is written in layperson’s terms. Relations with theoretically relevant constructs (e.g., convergent and divergent validity) are another source of construct validity evidence that involves comparing scores on the test in question with scores on other reputable tests (AERA et al., 2014; Strauss & Smith, 2009).
Guidelines for Reporting Validity Evidence. Counseling researchers should report existing evidence of at least internal structure and relations with theoretically relevant constructs (e.g., convergent or divergent validity) for each instrument they use. EFA results alone are inadequate for demonstrating internal structure validity evidence of scores, as EFA is a much less rigorous test of internal structure than CFA (Kalkbrenner, 2021b). In addition, EFA results can reveal multiple retainable factor solutions, which need to be tested/confirmed via CFA before even initial internal structure validity evidence of scores can be established. Thus, both EFA and CFA are necessary for reporting/demonstrating initial evidence of internal structure of test scores. In an extension of internal structure, counselors should also report existing convergent and/or divergent validity of scores. High correlations (r > .50) demonstrate evidence of convergent validity and moderate-to-low correlations (r < .30, preferably r < .10) support divergent validity evidence of scores (Sink & Stroh, 2006; Swank & Mullen, 2017).
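As a small illustration of the correlation benchmarks above, the following Python sketch computes a Pearson correlation between two hypothetical score vectors with scipy; the data-generating values are assumptions chosen only to produce a plausible convergent pattern.

```python
# pip install scipy
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical total scores on the focal test and on an established
# measure of a theoretically related construct.
focal = rng.normal(50, 10, size=120)
established = 0.7 * focal + rng.normal(0, 8, size=120)

r, p = pearsonr(focal, established)
# r > .50 would support convergent validity; r < .30 (ideally < .10)
# with a theoretically unrelated measure would support divergent validity.
print(f"r = {r:.2f}, p = {p:.3f}")
```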
In an ideal situation, a researcher will have the resources to test and report the internal structure (e.g., compute CFA firsthand) of scores on the instrumentation with their sample. However, CFA requires large sample sizes (Kalkbrenner, 2021b), which oftentimes is not feasible. It might be more practical for researchers to test and report relations with theoretically relevant constructs, though adding one or more questionnaire(s) to data collection efforts can come with the cost of increasing respondent fatigue. In these instances, researchers might consider reporting other forms of validity evidence (e.g., evidence based on test content, criterion validity, or response processes; AERA et al., 2014). In instances when computing firsthand validity evidence of scores is not logistically viable, researchers should be transparent about this limitation and pay especially careful attention to presenting evidence for cross-cultural fairness and norming.
Cross-Cultural Fairness and Norming
In a psychometric context, fairness (sometimes referred to as cross-cultural fairness) is a fundamental validity issue and a complex construct to define (AERA et al., 2014; Kane, 2010; Neukrug & Fawcett, 2015). I offer the following composite definition of cross-cultural fairness for the purposes of a quantitative Measures section: the degree to which test construction, administration procedures, interpretations, and uses of results are equitable and represent an accurate depiction of a diverse group of test takers’ abilities, achievement, attitudes, perceptions, values, and/or experiences (AERA et al., 2014; Educational Testing Service [ETS], 2016; Kane, 2010; Kane & Bridgeman, 2017). Counseling researchers should consider the following central fairness issues when selecting or developing instrumentation: measurement bias, accessibility, universal design, equivalent meaning (invariance), test content, opportunity to learn, test adaptations, and comparability (AERA et al., 2014; Kane & Bridgeman, 2017). Providing a comprehensive overview of fairness is beyond the scope of this article; however, readers are encouraged to read Chapter 3 in the AERA standards (2014) on Fairness in Testing.
In the Measures section, counseling researchers should include commentary on how and in what ways cross-cultural fairness guided their selection and administration of instruments and their interpretation of test results (AERA et al., 2014; Kalkbrenner, 2021b). Cross-cultural fairness and construct validity are related constructs (AERA et al., 2014). Accordingly, citing construct validity of test scores (see the previous section) with normative samples similar to the researcher’s target population is one way to provide evidence of cross-cultural fairness. However, construct validity evidence alone might not be a sufficient indication of cross-cultural fairness, as the latent meaning of test scores is a function of test takers’ cultural context (Kalkbrenner, 2021b). To this end, when selecting instrumentation, researchers should review original psychometric studies and consider the normative sample(s) from which test scores were derived.
Commentary on the Danger of Using Self-Developed and Untested Scales
Counseling researchers have an ethical duty to “carefully consider the validity, reliability, psychometric limitations, and appropriateness of instruments when selecting assessments” (ACA, 2014, p. 11). Quantitative researchers might encounter instances in which a scale is not available to measure their desired construct of measurement (latent/inferred variable). In these cases, the first step in the line of research is oftentimes to conduct an instrument development and score validation study (AERA et al., 2014; Kalkbrenner, 2021b). Detailing the protocol for conducting psychometric research is outside the scope of this article; however, readers can refer to the MEASURE Approach to Instrument Development (Kalkbrenner, 2021c) for a free (open access publishing) overview of the steps in an instrument development and score validation study. Adapting an existing scale can be an option in lieu of instrument development; however, according to the AERA standards (2014), “an index that is constructed by manipulating and combining test scores should be subjected to the same validity, reliability, and fairness investigations that are expected for the test scores that underlie the index” (p. 210). Although it is not necessary that all quantitative researchers become psychometricians and conduct full-fledged psychometric studies to validate scores on instrumentation, researchers do have a responsibility to report evidence of the reliability, validity, and cross-cultural fairness of test scores for each instrument they used. Without at least initial construct validity testing of scores (calibration), researchers cannot determine what, if anything at all, an untested instrument actually measures.
Data Analysis
Counseling researchers should report and explain the selection of their data analytic procedures (e.g., statistical analyses) in a Data Analysis (or Statistical Analysis) subsection of the Methods or Results section (Giordano et al., 2021; Leedy & Ormrod, 2019). The placement of the Data Analysis section in either the Methods or Results section can vary between publication outlets; however, this section tends to include commentary on variables, statistical models and analyses, and statistical assumption checking procedures.
Operationalizing Variables and Corresponding Statistical Analyses
Clearly outlining each variable is an important first step in selecting the most appropriate statistical analysis for answering each research question (Creswell & Creswell, 2018). Researchers should specify the independent variable(s) and corresponding levels as well as the dependent variable(s); for example, “The first independent variable, time, was composed of the three following levels: pre, middle, and post. The dependent variables were participants’ scores on the burnout and compassion satisfaction subscales of the ProQOL 5.” After articulating the variables, counseling researchers are tasked with identifying each variable’s scale of measurement (Creswell & Creswell, 2018; Field, 2018; Flinn & Kalkbrenner, 2021). Researchers can then select the most appropriate statistical test(s) for answering their research question(s) based on the scale of measurement of each variable by referring to Table 8.3 on page 159 in Creswell and Creswell (2018), Figure 1 in Flinn and Kalkbrenner (2021), or the chart on page 1072 in Field (2018).
Assumption Checking
Statistical analyses used in quantitative research are derived based on a set of underlying assumptions (Field, 2018; Giordano et al., 2021). Accordingly, it is essential that quantitative researchers outline their protocol for testing their sample data for the appropriate statistical assumptions. Assumptions of common statistical tests in counseling research include normality, absence of outliers (multivariate and/or univariate), homogeneity of covariance, homogeneity of regression slopes, homoscedasticity, independence, linearity, and absence of multicollinearity (Flinn & Kalkbrenner, 2021; Giordano et al., 2021). Readers can refer to Figure 2 in Flinn and Kalkbrenner (2021) for an overview of statistical assumptions for the major statistical analyses in counseling research.
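The following Python sketch illustrates a few of these checks on simulated two-group data: outlier screening via standardized scores, normality via skewness/kurtosis and a Shapiro-Wilk test, and homogeneity of variance via Levene's test. The groups and cutoff values are illustrative assumptions; which checks apply depends on the chosen analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(10, 2, size=40)
group_b = rng.normal(11, 2, size=40)

# Univariate outliers: standardized scores beyond |z| = 3.29.
z = stats.zscore(np.concatenate([group_a, group_b]))
print("outliers:", np.sum(np.abs(z) > 3.29))

# Normality: skewness/kurtosis near 0, or a Shapiro-Wilk test.
print("skew:", stats.skew(group_a), "kurtosis:", stats.kurtosis(group_a))
print("Shapiro-Wilk:", stats.shapiro(group_a))

# Homogeneity of variance across groups: Levene's test.
print("Levene:", stats.levene(group_a, group_b))
```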
Exemplar Quantitative Methods Section
The following section includes an exemplar quantitative Methods section based on a hypothetical example and a practice data set. Producers and consumers of quantitative research can refer to it as a model for writing their own Methods section or for evaluating the rigor of an existing one. As stated previously, a well-written literature review and research question(s) are essential for grounding the study and Methods section (Flinn & Kalkbrenner, 2021). The final piece of a literature review section is typically the research question(s). Accordingly, the exemplar Methods section below was guided by the following research question: To what extent are there differences in anxiety severity between college students who participate in deep breathing exercises with progressive muscle relaxation, a group exercise program, or both group exercise and deep breathing with progressive muscle relaxation?
——-Exemplar——-
Methods
A quantitative group comparison research design was employed based on a post-positivist philosophy of science (Creswell & Creswell, 2018). Specifically, I implemented a quasi-experimental, control group pretest/posttest design to answer the research question (Leedy & Ormrod, 2019). Consistent with a post-positivist philosophy of science, I reflected on pursuing a probabilistic objective answer that is situated within the context of imperfect and fallible evidence. The rationale for the present study was grounded in Dr. David Servan-Schreiber’s (2009) theory of lifestyle practices for integrated mental and physical health. According to Servan-Schreiber, simultaneously focusing on improving one’s mental and physical health is more effective than focusing on either physical health or mental wellness in isolation. Consistent with Servan-Schreiber’s theory, the aim of the present study was to compare the utility of three different approaches for anxiety reduction: a behavioral approach alone, a physiological approach alone, and a combined behavioral approach and physiological approach.
I am in my late 30s and identify as a White man. I have a PhD in counselor education as well as an MS in clinical mental health counseling. I have a deep belief in and an active line of research on the utility of total wellness (combined mental and physical health). My research and clinical experience have informed my passion and interest in studying the utility of integrated physical and psychological health services. More specifically, my personal beliefs, values, and interest in total wellness influenced my decision to conduct the present study. I carefully followed the procedures outlined below to reduce the chances that my personal values biased the research design.
Participants and Procedures
Data collection began following approval from the IRB. Data were collected during the fall 2022 semester from undergraduate students who were 18 years of age or older and enrolled in at least one class at a land grant, research-intensive university located in the Southwestern United States. An a priori statistical power analysis was computed using G*Power (Faul et al., 2009). Results revealed that a sample size of at least 42 would provide 80% power (α = .05) for detecting a moderate effect size (f = 0.25) in the planned mixed-design analysis.
I obtained an email list from the registrar’s office of all students enrolled in a section of a Career Excellence course, which was selected to recruit students in a variety of academic majors because all undergraduate students in the College of Education are required to take this course. The focus of this study (mental and physical wellness) was also consistent with the purpose of the course (success in college). A non-probability, convenience sampling procedure was employed by sending a recruitment message to students’ email addresses via the Qualtrics online survey platform. The response rate was approximately 15%, with a total of 222 prospective participants indicating their interest in the study by clicking on the electronic recruitment link, which automatically sent them an invitation to attend an information session about the study. One hundred forty-four students attended the information session, 129 of whom provided their voluntary informed consent to enroll in the study. Participants were given a confidential identification number to track their pretest/posttest responses, and then they completed the pretest (see the Measures section below). Respondents were randomly assigned in equal groups to either (a) the deep breathing with progressive muscle relaxation condition, (b) the group exercise condition, or (c) the combined exercise and deep breathing with progressive muscle relaxation condition.
A missing values analysis showed that less than 5% of the data were missing for all cases. Expectation maximization was used to impute missing values, as Little’s Missing Completely at Random (MCAR) test revealed that the data could be treated as MCAR (p = .367). Data from five participants who did not return to complete the posttest at the end of the semester were removed, yielding a final usable sample of N = 124. Participants (N = 124) ranged in age from 18 to 33 (M = 21.64, SD = 3.70). In terms of gender identity, 64.5% (n = 80) self-identified as female, 32.3% (n = 40) as male, 0.8% (n = 1) as transgender, and 2.4% (n = 3) did not specify their gender identity. For ethnic identity, 50.0% (n = 62) identified as White, 26.6% (n = 33) as Latinx, 12.1% (n = 15) as Asian, 9.7% (n = 12) as Black, 0.8% (n = 1) as Alaskan Native, and 0.8% (n = 1) did not specify their ethnic identity. In terms of generational status, 36.3% (n = 45) of participants were first-generation college students and 63.7% (n = 79) were second-generation or beyond.
Group Exercise and Deep Breathing Programs
I was awarded a small grant to offer on-campus deep breathing with progressive muscle relaxation and group exercise programs. The structure of the group exercise program was based on Patterson et al. (2021), which consisted of more than 50 available exercise classes each week (e.g., cycling, yoga, swimming, dance). There was no limit to the number of classes that participants could attend; however, attending at least one class each week was required for participation in the study. Readers can refer to Patterson et al. for more information about the group exercise programming.
Neeru et al.’s (2015) deep breathing and progressive muscle relaxation programming was used in the present study. Participants completed daily deep breathing and Jacobson Progressive Muscle Relaxation (JPMR). JPMR was selected because of its documented success with treating anxiety disorders (Neeru et al., 2015). Specifically, the program consisted of four deep breathing steps completed five times and JPMR for approximately 25 minutes daily. Participants attended a weekly deep breathing and JPMR session facilitated by a licensed professional counselor. Participants also practiced deep breathing and JPMR on their own daily and kept a log to document their practice sessions. Readers can refer to Neeru et al. for more information about JPMR and the deep breathing exercises.
Measures
Prospective participants read an informed consent statement and indicated their voluntary informed consent by clicking on a checkbox. Next, participants confirmed that they met the following inclusion criteria: (a) at least 18 years old and (b) currently enrolled in at least one undergraduate college class. The instrumentation began with demographic items regarding participants’ gender identity, ethnic identity, age, and confidential identification number to track their pretest and posttest scores. Lastly, participants completed a convergent validity measure (Mental Health Inventory – 5) and the Generalized Anxiety Disorder (GAD)-7 to measure the outcome variable (anxiety severity).
Reliability and Validity Evidence of Test Scores
Internal consistency estimates were computed to examine the reliability of scores on the screening tool for appraising anxiety severity with undergraduate students in the present sample. Specifically, coefficient alpha (α) and coefficient omega (ω) were computed with the following minimum thresholds for adults’ scores on attitudinal measures: α > .70 and ω > .65, based on the recommendations of Kalkbrenner (2021b).
The Mental Health Inventory–5. Participants completed the Mental Health Inventory (MHI)-5 to test the convergent validity of the present sample’s scores on the GAD-7, which was used to measure the outcome variable in this study, anxiety severity. The MHI-5 is a 5-item measure for appraising overall mental health (Berwick et al., 1991). Higher MHI-5 scores reflect better mental health. Participants responded to test items (example: “How much of the time, during the past month, have you been a very nervous person?”) on the following Likert-type scale: 0 = none of the time, 1 = a little of the time, 2 = some of the time, 3 = a good bit of the time, 4 = most of the time, or 5 = all of the time. The MHI-5 has particular utility as a convergent validity measure because of its brief nature (5 items) coupled with the substantial support for its psychometric properties (e.g., Berwick et al., 1991; Rivera-Riquelme et al., 2019; Thorsen et al., 2013). As just a few examples, Rivera-Riquelme et al. (2019) found acceptable internal consistency reliability evidence (α = .71, ω = .78) and internal structure validity evidence of MHI-5 scores. In addition, the findings of Thorsen et al. (2013) demonstrated convergent validity evidence of MHI-5 scores. Findings in the extant literature (e.g., Foster et al., 2016; Vijayan & Joseph, 2015) established an inverse relationship between anxiety and mental health. Thus, a strong negative correlation (r < −.50; Sink & Stroh, 2006) between the MHI-5 and GAD-7 would support convergent validity evidence of scores.
The Generalized Anxiety Disorder–7. The GAD-7 is a 7-item screening tool for appraising anxiety severity (Spitzer et al., 2006). Participants respond to test items based on the following prompt: “Over the last 2 weeks, how often have you been bothered by the following problems?” and anchor definitions: 0 = not at all, 1 = several days, 2 = more than half the days, or 3 = nearly every day (Spitzer et al., 2006, p. 1739). Sample test items include “being so restless that it’s hard to sit still” and “feeling afraid as if something awful might happen.” The GAD-7 items can be summed into an interval-level composite score, with higher scores indicating greater anxiety severity. GAD-7 scores can range from 0 to 21 and are classified as minimal (0–4), mild (5–9), moderate (10–14), or severe (15–21; Spitzer et al., 2006).
In the initial score validation study, Spitzer et al. (2006) found evidence for internal consistency (α = .92) and test-retest reliability (intraclass correlation = .83) of GAD-7 scores among adults in the United States who were receiving services in primary care clinics. In more recent years, a number of additional investigators found internal consistency reliability evidence for GAD-7 scores, including samples of undergraduate college students in the southern United States (α = .91; Sriken et al., 2022), Black and Latinx adults in the United States (α = .93, ω = .93; Kalkbrenner, 2022), and English-speaking college students living in Ethiopia (ω = .77; Manzar et al., 2021). Similarly, the data set in the present study displayed acceptable internal consistency reliability evidence for GAD-7 scores (α = .82, ω = .81).
Spitzer et al. (2006) used factor analysis to establish internal structure validity, correlations with established screening tools for convergent validity, and criterion validity evidence by demonstrating the capacity of GAD-7 scores for detecting likely cases of generalized anxiety disorder. A number of subsequent investigators found internal structure validity evidence of GAD-7 scores via CFA and multiple-group CFA (Kalkbrenner, 2022; Sriken et al., 2022). In addition, the findings of Sriken et al. (2022) supported both the convergent and divergent validity of GAD-7 scores with other established tests. The data set in the present study (N = 124) was not large enough for internal structure validity testing. However, a strong negative correlation (r = −.78) between the GAD-7 and MHI-5 revealed convergent validity evidence of GAD-7 scores with the present sample of undergraduate students.
In terms of norming and cross-cultural fairness, there were qualitative differences between the normative GAD-7 sample in the original score validation study (adults in the United States receiving services in primary care clinics) and the non-clinical sample of young adult college students in the present study. However, the demographic profile of the present sample is consistent with Sriken et al. (2022), who validated GAD-7 scores with a large sample (N = 414) of undergraduate college students. For example, the demographic profile of the sample in the current study for gender identity closely resembled the composition of Sriken et al.’s sample, which included 66.7% women, 33.1% men, and 0.2% transgender individuals. In terms of ethnic identity, the demographic profile of the present sample was consistent with Sriken et al. for White and Black participants, although the present sample included a smaller proportion of Asian students (12.1% vs. 19.6% in Sriken et al.) and a greater proportion of Latinx students (26.6% vs. 5.3%).
Data Analysis and Assumption Checking
The present study included two categorical-level independent variables and one continuous-level dependent variable. The first independent variable, program, consisted of three levels: (a) deep breathing with progressive muscle relaxation, (b) group exercise, or (c) both exercise and deep breathing with progressive muscle relaxation. The second independent variable, time, consisted of two levels: the beginning of the semester and the end of the semester. The dependent variable was participants’ interval-level score on the GAD-7. Accordingly, a 3 (program) X 2 (time) mixed-design analysis of variance (ANOVA) was the most appropriate statistical test for answering the research question (Field, 2018).
The data were examined for the following statistical assumptions for a mixed-design ANOVA: absence of outliers, normality, homogeneity of variance, and sphericity of the covariance matrix, based on the recommendations of Field (2018). Standardized scores revealed an absence of univariate outliers (all |z| < 3.29). Skewness and kurtosis values were highly consistent with a normal distribution, with the majority of values less than ±1.0. The results of a Levene’s test demonstrated that the data met the assumption of homogeneity of variance, F(2, 121) = 0.73, p = .486. Testing the data for sphericity was not applicable in this case, as the within-subjects independent variable (time) comprised only two levels.
——-End Exemplar——-
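For readers who want to reproduce an analysis like the exemplar’s outside of point-and-click software, the sketch below runs a 3 (program) X 2 (time) mixed-design ANOVA in Python with the pingouin library on simulated data. The variable names, data-generating values, and use of pingouin are illustrative assumptions, not the exemplar’s actual data or workflow.

```python
# pip install pingouin
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
programs = ["breathing", "exercise", "combined"]

# Simulate hypothetical pre/post GAD-7 scores in long format
# (one row per participant per time point).
rows, pid = [], 0
for program in programs:
    for _ in range(20):
        pre = rng.normal(10, 3)
        post = pre - rng.normal(2, 2)  # assumed improvement over time
        rows.append({"id": pid, "program": program, "time": "pre", "gad7": pre})
        rows.append({"id": pid, "program": program, "time": "post", "gad7": post})
        pid += 1
df = pd.DataFrame(rows)

# 3 (program) x 2 (time) mixed-design ANOVA.
aov = pg.mixed_anova(data=df, dv="gad7", within="time",
                     between="program", subject="id")
print(aov.round(3))
```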
Conclusion
The current article is a primer on guidelines, best practices, and recommendations for writing or evaluating the rigor of the Methods section of quantitative studies. Although the major elements of the Methods section summarized in this manuscript tend to be similar across the national peer-reviewed counseling journals, differences can exist between journals based on the content of the article and the editorial board members’ preferences. Accordingly, it can be advantageous for prospective authors to review recently published manuscripts in their target journal(s) to look for any similarities in the structure of the Methods (and other sections). For instance, in one journal, participants and procedures might be reported in a single subsection, whereas in other journals they might be reported separately. In addition, most journals post a list of guidelines for prospective authors on their websites, which can include instructions for writing the Methods section. The Methods section might be the most important section in a quantitative study, as in all likelihood methodological flaws cannot be resolved once data collection is complete, and serious methodological flaws will compromise the integrity of the entire study, rendering it unpublishable. It is also essential that consumers of quantitative research can proficiently evaluate the quality of a Methods section, as poor methods can make the results meaningless. Accordingly, the significance of carefully planning, executing, and writing a quantitative research Methods section cannot be overstated.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
American Counseling Association. (2014). ACA code of ethics.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). The standards for educational and psychological testing. https://www.aera.net/Publications/Books/Standards-for-Educational-Psychological-Testing-2014-Edition
American Psychological Association. (2020). Publication manual of the American Psychological Association: The official guide to APA style (7th ed.).
Balkin, R. S., & Sheperis, C. J. (2011). Evaluating and reporting statistical power in counseling research. Journal of Counseling & Development, 89(3), 268–272. https://doi.org/10.1002/j.1556-6678.2011.tb00088.x
Bardhoshi, G., & Erford, B. T. (2017). Processes and procedures for estimating score reliability and precision. Measurement and Evaluation in Counseling and Development, 50(4), 256–263. https://doi.org/10.1080/07481756.2017.1388680
Berwick, D. M., Murphy, J. M., Goldman, P. A., Ware, J. E., Jr., Barsky, A. J., & Weinstein, M. C. (1991). Performance of a five-item mental health screening test. Medical Care, 29(2), 169–176. https://doi.org/10.1097/00005650-199102000-00008
Cook, R. M. (2021). Addressing missing data in quantitative counseling research. Counseling Outcome Research and Evaluation, 12(1), 43–53. https://doi.org/10.1080/21501378.2019.171103
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.
Educational Testing Service. (2016). ETS international principles for fairness review of assessments: A manual for developing locally appropriate fairness review guidelines for various countries. https://www.ets.org/content/dam/ets-org/pdfs/about/fairness-review-international.pdf
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149
Field, A. (2018). Discovering statistics using IBM SPSS Statistics (5th ed.). SAGE.
Flinn, R. E., & Kalkbrenner, M. T. (2021). Matching variables with the appropriate statistical tests in counseling research. Teaching and Supervision in Counseling, 3(3), Article 4. https://doi.org/10.7290/tsc030304
Foster, T., Steen, L., O’Ryan, L., & Nelson, J. (2016). Examining how the Adlerian life tasks predict anxiety in first-year counseling students. The Journal of Individual Psychology, 72(2), 104–120. https://doi.org/10.1353/jip.2016.0009
Giordano, A. L., Schmit, M. K., & Schmit, E. L. (2021). Best practice guidelines for publishing rigorous research in counseling. Journal of Counseling & Development, 99(2), 123–133. https://doi.org/10.1002/jcad.12360
Hays, D. G., & Singh, A. A. (2012). Qualitative inquiry in clinical and educational settings. Guilford.
Heppner, P. P., Wampold, B. E., Owen, J., Wang, K. T., & Thompson, M. N. (2016). Research design in counseling (4th ed.). Cengage.
Kalkbrenner, M. T. (2021a). Alpha, omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation. https://doi.org/10.1080/21501378.2021.1940118
Kalkbrenner, M. T. (2021b). Enhancing assessment literacy in professional counseling: A practical overview of factor analysis. The Professional Counselor, 11(3), 267–284. https://doi.org/10.15241/mtk.11.3.267
Kalkbrenner, M. T. (2021c). A practical guide to instrument development and score validation in the social sciences: The MEASURE Approach. Practical Assessment, Research & Evaluation, 26(1), Article 1. https://doi.org/10.7275/svg4-e671
Kalkbrenner, M. T. (2022). Validation of scores on the Lifestyle Practices and Health Consciousness Inventory with Black and Latinx adults in the United States: A three-dimensional model. Measurement and Evaluation in Counseling and Development, 55(2), 84–97. https://doi.org/10.1080/07481756.2021.1955214
Kane, M. (2010). Validity and fairness. Language Testing, 27(2), 177–182. https://doi.org/10.1177/0265532209349467
Kane, M., & Bridgeman, B. (2017). Research on validity theory and practice at ETS. In R. E. Bennett & M. von Davier (Eds.), Advancing human assessment: The methodological, psychological and policy contributions of ETS (pp. 489–552). Springer. https://doi.org/10.1007/978-3-319-58689-2_16
Korn, J. H., & Bram, D. R. (1988). What is missing in the Method section of APA journal articles? American Psychologist, 43(12), 1091–1092. https://doi.org/10.1037/0003-066X.43.12.1091
Leedy, P. D., & Ormrod, J. E. (2019). Practical research: Planning and design (12th ed.). Pearson.
Lutz, W., & Hill, C. E. (2009). Quantitative and qualitative methods for psychotherapy research: Introduction to special section. Psychotherapy Research, 19(4–5), 369–373. https://doi.org/10.1080/10503300902948053
Manzar, M. D., Alghadir, A. H., Anwer, S., Alqahtani, M., Salahuddin, M., Addo, H. A., Jifar, W. W., & Alasmee, N. A. (2021). Psychometric properties of the General Anxiety Disorders-7 Scale using categorical data methods: A study in a sample of university attending Ethiopian young adults. Neuropsychiatric Disease and Treatment, 17(1), 893–903. https://doi.org/10.2147/NDT.S295912
McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412–433. https://doi.org/10.1037/met0000144
Myers, T. A. (2011). Goodbye, listwise deletion: Presenting hot deck imputation as an easy and effective tool for handling missing data. Communication Methods and Measures, 5(4), 297–310. https://doi.org/10.1080/19312458.2011.624490
Neeru, Khakha, D. C., Satapathy, S., & Dey, A. B. (2015). Impact of Jacobson Progressive Muscle Relaxation (JPMR) and deep breathing exercises on anxiety, psychological distress and quality of sleep of hospitalized older adults. Journal of Psychosocial Research, 10(2), 211–223.
Neukrug, E. S., & Fawcett, R. C. (2015). Essentials of testing and assessment: A practical guide for counselors, social workers, and psychologists (3rd ed.). Cengage.
Onwuegbuzie, A. J., & Leech, N. L. (2005). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. International Journal of Social Research Methodology, 8(5), 375–387. https://doi.org/10.1080/13645570500402447
Patterson, M. S., Gagnon, L. R., Vukelich, A., Brown, S. E., Nelon, J. L., & Prochnow, T. (2021). Social networks, group exercise, and anxiety among college students. Journal of American College Health, 69(4), 361–369. https://doi.org/10.1080/07448481.2019.1679150
Rivera-Riquelme, M., Piqueras, J. A., & Cuijpers, P. (2019). The Revised Mental Health Inventory-5 (MHI-5) as an ultra-brief screening measure of bidimensional mental health in children and adolescents. Psychiatry Research, 274, 247–253. https://doi.org/10.1016/j.psychres.2019.02.045
Rovai, A. P., Baker, J. D., & Ponton, M. K. (2013). Social science research design and statistics: A practitioner’s guide to research methods and SPSS analysis. Watertree Press.
Servan-Schreiber, D. (2009). Anticancer: A new way of life (3rd ed.). Viking Publishing.
Sink, C. A., & Mvududu, N. H. (2010). Statistical power, sampling, and effect sizes: Three keys to research relevancy. Counseling Outcome Research and Evaluation, 1(2), 1–18. https://doi.org/10.1177/2150137810373613
Sink, C. A., & Stroh, H. R. (2006). Practical significance: The use of effect sizes in school counseling research. Professional School Counseling, 9(5), 401–411. https://doi.org/10.1177/2156759X0500900406
Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication, 25(3), 389–411. https://doi.org/10.1177/0741088308317815
Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing Generalized Anxiety Disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092–1097. https://doi.org/10.1001/archinte.166.10.1092
Sriken, J., Johnsen, S. T., Smith, H., Sherman, M. F., & Erford, B. T. (2022). Testing the factorial validity and measurement invariance of college student scores on the Generalized Anxiety Disorder (GAD-7) Scale across gender and race. Measurement and Evaluation in Counseling and Development, 55(1), 1–16. https://doi.org/10.1080/07481756.2021.1902239
Stamm, B. H. (2010). The Concise ProQOL Manual (2nd ed.). bit.ly/StammProQOL
Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1–25. https://doi.org/10.1146/annurev.clinpsy.032408.153639
Swank, J. M., & Mullen, P. R. (2017). Evaluating evidence for conceptually related constructs using bivariate correlations. Measurement and Evaluation in Counseling and Development, 50(4), 270–274. https://doi.org/10.1080/07481756.2017.1339562
Thorsen, S. V., Rugulies, R., Hjarsbech, P. U., & Bjorner, J. B. (2013). The predictive value of mental health for long-term sickness absence: The Major Depression Inventory (MDI) and the Mental Health Inventory (MHI-5) compared. BMC Medical Research Methodology, 13(1), Article 115. https://doi.org/10.1186/1471-2288-13-115
Vijayan, P., & Joseph, M. I. (2015). Wellness and social interaction anxiety among adolescents. Indian Journal of Health and Wellbeing, 6(6), 637–639.
Wester, K. L., Borders, L. D., Boul, S., & Horton, E. (2013). Research quality: Critique of quantitative articles in the Journal of Counseling & Development. Journal of Counseling & Development, 91(3), 280–290. https://doi.org/10.1002/j.1556-6676.2013.00096.x
Appendix
Outline and Brief Overview of a Quantitative Methods Section
Methods
- Research design (e.g., group comparison [experimental, quasi-experimental, ex-post-facto], correlational/predictive) and conceptual framework
- Researcher bias and reflexivity statement
Participants and Procedures
- Recruitment procedures for data collection in enough detail for replication
- Research ethics including but not limited to receiving institutional review board (IRB) approval
- Sampling procedure: Researcher access to prospective participants, recruitment procedures, and data collection modality (e.g., online survey)
- Sampling technique: Probability sampling (e.g., simple random sampling, systematic random sampling, stratified random sampling, cluster sampling) or non-probability sampling (e.g., volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, matched sampling)
- A priori statistical power analysis
- Sampling frame, response rate, raw sample, missing data, and the size of the final useable sample
- Demographic breakdown for participants
- Timeframe, setting, and location where data were collected
Measures
- Introduction of the instrument and construct(s) of measurement (include sample test items)
- Reliability and validity evidence of test scores (for each instrument):
- Existing reliability (e.g., internal consistency [coefficient alpha, coefficient omega, or coefficient H], test/retest) and validity (e.g., internal structure, convergent/divergent, criterion) evidence of scores
- *Note: At a minimum, internal structure validity evidence of scores should include both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).
- Reliability and validity evidence of test scores with the data set in the present study
- *Note: Using coefficient alpha alone without statistical assumption checking is insufficient. Compute both coefficient omega and coefficient alpha, or compute alpha with proper assumption checking.
- Cross-cultural fairness and norming: Commentary on how and in what ways cross-cultural fairness guided the selection, administration, and interpretation of procedures and test results
- Review and citations of original psychometric studies and normative samples
Data Analysis
- Operationalized variables and scales of measurement
- Procedures for matching variables with appropriate statistical analyses
- Assumption checking procedures
Note. This appendix is a brief summary and not a substitute for the narrative in the text of this article.
Michael T. Kalkbrenner, PhD, NCC, is an associate professor at New Mexico State University. Correspondence may be addressed to Michael T. Kalkbrenner, 1780 E. University Ave., Las Cruces, NM 88003, mkalk001@nmsu.edu.
Nov 9, 2022 | Volume 12 - Issue 3
Derron Hilts, Yanhong Liu, Melissa Luke
The authors examined whether school counselors’ emotional intelligence predicted their comprehensive school counseling program (CSCP) implementation and whether engagement in transformational leadership practices mediated the relationship between emotional intelligence and CSCP implementation. The sample for the study consisted of 792 school counselors nationwide. The findings demonstrated the significant mediating role of transformational leadership on the relationship between emotional intelligence and CSCP implementation. Implications for the counseling profession are discussed.
Keywords: emotional intelligence, school counselors, transformational leadership, comprehensive school counseling program, implementation
School counselors have been called upon to design and implement culturally responsive comprehensive school counseling programs (CSCPs) that have a deliberate and systemic focus on facilitating optimal student outcomes and development (American School Counselor Association [ASCA], 2017, 2019b). To this end, school counselors are expected to align their activities with the ASCA National Model (ASCA, 2019b) with an aim toward facilitating students’ knowledge, attitudes, skills, and behaviors to be academically and socially/emotionally successful and preparing students for college and career (ASCA, 2021). Relatedly, ASCA (2019a) urges school counselors to apply and enact a model of leadership in the process of program implementation. Several studies (e.g., Mason, 2010; Mullen et al., 2019; Shillingford & Lambie, 2010) have provided empirical evidence that supports the predictive role of school counselors’ leadership on their program implementation outcomes. Still, little is known about the relationship between school counselors’ program implementation and their leadership practices grounded in a specific model such as transformational leadership (Bolman & Deal, 1997; Kouzes & Posner, 1995). Understanding this relationship may allow school counselors to better align their practices within a specific leadership framework consistent with best practice (ASCA, 2019a).
Although leadership has been broadly established as a macro-level capability, emotional intelligence has started to gain interest in recent literature, as intra- and interpersonal competencies are central to school counselors’ practice (Hilts et al., 2019; Hilts, Liu, et al., 2022; Mullen et al., 2018). For instance, school counselors must be emotionally attuned to themselves and others to more effectively navigate the complexities of systems in which they operate (Mullen et al., 2018). One way to achieve such emotional attunement may be by respecting and validating others’ perspectives and providing emotional support to enact interpersonal influence aimed at facilitating educational partners’ keenness toward programmatic efforts (Hilts et al., 2019; Hilts, Liu, et al., 2022; Jordan & Lawrence, 2009). The purpose of the current study is to examine the mechanisms between school counselors’ emotional intelligence, transformational leadership, and CSCP implementation.
Comprehensive School Counseling Programs
Although school counseling programs will vary in structure based on the unique needs of school and community partners (Mason, 2010), programs should be comprehensive in scope, preventative by design, and developmental in nature (ASCA, 2017). CSCP implementation, which comprises a core component of school counseling practice, involves multilevel services (e.g., instruction, consultation, collaboration) and assessments (e.g., program assessments, annual results reports). The functioning of these services and assessments is further defined and managed within the broader school community by the CSCP (Duquette, 2021). Moreover, CSCPs are generally aligned with the ASCA National Model (ASCA, 2019b) to create a shared vision among school counselors to have a more deliberate and systemic focus on facilitating optimal student outcomes and development.
Over the past 20 years, researchers have consistently found positive relationships between CSCP implementation and student achievement reflected through course grades and graduation/retention rates (Sink et al., 2008) and achievement-related outcomes such as behavioral issues and attendance (Akos et al., 2019). Students who attend schools with better-established and more fully implemented CSCPs are more likely to perform well academically and behaviorally (Akos et al., 2019). Additionally, researchers have found that school counselors who engage in multilevel services associated with a CSCP are more likely to have higher levels of wellness functioning compared to those who are less engaged in delivering these services (Randick et al., 2018). As such, CSCP implementation seems to be positively related not only to student development and achievement but also to the overall well-being of school counselors.
Designing and implementing a culturally responsive CSCP demands a collaborative effort between both school counselors and educational partners to create and sustain an environment that is responsive to students’ diverse needs (ASCA, 2017). This ongoing and iterative process requires school counselors to be emotionally attuned with school, family, and community partners to co-construct, facilitate, and lead initiatives to more efficaciously implement equitable services within their programs (ASCA, 2019b; Bryan et al., 2017). School counselors must engage in leadership and be attentive toward their self- and other-awareness and management to traverse diverse contexts involving differences in personalities, values and goals, and ideologies (Mullen et al., 2018). Although researchers have reported that school counselors’ CSCP implementation is positively related to their leadership (e.g., Mason, 2010), no studies have investigated the relationship between emotional intelligence and CSCP implementation.
Emotional Intelligence
Emotional intelligence generally refers to the ability to recognize, comprehend, and manage the emotions of oneself and others to accomplish individual and shared goals (Kim & Kim, 2017). Scholars have purported that emotional intelligence can be subsumed into two overarching forms: trait emotional intelligence and ability emotional intelligence (Petrides & Furnham, 2000a, 2000b, 2001). Trait emotional intelligence, also known as trait emotional self-efficacy, involves “a constellation of behavioral dispositions and self-perceptions concerning one’s ability to recognize, process, and utilize emotional-laden information” (Petrides et al., 2004, p. 278). Ability emotional intelligence, also referred to as cognitive-emotional ability, concerns an individual’s emotion-related cognitive abilities (Petrides & Furnham, 2000b). Said differently, trait emotional intelligence is in the realm of an individual’s personality (e.g., social awareness), whereas ability emotional intelligence denotes an individual’s actual capabilities to perceive, understand, and respond to emotionally charged situations.
Over the past two decades, scholars have expanded the scope of emotional intelligence to have a deliberate focus on how emotional intelligence occurs within teams or groups in the workforce context (Jordan et al., 2002; Jordan & Lawrence, 2009). Given the salience of emotions in various professional and work contexts (e.g., Jordan & Troth, 2004), Jordan and colleagues’ (2002) Workgroup Emotional Intelligence Profile (WEIP) facilitates a better understanding of how emotional intelligence manifests in teams. The WEIP centralizes emotional intelligence around the “understanding of emotional processes” (Jordan et al., 2002, p. 197). Using the WEIP, researchers revealed that higher emotional intelligence scores are positively related to job satisfaction, organizational citizenship (e.g., performing competently under pressure), organizational commitment, and school and work performance (Miao et al., 2017a, 2017b; Van Rooy & Viswesvaran, 2004). Conversely, higher scores of emotional intelligence were negatively associated with turnover intentions and counterproductive behavior (Miao et al., 2017a, 2017b).
Emotional intelligence has also gained increased attention in the counseling literature. For example, Easton et al. (2008) identified emotional intelligence as a significant predictor of counseling self-efficacy in the areas of attending to the counseling process and dealing with difficult client behavior. Following a two-phase investigation, Easton and colleagues demonstrated the stability of emotional intelligence over a 9-month timeframe in groups of both professional counselors and counselors-in-training; thus, the researchers argued that emotional intelligence may be an inherent characteristic associated with the career choice of counseling. In an earlier study with a sample of 108 school counselors, emotional intelligence was found to be significantly and uniquely related to school counselors’ multicultural counseling competence (Constantine & Gainor, 2001). More recently, school counselors’ emotional intelligence was found to be positively related to leadership self-efficacy and experience (Mullen et al., 2018).
School Counseling Leadership Practice
Leadership practice is a dynamic, interpersonal phenomenon within which school counselors engage in behaviors that mobilize support from educational partners to achieve programmatic and organizational objectives aimed at promoting student achievement and development (Hilts, Peters, et al., 2022). The focus on leadership practice entails an emphasis on the actual behavior of the individual, which scholars have contended is a byproduct of both individual and contextual factors in which these behaviors occur (Hilts, Liu, et al., 2022; Mischel & Shoda, 1998; Scarborough & Luke, 2008). For instance, school counselors’ support from other school partners (Dollarhide et al., 2008; Robinson et al., 2019) and previous leadership experience (Hilts, Liu, et al., 2022; Lowe et al., 2017) have been found to influence school counselors’ engagement in leadership. Hilts, Liu, and colleagues (2022) found that intra- and interpersonal factors, such as multicultural competence, leadership self-efficacy, and psychological empowerment, significantly predicted school counselors’ engagement in leadership. Among several models of leadership (e.g., Bolman & Deal, 1997; Kouzes & Posner, 1995), transformational leadership in particular has been situated in the context of school counseling (Gibson et al., 2018).
Transformational School Counseling Leadership
Transformational leadership is described as behaviors aimed at encouraging others to enact leadership, challenge the status quo, and actively pursue learning and development to achieve higher performance (Bolman & Deal, 1997; Kouzes & Posner, 1995). Individuals employing transformational leadership foster a climate of trust and respect and inspire motivation among others by facilitating emotional attachments and commitment to others and the organization’s mission. More recently, Gibson et al. (2018) constructed and validated the School Counseling Transformational Leadership Inventory (SCTLI) in an effort to support school counselors in conceptualizing and informing their approach to leadership. The SCTLI (Gibson et al., 2018)—grounded in the ASCA National Model (ASCA, 2012) and the general transformational leadership literature (e.g., Avolio et al., 1991)—offers a framework to support engagement in leadership within a school context. For example, school counselors build partnerships with important decision-makers in the school and community and empower educational partners to act to improve the program and the school. School counselors engaging in transformational leadership subscribe to an egalitarian structure in which they engage in shared decision-making, promote a united vision, and inspire others to work toward positive change among students and the broader school community (Lowe et al., 2017). Beyond being studied as an outcome variable itself (Hilts, Liu, et al., 2022), school counselors’ enactment of leadership has also been found to be positively associated with their outcomes of CSCP implementation (Mason, 2010; Mullen et al., 2019).
Emotional Intelligence and the Mediating Role of Transformational Leadership
Over the past several decades, emotional intelligence has been increasingly attributed as a critical trait and ability of individuals employing effective leadership (Kim & Kim, 2017). For instance, Gray (2009) asserted that effective school leaders are able to perceive, understand, and monitor their own and others’ internal states and use this information to guide the thinking and actions of themselves and others. Mullen and colleagues (2018) found that, among a sample of 389 school counselors, domains of emotional intelligence (Jordan & Lawrence, 2009) were significant predictors of leadership self-efficacy and leadership experience. Specifically, Mullen et al.’s (2018) results showed that (a) awareness of own emotions and management of own and others’ emotions were positively related to leadership self-efficacy; (b) management of own and others’ emotions significantly predicted leadership experience; and (c) awareness and management of others’ emotions were positively associated with self-leadership.
Moreover, initial research has revealed that not only is emotional intelligence an antecedent of leadership (Barbuto et al., 2014; Harms & Credé, 2010; Mullen et al., 2018), but that leadership, particularly transformational leadership, mediates the relationship between emotional intelligence and job-related behavior such as job performance (Hur et al., 2011; Hussein & Yesiltas, 2020; Rahman & Ferdausy, 2014). For example, Hussein and Yesiltas’s (2020) results indicated that not only were higher scores of emotional intelligence positively associated with organizational commitment, but that transformational leadership partially mediated the relationship between emotional intelligence and organizational commitment. In another study, Hur and colleagues (2011) sought to examine whether transformational leadership mediated the link between emotional intelligence and multiple outcomes among 859 public employees across 55 teams. The researchers’ results showed that transformational leadership mediated the relationship between emotional intelligence and service climate, as well as between emotional intelligence and leadership effectiveness. Scholars have explained this relationship as the ability of individuals employing transformational leadership to inspire and motivate others to accomplish beyond self- and organizational expectations and redirect feelings of frustration from setbacks to constructive solutions (Hur et al., 2011; Hussein & Yesiltas, 2020).
Purpose of the Study
Taken together, emotional intelligence has been identified in the counseling literature as a significant predictor of counseling self-efficacy and competence (Constantine & Gainor, 2001; Easton et al., 2008). It has also been well established in the workforce literature as being positively related to job performance and leadership outcomes (Hussein & Yesiltas, 2020; Kim & Kim, 2017). The broader leadership literature also contains evidence in support of the mediating role of transformational leadership between emotional intelligence and performance outcomes (Hur et al., 2011; Hussein & Yesiltas, 2020; Rahman & Ferdausy, 2014). However, emotional intelligence has not been examined in relation to school counselors’ CSCP implementation and service outcomes, although CSCP implementation has been widely embraced as a core component of the ASCA National Model. Likewise, although emotional intelligence has been studied alongside counseling practice and leadership separately, we identified no empirical research that has examined the mechanisms between school counselors’ emotional intelligence, transformational leadership practice, and outcomes of program implementation. The present study seeks to address these gaps. Thus, the two research questions that guided our study were: (a) Does school counselors’ emotional intelligence predict their CSCP implementation? and (b) Does engagement in transformational leadership practice mediate the relationship between emotional intelligence and CSCP implementation? Given the synergistic focus on collaboration (or teamwork) shared by the school and workforce contexts, coupled with previous empirical evidence, we hypothesized that (a) school counselors’ emotional intelligence predicts their CSCP implementation, and (b) transformational leadership practice mediates the relationship between emotional intelligence and CSCP implementation.
Method
Research Design
In the present study, we utilized a correlational, cross-sectional survey design and conducted analyses in the Statistical Package for the Social Sciences (SPSS, version 27). To test our hypotheses, we performed a mediation analysis using Hayes’s PROCESS in order to establish the extent of influence of an independent variable on an outcome variable through a mediator (Hayes, 2012). Mediation analysis answers how an effect occurs between variables and is based on the prerequisite that the independent variable/predictor is often considered the “causal antecedent” to the outcome variable of interest (Hayes, 2012, p. 3). Furthermore, we expected that the effects of school counselors’ emotional intelligence on their CSCP implementation would be partly explained by the effects of their engagement in transformational leadership.
Participants
Participants included in the final analysis were 792 practicing school counselors in the United States, of whom 94.6% (n = 749) reported being certified/licensed as school counselors and 5.4% (n = 43) reported being either not certified/licensed or “unsure.” The sample’s geographic location was mostly suburban (n = 399, 50.4%), followed by rural (n = 195, 24.6%) and urban (n = 184, 23.2%); 1.8% of participants (n = 14) did not disclose their setting. Public schools accounted for 86.2% (n = 683) of participants’ work settings, followed by charter (n = 42, 5.3%) and private (n = 40, 5.1%) schools, while 3.4% (n = 27) of participants indicated “other” or did not disclose. For grade levels served, 13% (n = 103) of participants worked at the PK–4 level, 20.8% (n = 165) at the 5–8 level, 28.4% (n = 225) at the 9–12 level, and 37.8% (n = 299) at the combined K–12 level. Participants’ race/ethnicity included Asian/Native Hawaiian/Pacific Islander (n = 26, 3.3%), Multiracial (n = 47, 5.9%), Black/African American (n = 56, 7.1%), Hispanic/Latino (n = 70, 8.8%), and White (n = 593, 74.9%). Participants’ mean age was 43 years (range: 23–77). Of the 792 participants, 82.4% (n = 653) identified as cisgender female, 11.0% (n = 88) as cisgender male, 0.3% (n = 2) as transgender female, 0.3% (n = 2) as transgender male, 3.8% (n = 30) chose “prefer to self-identify,” and 2.2% (n = 17) chose not to answer. Our sample was representative of the larger population based on the results of a recent nationwide study by ASCA (2021), in which approximately 7,000 school counselors were surveyed; demographic statistics from that study similar to ours included 88% of participants working in public, non-charter schools, 19% working at the middle school level, and 24% working in urban schools.
Procedures and Data Collection
Prior to engaging in data collection, we received approval from our university’s IRB. According to our a priori power analysis conducted using G*Power 3.1 software (Faul et al., 2007), a sample size of 558 participants would be sufficient for the current study, assuming a small effect size (f² = 0.1); therefore, we attempted to achieve a nationally representative sample through a variety of recruitment methods. In an effort to represent the target population, we used non-probability sampling methods (Balkin & Kleist, 2016), which included sending, posting, or requesting dissemination of a research recruitment message and survey link to (a) school counselors of current or former Recognized ASCA Model Program (RAMP)-designated school counseling programs, (b) state school counseling associations, (c) several closed Facebook groups for school counselors, (d) the ASCA Scene online discussion forum, and (e) the university’s school counselor listserv. In addition, similar to recruitment methods used by Hilts and colleagues (2019) in previous school counseling research, we emailed ASCA members directly with an invitation to participate. We shared one to two follow-up announcements through these same methods 2 to 4 weeks after the initial recruitment message.
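As a point of reference, the required sample size for a fixed-model multiple regression F test can be approximated outside of G*Power by iterating over the noncentral F distribution. The Python sketch below is illustrative only: the function name is ours, and the choice of two predictors is an assumption, as the number of predictors entered into G*Power is not reported above; the resulting N depends on the exact test specified.

```python
from scipy import stats

def required_n(f2, n_predictors, alpha=0.05, target_power=0.80):
    """Smallest N for a fixed-model multiple regression F test
    (R-squared deviation from zero), via the noncentral F distribution."""
    n = n_predictors + 2  # smallest N with a positive error df
    while True:
        df1 = n_predictors
        df2 = n - n_predictors - 1
        ncp = f2 * n  # noncentrality parameter, lambda = f^2 * N
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        power = stats.ncf.sf(f_crit, df1, df2, ncp)
        if power >= target_power:
            return n, power
        n += 1

# Illustrative call: effect size f^2 = 0.1 with two hypothetical predictors.
print(required_n(f2=0.1, n_predictors=2))
```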
The link within the research recruitment announcement directed participants to an informed consent page. After indicating their willingness to participate in the study, participants were then directed to the online survey managed by the Qualtrics platform. On average, the survey took approximately 15 minutes to complete.
Instrumentation
Demographic Questionnaire
The demographic questionnaire consisted of 18 questions asked of all eligible participants. The demographic form included questions about participants’ school level, geographic location, school type, and student caseload. We also asked participants about other demographic information including race/ethnicity, gender, age, and years of experience.
Workgroup Emotional Intelligence Profile
The Workgroup Emotional Intelligence Profile-Short Version (WEIP-S; Jordan & Lawrence, 2009), a shortened version of the WEIP (Jordan et al., 2002) and the WEIP-6 (Jordan & Troth, 2004), is a 16-item, self-report scale that measures participants’ emotional intelligence within a team context. Jordan and Lawrence (2009) selected 25 behaviorally based items from the 30-item WEIP-6 (Jordan & Troth, 2004). Through confirmatory factor analyses (CFAs) conducted to achieve the best-fitting model, the final WEIP-S consisted of 16 items across four factors, each of which had good internal consistency reliability in the sample: awareness of own emotions (4 items, α = .85), management of own emotions (4 items, α = .77), awareness of others’ emotions (4 items, α = .88), and management of others’ emotions (4 items, α = .77). To enhance construct validity of the WEIP-S, Jordan and Lawrence employed model replication analyses and test-retest stability across three time periods. Examples of items from each dimension are (a) “I can explain the emotions I feel to team members” (awareness of own emotions); (b) “When I am frustrated with fellow team members, I can overcome my frustration” (management of own emotions); (c) “I can read fellow team members’ ‘true’ feelings, even if they try to hide them” (awareness of others’ emotions); and (d) “I can provide the ‘spark’ to get fellow team members enthusiastic” (management of others’ emotions). Items are measured on a Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree). For analyses, we summed scores across all dimensions, with higher scores indicating greater emotional intelligence. In our sample, Cronbach’s α and McDonald’s omega (ω) for the WEIP-S were both .93, indicating good internal consistency.
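For readers who want to verify internal consistency estimates like those reported here, coefficient alpha can be computed directly from an item-score matrix. The following minimal Python sketch implements the standard formula; the function and the commented call are hypothetical, and coefficient omega would additionally require loadings from a fitted one-factor model.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Illustrative call on the 16 WEIP-S items (hypothetical array `weip_items`):
# alpha = cronbach_alpha(weip_items)
```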
School Counseling Transformational Leadership Inventory
The SCTLI (Gibson et al., 2018) is a 15-item, self-report inventory that measures the leadership practices of school counselors. Items are measured on a Likert-type scale ranging from 1 (never) to 5 (always or almost always), and a total score indicates the self-reported level of engagement in overall leadership practices. Sample items on the SCTLI include “I have empowered parents and colleagues to act to improve the program and the school” and “I have used persuasion with decision-makers to accomplish school counseling goals.” Findings from Gibson et al.’s (2018) exploratory factor analyses (EFAs) and CFAs revealed a one-factor model of transformational leadership practices based on transformational leadership theory and the responsibilities described within the ASCA National Model (ASCA, 2019b; CFI = .94, TLI = .93, RMSEA = .08). Using Pearson’s correlation, the researchers also found significant evidence of concurrent validity (r = .68, p < .01). Additionally, in their sample, Gibson et al. reported strong internal consistency reliability (Cronbach’s α = .94). In the current study, Cronbach’s α and McDonald’s ω for the SCTLI were .93 and .94, respectively.
School Counseling Program Implementation
The School Counseling Program Implementation Survey-Revised (SCPIS-R; Clemens et al., 2010; Fye et al., 2020) is a self-report survey that measures school counselors’ level of CSCP implementation. The SCPIS-R (Fye et al., 2020), used in the current study, is a 14-item Likert-type scale ranging from 1 (not present) to 4 (fully implemented). The factor structure was established through two studies that utilized EFA (Clemens et al., 2010) and CFA (Fye et al., 2020). The data from the original study (Clemens et al., 2010) yielded a three-factor model of the SCPIS, which included programmatic orientation (7 items, α = .79), school counselors’ use of computer software (3 items, α = .83), and school counseling services (7 items, α = .81), with a total SCPIS α of .87. However, Fye et al.’s (2020) CFA findings suggested a modified two-factor model was a more appropriate fit; the modified two-factor structure of the SCPIS includes only programmatic orientation (7 items, α = .86) and school counseling services (7 items, α = .83), with a total SCPIS α of .90. Example items from each factor are (a) “needs assessments are completed regularly and guide program planning” (programmatic orientation) and (b) “services are organized so that all students are well served and have access to them” (school counseling services). We calculated participants’ total SCPIS scores, with higher scores indicating greater CSCP implementation (Mason, 2010; Mullen et al., 2019). In the present study, the SCPIS-R demonstrated good reliability in our sample (Cronbach’s α = .90; McDonald’s ω = .90).
Data Analysis
Missing Data Analysis and Assumptions Test
We received a total of 1,128 responses. Of these, 336 respondents were missing a significant portion (over 70%) of one or more of the main scales (i.e., WEIP-S, SCTLI, and SCPIS-R). We judged these values as not missing completely at random (NMCAR) and applied listwise deletion to the 336 cases; this pattern of missingness may be attributable to the survey length and time commitment, which is discussed further in the Limitations section. Across the remaining 792 cases, missing values accounted for 0.1%–0.7% of values on the respective scales. We performed Little’s Missing Completely at Random test using SPSS Statistics Version 26.0, which yielded a nonsignificant chi-square value (p > .05), suggesting that the remaining missing values were missing completely at random. Therefore, we retained all 792 cases and used multiple imputation (Scheffer, 2002) in SPSS to replace the missing values. Our data met the assumptions for mediation analysis: normality based on histograms, and linearity and homoscedasticity as demonstrated through scatterplots generated from univariate analysis.
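A screening pipeline consistent with the steps described above might look like the following Python sketch. The 70% threshold mirrors the rule reported here, but scikit-learn’s IterativeImputer is only a stand-in for the SPSS multiple-imputation step, and the data frame and column names are hypothetical.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def screen_and_impute(df, scales, max_missing=0.70):
    """Listwise-delete cases missing more than `max_missing` of any main
    scale, then impute the remaining scattered item-level gaps."""
    keep = np.ones(len(df), dtype=bool)
    for item_cols in scales.values():
        frac_missing = df[item_cols].isna().mean(axis=1).to_numpy()
        keep &= frac_missing <= max_missing
    df = df.loc[keep].copy()
    items = [c for cols in scales.values() for c in cols]
    df[items] = IterativeImputer(random_state=0).fit_transform(df[items])
    return df

# Hypothetical usage:
# scales = {"WEIP-S": weip_cols, "SCTLI": sctli_cols, "SCPIS-R": scpis_cols}
# clean_df = screen_and_impute(raw_df, scales)
```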
Mediation Analysis
In our mediation model (see Figure 1), given its combined trait-ability nature and stability over time, school counselors’ emotional intelligence was hypothesized as the causal antecedent to program implementation; we then hypothesized transformational leadership practice to be a mediator for the effect of school counselors’ emotional intelligence on program implementation. We tested our mediation model based on Baron and Kenny’s (1986) approach. Specifically, our mediation analysis entailed four steps involving (a) the role of school counselors’ emotional intelligence (X) in predicting CSCP implementation (Y), with the coefficient denoted as c to reflect the total effect that X has on Y; (b) the predictive role of school counselors’ emotional intelligence (X) on transformational leadership practice (M), with the coefficient denoted as a; (c) the effect of transformational leadership practice (M) on CSCP implementation (Y), controlling for the effect of emotional intelligence (X), with the coefficient denoted as b; and (d) the association between school counselors’ emotional intelligence (X) and CSCP implementation (Y), using transformational leadership practice (M) as a mediator, with the coefficient denoted as c′ (MacKinnon et al., 2012). The difference between the coefficients c and c′ (c − c′) is the mediation effect of transformational leadership practice.
Figure 1
The Hypothesized Mediation Model
[Path diagram: SC emotional intelligence predicts CSCP implementation directly and indirectly through SC transformational leadership practice.]
Note. SC = school counselors; CSCP = Comprehensive School Counseling Program.
Hayes’s PROCESS v3.5 (with 5,000 regenerated bootstrap samples) was used to perform the mediation analysis. Hayes’s PROCESS is an analytical function in SPSS used to specify and estimate coefficients of specified paths using ordinary least squares (OLS) regression (Hayes, 2012). We consulted Fritz and MacKinnon (2007) regarding sample adequacy for detecting a mediation effect. Specifically, to detect a medium mediation effect with .80 power, a sample of 397 is recommended for Baron and Kenny’s test, and a sample of 558 is considered adequate to detect small effects via percentile bootstrap (Fritz & MacKinnon, 2007). As such, our sample size of 792 met both criteria. According to MacKinnon et al. (2012), the mediation effect is significant if zero is excluded from the designated confidence interval (95% in our study).
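A percentile bootstrap analogous to the PROCESS procedure can be sketched as follows, reusing the hypothetical baron_kenny() helper above; the indirect effect is judged significant when the resulting 95% interval excludes zero.

```python
import numpy as np

def bootstrap_indirect(x, m, y, reps=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    n = len(y)
    ab = np.empty(reps)
    for i in range(reps):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        est = baron_kenny(x[idx], m[idx], y[idx])
        ab[i] = est["a"] * est["b"]
    low, high = np.percentile(ab, [2.5, 97.5])
    return low, high
```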
Results
Correlations
We performed a bivariate analysis on the main study variables of school counselors’ emotional intelligence (measured using the WEIP-S), transformational leadership practice (measured using the SCTLI), and school counselors’ CSCP implementation (measured using the SCPIS-R). School counselors’ emotional intelligence scores were positively correlated with their transformational leadership practice (r = .42, p < .001) and were positively correlated with their CSCP implementation (r = .34, p < .001). Similarly, school counselors’ transformational leadership practice was found to be positively correlated with CSCP implementation (r = .56, p < .001). Table 1 presents the correlations among the variables.
Table 1
Correlation Matrix of Study Variables
Variable   EI       TL       CSCP
EI         –        .42**    .34**
TL         .42**    –        .56**
CSCP       .34**    .56**    –
Note. EI = school counselors’ emotional intelligence scores; TL = school counselors’ transformational leadership; CSCP = school counselors’ comprehensive school counseling program implementation.
**p < .001
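For readers working outside SPSS, a correlation matrix like Table 1 can be produced in a few lines of Python; the data below are synthetic stand-ins for participants’ total scores, and the column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in data: one row per participant, total scale scores.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["EI", "TL", "CSCP"])

corr = df.corr(method="pearson")  # pairwise Pearson correlations
print(corr.round(2))
```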
Mediation Analysis Results
With the total effect model (Step 1), we found a positive relation between school counselors’ emotional intelligence (X) and their CSCP implementation (Y; coefficient c = 0.24; p < .001; CI [0.20, 0.29]). Namely, school counselors’ emotional intelligence scores significantly predicted their CSCP implementation. In Step 2, we found a positive association between school counselors’ emotional intelligence scores (X) and their transformational leadership practice (M; coefficient a = 0.38; p < .001; CI [0.32, 0.43]). In Step 3, school counselors’ transformational leadership practice (M) was found to significantly predict their CSCP implementation (Y; coefficient b = 0.40; p < .001; CI [0.35, 0.45]) while controlling for the effect of emotional intelligence (X). Lastly, after adding transformational leadership practice as a mediator, we noted a significant direct effect of emotional intelligence on school counselors’ CSCP implementation (coefficient c′ = 0.09; p = .0001; CI [0.05, 0.14]). We also detected a mediation effect (coefficient ab = 0.15, which equaled c − c′; p < .001; CI [0.12, 0.18]) of emotional intelligence on CSCP implementation through transformational leadership practice. The 95% confidence intervals did not include zero, so the path coefficients were significant.
We performed a Sobel test to further evaluate the significance of the mediation effect of school counselors’ transformational leadership practice, which yielded a Sobel test statistic of 9.97 (p < .001). The Sobel outcome corroborated the significance of our mediated effect. To calculate the effect size of our mediation analysis, we generated a kappa-squared value (κ²; Preacher & Kelley, 2011). Our κ² value of .17 suggested a medium effect size (Cohen, 1988). Table 2 presents regression results for the effect of school counselors’ emotional intelligence on their CSCP implementation outcomes as mediated by transformational leadership practice.
Table 2
Regression Results for Mediated Effect by Leadership Practice
[Table: OLS estimates, standard errors, and 95% CIs for paths a, b, c, c′, and ab; the coefficient values are reported in the text above.]
Note. N = 792. EI = emotional intelligence; TL = transformational leadership; CSCP = comprehensive school counseling program; CI = 95% confidence interval. The 95% CI for ab is obtained by the bias-corrected bootstrap with 5,000 resamples.
a R²(Y,X) is the proportion of variance in CSCP implementation explained by EI.
b R²(M,X) is the proportion of variance in TL explained by EI.
c R²(Y,MX) is the proportion of variance in CSCP implementation explained by EI and TL.
**p < .001.
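The Sobel statistic reported above follows the standard formula z = ab / sqrt(b²·SE_a² + a²·SE_b²), where SE_a and SE_b are the standard errors of paths a and b from their respective regressions. A minimal, hypothetical Python sketch:

```python
import math
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel z statistic and two-tailed p value for the indirect effect a*b."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * stats.norm.sf(abs(z))
    return z, p
```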
Discussion
In this national sample of 792 practicing school counselors, we examined whether school counselors’ emotional intelligence predicts their CSCP implementation. We also investigated whether engagement in transformational leadership practice mediated the relationship between school counselors’ emotional intelligence and CSCP implementation. First, we found that school counselors who reported higher scores of emotional intelligence were also more likely to score higher in CSCP implementation. Given that designing and implementing a CSCP requires school counselors to engage in a culturally responsive and collaborative effort (ASCA, 2017), our finding that emotional intelligence is positively correlated with CSCP implementation is not entirely unexpected. This result was consistent with previous evidence supporting the positive correlation between emotional intelligence and work performance (Miao et al., 2017a, 2017b; Van Rooy & Viswesvaran, 2004). The result also illustrated the predictive role of school counselors’ emotional intelligence on their CSCP implementation, beyond its significant association with counseling competencies (Constantine & Gainor, 2001; Easton et al., 2008).
Secondly, school counselors’ emotional intelligence was found to be positively associated with their engagement in transformational leadership. This result aligned with previous evidence that school counselors’ emotional intelligence is linked to leadership outcomes demonstrated through the workforce literature (Barbuto et al., 2014; Harms & Credé, 2010; Kim & Kim, 2017). Similarly, the result echoed Mullen et al.’s (2018) finding on the positive relationship between school counselors’ emotional intelligence and leadership scores measured by the Leadership Self-Efficacy Scale (LSES; Bobbio & Manganelli, 2009). Notably, the LSES was normed and validated with college students. Our results advanced the school counseling literature and corroborated the relationship between emotional intelligence and school counseling transformational leadership measured by the SCTLI, a scale developed specifically for school counselors. Our results suggest that school counselors may actively attend to emotional processes in order to effectively enact transformational leadership practice.
Thirdly, we found that school counselors’ engagement in transformational leadership significantly mediated the relationship between their emotional intelligence and CSCP implementation. Because leadership is woven into the ASCA National Model and is considered an integral component of a CSCP (ASCA, 2019b), and school counselors are required to develop collaborative partnerships with a range of educational partners (ASCA, 2019a; Bryan et al., 2017), we were not surprised to find these two concepts were related to CSCP implementation. This result also aligns with empirical evidence in the broader leadership literature that transformational leadership mediated the relationship between emotional intelligence and work performance (Hur et al., 2011; Hussein & Yesiltas, 2020). This result is particularly meaningful in that it positions school counseling leadership as a mediator, whereas prior research has examined it as either a significant predictor (Mason, 2010; Mullen et al., 2019) or an outcome variable itself (Hilts, Liu, et al., 2022; Mullen et al., 2018). It enables a more nuanced understanding of the mechanisms involved in emotional intelligence, leadership, and program implementation in a school counseling context. To the best of our knowledge, the current study is the first to find that, through leadership practice, school counselors’ emotional intelligence may exert an indirect effect on their CSCP implementation.
Implications
Results of this study have implications for school counselor practice, training, and supervision. Given the significant relationships between emotional intelligence, transformational leadership, and CSCP implementation, we suggest that practicing school counselors begin by assessing their emotional intelligence, transformational leadership, and CSCP implementation and then set goals to enhance their performance. This may be especially important considering that other research has suggested that school counselors’ engagement in leadership, as well as their other roles and responsibilities (e.g., multicultural competence; challenging co-workers about discriminatory practices), has changed since the onset of the COVID-19 pandemic (Hilts & Liu, 2022). For instance, Hilts and Liu’s (2022) results indicated that school counselors’ leadership practice scores were higher during the pandemic than prior to the COVID-19 outbreak.
Next, school counselors can seek resources and professional development opportunities to support their goals. For example, school counselors may benefit from professional development focused on social-emotional learning (SEL), given SEL’s competency approach to building collaborative relationships (Collaborative for Academic, Social, and Emotional Learning, n.d.). That said, school counselors should also seek supports to experientially integrate their intrapersonal, interpersonal, and systemic skills associated with emotional intelligence, transformational leadership, and CSCP implementation. Intentional application of the Model for Supervision of School Counseling Leadership (Hilts, Peters, et al., 2022) may provide one such example for both school counseling practitioners and those in training.
School counselor training programs can also identify meaningful opportunities to infuse emotional intelligence and transformational leadership into school counselor coursework and supervision. Scarborough and Luke (2008) identified the important role that exposure in training to models of successful CSCP implementation and related resources plays in subsequent self-efficacy. As such, not only can school counseling coursework infuse the ASCA National Model Implementation Guide: Manage & Assess (ASCA, 2019b) and the Making DATA Work: An ASCA National Model publication (ASCA, 2018) along with additional emotional intelligence and transformational leadership resources, but school counseling faculty and supervisors should also intentionally incorporate school counseling students’ ongoing exposure to practicing school counselors and supervisors with high scores of emotional intelligence and transformational leadership.
Limitations
As with all research, the results of this study need to be understood in consideration of its methodological strengths and limitations. Although we obtained a large national sample, the data collection procedures used in this study prevented us from determining the survey response rate. As such, we are unable to make any claim about non-response bias, and it is possible that school counselors who declined to participate differed significantly from those who completed the study. Relatedly, the sample included a proportionately large number of participants who started the survey but did not finish. It is possible that the attrition of these school counselors reflected an as-yet-unidentified confounding construct that is also related to the variables under study (Balkin & Kleist, 2016). Our sample is nonetheless generally representative of the national school counselor demographic data reported in the recent state of the profession survey of approximately 7,000 school counselors (ASCA, 2021), strengthening the validity and subsequent generalizability of our results.
Another limitation of our study is that all data were cross-sectional and non-experimental. The correlation and mediation analyses used in the study demonstrate the strength of associations between the examined constructs and do not reflect temporal or causal relationships. The cross-sectional design does not allow statistical control for the predictor and outcome variables; thus, it may not accurately specify the effect of the predictor on the mediator (Maxwell & Cole, 2007). Therefore, any inference that emotional intelligence is an antecedent to either transformational leadership or CSCP implementation should be made with caution. Further, all data from this study were collected at the same time and relied upon self-report. As such, common-method variance could have inflated the identified relationships between the constructs.
An important consideration is that this study was designed to focus on individual path coefficients between emotional intelligence, leadership, and CSCP implementation, and it therefore provides limited insight into the complex relationships among latent variables. Likewise, we used Hayes’s PROCESS to examine our mediation model, a procedure that estimates individual paths rather than the overall model fit produced by more sophisticated statistical analyses such as structural equation modeling (SEM). Given that PROCESS is a modeling tool that relies on OLS regression, its effect estimates may be biased because measurement error is not taken into consideration (Darlington & Hayes, 2017).
Suggestions for Future Research
The results of this study have numerous implications for future research. Future studies may explore the relationship between emotional intelligence and other forms of leadership prevalent in the counseling literature, such as charismatic, democratic, or servant leadership (Hilts, Peters, et al., 2022). In addition, because self-report emotional intelligence measures have been described as better suited to assessing intrapersonal processes, and ability emotional intelligence measures have been shown to be related to emotion-focused coping and work performance (Miao et al., 2017a, 2017b), future research may consider incorporating ability and mixed emotional intelligence measurements to examine a causal model of emotional intelligence and transformational leadership (or other forms of leadership).
Future research could extend the unit of analysis in this study (e.g., individual school counselor) and adopt a similar perspective to Lee and Wong (2019) to examine emotional intelligence in teams. Studies could similarly expand the use of self-report emotional intelligence measures and include ability or mixed emotional intelligence measurement. Relatedly, as Miao et al. (2017b) described significant moderator effects of emotional labor demands of jobs on the relationship between self-report emotional intelligence and job satisfaction, future research could assess this in the school counseling context, wherein the emotional labor demands of the work may vary. Given the robust workforce literature grounding associations between emotional intelligence and job performance, job satisfaction, organizational commitment, and resilience in the face of counterproductive behavior in the workplace (Hussein & Yesiltas, 2020), future school counseling research can examine emotional intelligence and other constructs, including ethical decision-making, belonging, attachment, burnout, and systemic factors.
Lastly, as most constructs involved in school counseling practice are latent in nature, we recommend that future scholars consider SEM when investigating overall model fit among the variables of interest. SEM offers more model specification, including goodness of fit of the model to the data (Hayes et al., 2017). It also minimizes bias in mediation effect estimation by accounting for the individual indicators of each latent variable (Kline, 2016).
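As one illustration of this recommendation, a latent-variable mediation model can be specified in lavaan-style syntax using the Python package semopy; the specification, variable names, and data frame below are hypothetical sketches, not a fitted model from this study.

```python
import semopy

# Hypothetical specification: latent EI measured by its four WEIP-S
# subscale scores, with TL mediating the path from EI to CSCP.
spec = """
EI =~ aware_own + manage_own + aware_others + manage_others
TL ~ EI
CSCP ~ TL + EI
"""
model = semopy.Model(spec)
model.fit(df)                     # df: participant-level data (hypothetical)
print(model.inspect())            # path estimates, SEs, and p values
print(semopy.calc_stats(model))   # fit indices (e.g., CFI, TLI, RMSEA)
```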
Conclusion
As an initial examination of the relationship between emotional intelligence and CSCP implementation, as well as the role of school counselors’ transformational leadership in mediating the relationship between emotional intelligence and CSCP implementation, this study was grounded in the empirical scholarship on leadership in both school counseling and allied fields. We found support for our hypothesized model of school counselors’ emotional intelligence and their CSCP implementation, mediated by their engagement in transformational leadership. Our examination yielded evidence in support of the significant mediating role of school counselors’ transformational leadership engagement in the relationship between emotional intelligence and CSCP implementation. Additionally, our results supported the robust reliability of the three instruments in our sample: the WEIP-S (Jordan & Lawrence, 2009), the SCTLI (Gibson et al., 2018), and the SCPIS-R (Clemens et al., 2010; Fye et al., 2020), which can be useful for future school counseling researchers and practitioners. This study serves as an important and necessary step in establishing these relationships, and we anticipate that our results will ground further investigation related to school counselors’ emotional intelligence, leadership practices, and CSCP implementation, including the development of additional measurements.
Conflict of Interest and Funding Disclosure
This study was partially funded by Chi Sigma Iota International’s Excellence in Counseling Research Grants Program.
References
Akos, P., Bastian, K. C., Domina, T., & de Luna, L. M. M. (2019). Recognized ASCA Model Program (RAMP) and student outcomes in elementary and middle schools. Professional School Counseling, 22(1), 1–9. https://doi.org/10.1177/2156759X19869933
American School Counselor Association. (2012). The ASCA national model: A framework for school counseling programs (3rd ed.).
American School Counselor Association. (2017). The school counselor and school counseling programs. https://schoolcounselor.org/Standards-Positions/Position-Statements/ASCA-Position-Statements/The-School-Counselor-and-School-Counseling-Program
American School Counselor Association. (2018). Making data work: An ASCA National Model publication (4th ed.).
American School Counselor Association. (2019a). ASCA school counselor professional standards & competencies. https://www.schoolcounselor.org/getmedia/a8d59c2c-51de-4ec3-a565-a3235f3b93c3/SC-Competencies.pdf
American School Counselor Association. (2019b). The ASCA national model: A framework for school counseling programs (4th ed.).
American School Counselor Association. (2021). ASCA research report: State of the profession 2020. https://www.schoolcounselor.org/getmedia/bb23299b-678d-4bce-8863-cfcb55f7df87/2020-State-of-the-Profession.pdf
Avolio, B. J., Waldman, D. A., & Yammarino, F. J. (1991). Leading in the 1990s: The four I’s of transformational leadership. Journal of European Industrial Training, 15(4), 9–16. https://doi.org/10.1108/03090599110143366
Balkin, R. S., & Kleist, D. M. (2016). Counseling research: A practitioner-scholar approach. American Counseling Association.
Barbuto, J. E., Gottfredson, R. K., & Searle, T. P. (2014). An examination of emotional intelligence as an antecedent of servant leadership. Journal of Leadership & Organizational Studies, 21(3), 315–323. https://doi.org/10.1177/1548051814531826
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. https://doi.org/10.1037/0022-3514.51.6.1173
Bobbio, A., & Manganelli, A. M. (2009). Leadership Self-Efficacy Scale: A new multidimensional instrument. TPM–Testing, Psychometrics, Methodology in Applied Psychology, 16(1), 3–24. https://www.tpmap.org/wp-content/uploads/2014/11/16.1.1.pdf
Bolman, L. G., & Deal, T. E. (1997). Reframing organizations: Artistry, choice, and leadership (2nd ed.). Jossey-Bass.
Bryan, J. A., Young, A., Griffin, D., & Holcomb-McCoy, C. (2017). Leadership practices linked to involvement in school–family–community partnerships: A national study. Professional School Counseling, 21(1), 1–13. https://doi.org/10.1177/2156759X18761897
Clemens, E. V., Carey, J. C., & Harrington, K. M. (2010). The school counseling program implementation survey: Initial instrument development and exploratory factor analysis. Professional School Counseling, 14(2), 125–134. https://doi.org/10.1177/2156759X1001400201
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum. https://www.utstat.toronto.edu/~brunner/oldclass/378f16/readings/CohenPower.pdf
Collaborative for Academic, Social, and Emotional Learning. (n.d.). SEL in school districts. https://casel.org/systemic-implementation/sel-in-school-districts
Constantine, M. G., & Gainor, K. A. (2001). Emotional intelligence and empathy: Their relation to multicultural counseling knowledge and awareness. Professional School Counseling, 5(2), 131–137.
Darlington, R. B., & Hayes, A. F. (2017). Regression analysis and linear models: Concepts, applications, and implementation. Guilford.
Dollarhide, C. T., Gibson, D. M., & Saginak, K. A. (2008). New counselors’ leadership efforts in school counseling: Themes from a year-long qualitative study. Professional School Counseling, 11(4), 262–271. https://doi.org/10.1177/2156759X0801100407
Duquette, K. (2021). “We’re powerful too, you know?” A narrative inquiry into elementary school counselors’ experiences of the RAMP process. Professional School Counseling, 25(1), 1–15. https://doi.org/10.1177/2156759X20985831
Easton, C., Martin, W. E., Jr., & Wilson, S. (2008). Emotional intelligence and implications for counseling self-efficacy: Phase II. Counselor Education and Supervision, 47(4), 218–232. https://doi.org/10.1002/j.1556-6978.2008.tb00053.x
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Fritz, M. S., & MacKinnon, D. P. (2007). Required sample size to detect the mediated effect. Psychological Science, 18(3), 233–239. https://doi.org/10.1111/j.1467-9280.2007.01882.x
Fye, H. J., Memis, R., Soyturk, I., Myer, R., Karpinski, A. C., & Rainey, J. S. (2020). A confirmatory factor analysis of the school counseling program implementation survey. Journal of School Counseling, 18(24). http://www.jsc.montana.edu/articles/v18n24.pdf
Gibson, D. M., Dollarhide, C. T., Conley, A. H., & Lowe, C. (2018). The construction and validation of the school counseling transformational leadership inventory. Journal of Counselor Leadership and Advocacy, 5(1), 1–12. https://doi.org/10.1080/2326716X.2017.1399246
Gray, D. (2009). Emotional intelligence and school leadership. International Journal of Educational Leadership Preparation, 4(4), n4.
Harms, P. D., & Credé, M. (2010). Emotional intelligence and transformational and transactional leadership: A meta-analysis. Journal of Leadership & Organizational Studies, 17(1), 5–17. https://doi.org/10.1177/1548051809350894
Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling [White paper].
Hayes, A. F., Montoya, A. K., & Rockwood, N. J. (2017). The analysis of mechanisms and their contingencies: PROCESS versus structural equation modeling. Australasian Marketing Journal, 25(1), 76–81. https://doi.org/10.1016/j.ausmj.2017.02.001
Hilts, D., Kratsa, K., Joseph, M., Kolbert, J. B., Crothers, L. M., & Nice, M. L. (2019). School counselors’ perceptions of barriers to implementing a RAMP-designated school counseling program. Professional School Counseling, 23(1), 1–11. https://doi.org/10.1177/2156759X19882646
Hilts, D., & Liu, Y. (2022). School counselors’ perceived school climate, leadership practice, psychological empowerment, and multicultural competence before and during COVID-19 [Unpublished manuscript]. Niagara University.
Hilts, D., Liu, Y., Li, D., & Luke, M. (2022). Examining ecological factors that predict school counselors’ engagement in leadership practices. Professional School Counseling, 26(1), 1–14. https://doi.org/10.1177/2156759X221118042
Hilts, D., Peters, H. C., Liu, Y., & Luke, M. (2022). The model for supervision of school counseling leadership. Journal of Counselor Leadership and Advocacy, 1–16. https://doi.org/10.1080/2326716X.2022.2032871
Hur, Y., van den Berg, P. T., & Wilderom, C. P. M. (2011). Transformational leadership as a mediator between emotional intelligence and team outcomes. The Leadership Quarterly, 22(4), 591–603. https://doi.org/10.1016/j.leaqua.2011.05.002
Hussein, B., & Yesiltas, M. (2020). The influence of emotional intelligence on employee’s counterwork behavior and organizational commitment: Mediating role of transformational leadership. Revista de Cercetare si Interventie Sociala, 71, 377–402. https://doi.org/10.33788/rcis.71.23
Jordan, P. J., Ashkanasy, N. M., Härtel, C. E., & Hooper, G. S. (2002). Workgroup emotional intelligence: Scale development and relationship to team process effectiveness and goal focus. Human Resource Management Review, 12(2), 195–214. https://doi.org/10.1016/S1053-4822(02)00046-3
Jordan, P. J., & Lawrence, S. A. (2009). Emotional intelligence in teams: Development and initial validation of the short version of the Workgroup Emotional Intelligence Profile (WEIP-S). Journal of Management and Organization, 15(4), 452–469. https://doi.org/10.1017/S1833367200002546
Jordan, P. J., & Troth, A. C. (2004). Managing emotions during team problem solving: Emotional intelligence and conflict resolution. Human Performance, 17(2), 195–218. https://doi.org/10.1207/s15327043hup1702_4
Kim, H., & Kim, T. (2017). Emotional intelligence and transformational leadership: A review of empirical studies. Human Resource Development Review, 16(4), 377–393. https://doi.org/10.1177/1534484317729262
Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). Guilford.
Kouzes, J. M., & Posner, B. Z. (1995). The leadership challenge: How to keep getting extraordinary things done in organizations (1st ed.). Jossey-Bass.
Lee, C., & Wong, C.-S. (2019). The effect of team emotional intelligence on team process and effectiveness. Journal of Management & Organization, 25(6), 844–859. https://doi.org/10.1017/jmo.2017.43
Lowe, C., Gibson, D. M., & Carlson, R. G. (2017). Examining the relationship between school counselors’ age, years of experience, school setting, and self-perceived transformational leadership skills. Professional School Counseling, 21(1b), 1–7. https://doi.org/10.1177/2156759X18773580
MacKinnon, D. P., Cheong, J., & Pirlott, A. G. (2012). Statistical mediation analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology, Vol. 2. Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 313–331). American Psychological Association.
Mason, E. C. M. (2010). Leadership practices of school counselors and counseling program implementation. National Association of Secondary School Principals Bulletin, 94(4), 274–285. https://doi.org/10.1177/0192636510395012
Maxwell, S. E., & Cole, D. A. (2007). Bias in cross-sectional analyses of longitudinal mediation. Psychological Methods, 12(1), 23–44. https://doi.org/10.1037/1082-989X.12.1.23
Miao, C., Humphrey, R. H., & Qian, S. (2017a). A meta-analysis of emotional intelligence and work attitudes. Journal of Occupational and Organizational Psychology, 90(2), 177–202. https://doi.org/10.1111/joop.12167
Miao, C., Humphrey, R. H., & Qian, S. (2017b). Are the emotionally intelligent good citizens or counterproductive? A meta-analysis of emotional intelligence and its relationships with organizational citizenship behavior and counterproductive work behavior. Personality and Individual Differences, 116, 144–156. https://doi.org/10.1016/j.paid.2017.04.015
Mischel, W., & Shoda, Y. (1998). Reconciling processing dynamics and personality dispositions. Annual Review of Psychology, 49, 229–258. https://doi.org/10.1146/annurev.psych.49.1.229
Mullen, P. R., Gutierrez, D., & Newhart, S. (2018). School counselors’ emotional intelligence and its relationship to leadership. Professional School Counseling, 21(1b), 1–12. https://doi.org/10.1177/2156759X18772989
Mullen, P. R., Newhart, S., Haskins, N. H., Shapiro, K., & Cassel, K. (2019). An examination of school counselors’ leadership self-efficacy, programmatic services, and social issue advocacy. Journal of Counselor Leadership and Advocacy, 6(2), 160–173. https://doi.org/10.1080/2326716X.2019.1590253
Petrides, K. V., Frederickson, N., & Furnham, A. (2004). The role of trait emotional intelligence in academic performance and deviant behavior at school. Personality and Individual Differences, 36(2), 277–293.
https://doi.org/10.1016/S0191-8869(03)00084-9
Petrides, K. V., & Furnham, A. (2000a). Gender differences in measured and self-estimated trait emotional intelligence. Sex Roles, 42, 449–461. https://doi.org/10.1023/A:1007006523133
Petrides, K. V., & Furnham, A. (2000b). On the dimensional structure of emotional intelligence. Personality and Individual Differences, 29(2), 313–320. https://doi.org/10.1016/S0191-8869(99)00195-6
Petrides, K. V., & Furnham, A. (2001). Trait emotional intelligence: Psychometric investigation with reference to established trait taxonomies. European Journal of Personality, 15(6), 425–448. https://doi.org/10.1002/per.416
Preacher, K. J., & Kelley, K. (2011). Effect size measures for mediation models: Quantitative strategies for communicating indirect effects. Psychological Methods, 16(2), 93–115. https://doi.org/10.1037/a0022658
Rahman, S., & Ferdausy, S. (2014). Relationship between emotional intelligence and job performance mediated by transformational leadership. NIDA Development Journal, 54(4), 122–153. https://doi.org/10.14456/ndj.2014.1
Randick, N. M., Dermer, S., & Michel, R. E. (2018). Exploring the job duties that impact school counselor wellness: The role of RAMP, supervision, and support. Professional School Counseling, 22(1).
https://doi.org/10.1177/2156759X18820331
Robinson, D. M., Mason, E. C. M., McMahon, H. G., Flowers, L. R., & Harrison, A. (2019). New school counselors’ perceptions of factors influencing their roles as leaders. Professional School Counseling, 22(1), 1–15. https://doi.org/10.1177/2156759X19852617
Scheffer, J. (2002). Dealing with missing data. Research Letters in the Information and Mathematical Sciences, 3, 153–160. http://hdl.handle.net/10179/4355
Shillingford, M. A., & Lambie, G. W. (2010). Contribution of professional school counselors’ values and leadership practices to their programmatic service delivery. Professional School Counseling, 13(4), 208–217. https://doi.org/10.1177/2156759X1001300401
Sink, C. A., Akos, P., Turnbull, R. J., & Mvududu, N. (2008). An investigation of comprehensive school counseling programs and academic achievement in Washington state middle schools. Professional School Counseling, 12(1). https://doi.org/10.1177/2156759X0801200105
Van Rooy, D. L., & Viswesvaran, C. (2004). Emotional intelligence: A meta-analytic investigation of predictive
validity and nomological net. Journal of Vocational Behavior, 65(1), 71–95.
https://doi.org/10.1016/S0001-8791(03)00076-9
Nov 9, 2022 | Volume 12 - Issue 3
Michael T. Kalkbrenner, Gabriella Miceli
Meeting the mental health needs of students enrolled in science, technology, engineering, and mathematics (STEM) majors is particularly challenging for professional counselors who work in college settings, as STEM students are a subgroup of college students that face unique risks for developing mental health issues. The scarcity of literature on STEM student mental health coupled with their reticence to seek counseling is concerning. An important next step in this line of research is understanding why STEM students are reticent to seek counseling. Accordingly, the present investigators validated STEM students’ scores on the Revised Fit, Stigma, and Value (RFSV) Scale, a screening tool for measuring barriers to seeking counseling. Results also established the capacity of STEM students’ RFSV scores to predict peer-to-peer referrals to the counseling center and revealed demographic differences in barriers to counseling. Findings have implications for enhancing professional counselors’ efforts to support STEM students’ mental health.
Keywords: Revised Fit, Stigma, and Value Scale; STEM; student mental health; barriers to counseling; peer-to-peer referrals
The frequency and complexity of college students presenting with mental health issues is a notable concern for professional counselors who work in university settings (Al-Maraira & Shennaq, 2021; Hong et al., 2022). Students enrolled in science, technology, engineering, and mathematics (STEM) majors are a distinctive group of college students who face unique risks for developing mental health issues (Daker et al., 2021; Kalkbrenner, James, & Pérez-Rojas, 2022; Lipson et al., 2016; Shapiro & Sax, 2011). When compared to their non-STEM counterparts, STEM students are less likely to recognize warning signs of mental distress, and they access mental health support services at lower rates than their peers. In addition, the harsh and competitive academic environment in STEM majors can exacerbate students’ risk for mental health distress (Lipson et al., 2016; Shapiro & Sax, 2011). Moreover, Rice et al. (2015) demonstrated that STEM students exhibit higher levels of maladaptive perfectionism, which is associated with higher levels of mental distress.
Whereas substantial academic and financial resources exist to support STEM students (U.S. Department of Education, 2020), there is a dearth of literature on supporting STEM students’ mental health, which is essential for retaining students and ensuring their success both in and out of the classroom (Kivlighan et al., 2021; Schwitzer et al., 2018). This gap in the literature is concerning, as STEM students are at risk for mental health issues, which can lead to attrition, isolation, and suicide (Daker et al., 2021; Kalkbrenner, James, & Pérez-Rojas, 2022; Lipson et al., 2016). As just one example, academic mental health distress is a significant predictor of lower enrollment and completion rates in STEM fields (Daker et al., 2021). Moreover, Muenks et al. (2020) found that higher levels of psychological vulnerability among STEM students was a significant predictor of lower class attendance, higher dropout intentions, and less class engagement.
The literature is lacking research on why STEM students tend to seek counseling at lower rates than non-STEM students. One of the first steps in supporting STEM students’ mental health is validating scores on a screening tool for identifying barriers to accessing mental health support services among STEM students. Although screening tools that appraise barriers to counseling exist, none of them have been validated with STEM students. The Revised Fit, Stigma, and Value (RFSV) Scale is a screening tool for appraising barriers to counseling that has been normed with non–college-based populations (e.g., adults in the United States; Kalkbrenner & Neukrug, 2018) and college students with mental health backgrounds (e.g., graduate counseling students; Kalkbrenner & Neukrug, 2019), as just a few examples. When compared to the existing normative RFSV Scale samples, STEM students are a distinct college student population who utilize counseling services at lower rates than students in mental health majors (e.g., psychology; Kalkbrenner, James, & Pérez-Rojas, 2022). The psychometric properties of instrumentation can fluctuate significantly between different populations, and researchers and practitioners have an ethical obligation to validate scores on instruments before interpreting the results with untested populations (Mvududu & Sink, 2013). Accordingly, the primary aims of the present study were to validate STEM students’ scores on the RFSV Scale (Kalkbrenner & Neukrug, 2019), test the capacity of RFSV scores for predicting referrals to the counseling center, and investigate demographic differences in STEM students’ RFSV scores.
The Revised Fit, Stigma, and Value (RFSV) Scale
Neukrug et al. (2017) developed and validated scores on the original version of the Fit, Stigma, and Value (FSV) Scale for appraising barriers to counseling among a large sample of human services professionals. The FSV Scale contains three subscales, or latent traits underlying one’s reluctance to seek personal counseling: Fit, Stigma, and Value. Kalkbrenner et al. (2019) validated scores on a more concise version of the FSV Scale, which became known as the RFSV Scale and includes the same three subscales as the original version. Building on this line of research, Kalkbrenner and Neukrug (2019) found a higher-order factor, the Global Barriers to Counseling scale, which is composed of a total composite score across the three single-order subscales (Fit, Stigma, and Value). Accordingly, the Fit, Stigma, and Value subscales can be scored separately and/or users can compute a total score for the higher-order Global Barriers to Counseling scale.
Scores on the RFSV Scale have been validated with a number of non-college populations, including adults in the United States (Kalkbrenner & Neukrug, 2018), professional counselors (Kalkbrenner et al., 2019), counselors-in-training (Kalkbrenner & Neukrug, 2019), and high school students (Kalkbrenner, Goodman-Scott, & Neukrug, 2020). If scores are validated with STEM students, the RFSV Scale could be used to enhance professional counselors’ mental health screening efforts to understand and promote STEM student mental health. Specifically, campus-wide mental health screening has implications for promoting peer-to-peer mental health support. For example, college counselors are implementing peer-to-peer mental health support initiatives by training students to recognize warning signs of mental distress in their peers and, in some instances, refer them to college counseling services (Kalkbrenner, Sink, & Smith, 2020).
Peer-to-Peer Mental Health Support
College students tend to discuss mental health concerns with their peers more often than with a faculty member or student affairs professional (Wawrzynski et al., 2011; Woodhead et al., 2021). To this end, the popularity and utility of peer-to-peer mental health support initiatives have grown in recent years (Kalkbrenner, Lopez, & Gibbs, 2020; Olson et al., 2016). The effectiveness of these peer-to-peer support initiatives can be evaluated by test scores (e.g., scores on mental distress and well-being inventories) as well as non-test criteria (e.g., increases in the frequency of peer-to-peer mental health referrals). For example, Olson et al. (2016) found that college students who attended a Recognize & Refer workshop were significantly more likely to refer a peer to counseling when compared to students who did not attend the workshop. Similarly, Kalkbrenner, Lopez, and Gibbs (2020) found that increases in college students’ awareness of warning signs for mental distress were predictive of substantial increases in the odds of making peer-to-peer referrals to the counseling center.
Peer-to-peer mental health support also has implications for improving college student mental health (Bryan & Arkowitz, 2015; Byrom, 2018; Caporale-Berkowitz, 2022). For example, Bryan and Arkowitz (2015) found that peer-run support programs for depression were associated with significant reductions in depressive symptoms. In addition, Byrom (2018) demonstrated that peer support interventions were associated with increases in college students’ well-being. The synthesized results of the studies cited in this section suggest that peer-to-peer mental health support has utility for promoting mental health among general samples of undergraduate college students. However, to the best of our knowledge, the literature is lacking research on peer-to-peer mental health support with STEM majors, a subgroup of college students with unique mental health needs (Daker et al., 2021; Lipson et al., 2016; Shapiro & Sax, 2011).
The Present Study
College counseling services are a valuable resource for students, as attendance in counseling is associated with increases in GPA and retention rates (Kivlighan et al., 2021; Lockard et al., 2019; Schwitzer et al., 2018). Considering STEM students’ unique vulnerability to mental health distress (Daker et al., 2021; Lipson et al., 2016; Shapiro & Sax, 2011) and their reticence to seek counseling (Kalkbrenner, James, & Pérez-Rojas, 2022), professional counselors who work in university settings need screening tools with validated scores for identifying why STEM students might avoid accessing counseling services. The RFSV Scale has potential to fill this gap in the measurement literature, as a number of recent psychometric studies (e.g., Kalkbrenner, Goodman-Scott, & Neukrug, 2020; Kalkbrenner & Neukrug, 2018) demonstrated support for the psychometric properties of scores on the RFSV Scale with non-college populations. However, the literature is lacking a screening tool for appraising barriers to counseling with validated scores among STEM students. Accordingly, a score validation study with STEM students is an important next step in this line of research, as the internal structure of instrumentation can vary notably between different samples (Mvududu & Sink, 2013). The literature is also lacking research on the potential of peer-to-peer mental health support (e.g., students recognizing and referring a peer to counseling) among STEM students. This is another notable gap in the literature, as college students are more likely to discuss mental health concerns with a peer than with faculty or other university personnel (Wawrzynski et al., 2011; Woodhead et al., 2021). If STEM students’ scores on the RFSV Scale are validated, we will proceed to test the capacity of scores for predicting peer-to-peer referrals to the counseling center as well as examine demographic differences in STEM students’ RFSV scores.
The findings of the present investigation have implications for campus-wide mental health screening, increasing peer-to-peer mental health support, and identifying subgroups of STEM students that might be particularly reticent to seek counseling. To this end, the following research questions (RQs) and hypotheses (Ha) guided the present investigation: RQ1: Is the internal structure of scores on the RFSV Scale confirmed with STEM students? Ha1: The dimensionality of the RFSV Scale will be confirmed with STEM students. RQ2: Are STEM students’ RFSV scores significant predictors of making at least one referral to the counseling center? Ha2: Higher RFSV scores will emerge as a statistically significant positive predictor of STEM students making one or more peer referrals to the counseling center. RQ3: Are there significant demographic differences in FSV barriers to counseling among STEM students? Ha3: Statistically significant demographic differences in STEM students’ RFSV scores will emerge.
Methods
Participants and Procedures
Following IRB approval, first author Michael T. Kalkbrenner obtained an email list from the Office of University Student Records of all students who were enrolled in a STEM major at a research-intensive university with four campus locations in three cities located in the Southwestern United States. A recruitment message was sent out to the email list via Qualtrics Secure Online Survey Platform. A total of 407 prospective participants clicked on the survey link. A response rate could not be calculated, as Qualtrics does not track inaccurate or inactive email addresses. A review of the raw data revealed 41 cases with 100% missing data; these 41 prospective participants likely clicked on the link to the survey and decided not to participate. Following the removal of those 41 cases, less than 20% of data were missing for the remaining 366 cases. Little’s Missing Completely at Random test indicated that the data could be treated as missing completely at random (p = .118), and expectation maximization was used to impute missing values. An investigation of standardized z-scores revealed six univariate outliers (z > ± 3.29), and an inspection of Mahalanobis distances revealed eight multivariate outliers; all 14 outlying cases were removed from the data set, yielding a final sample of N = 352.
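For readers who wish to mirror this kind of data screening in their own work, the sketch below illustrates the univariate (|z| > 3.29) and multivariate (Mahalanobis distance) outlier checks described above. It is a minimal Python sketch, not the authors’ code: the DataFrame and function names are hypothetical, and the Little’s MCAR test and expectation-maximization steps are omitted.

```python
# Minimal sketch of the outlier screening described above, assuming a
# pandas DataFrame `items` whose columns hold the 14 RFSV item responses.
# (Hypothetical names; MCAR testing and EM imputation are not shown.)
import numpy as np
import pandas as pd
from scipy.stats import chi2, zscore

def screen_outliers(items: pd.DataFrame, z_crit: float = 3.29,
                    alpha: float = .001) -> pd.DataFrame:
    # Univariate outliers: any case with a standardized score beyond +/-3.29.
    z = np.abs(zscore(items, axis=0, ddof=1))
    univariate = (z > z_crit).any(axis=1)

    # Multivariate outliers: squared Mahalanobis distance evaluated against
    # a chi-square criterion with df equal to the number of items.
    x = items.to_numpy()
    centered = x - x.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(x, rowvar=False, ddof=1))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    multivariate = d2 > chi2.ppf(1 - alpha, df=items.shape[1])

    # Retain only the cases flagged by neither check.
    return items.loc[~(np.asarray(univariate) | multivariate)]
```

Removing the cases flagged by either check parallels the screening that reduced the sample from 366 to 352 cases.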
Participants ranged in age from 18 to 63 (M = 24.29; SD = 8.59). The demographic profile for gender identity consisted of 65.1% (n = 229) female, 30.4% (n = 107) male, 2.0% (n = 7) non-binary, 1.1% (n = 4) transgender, 0.6% (n = 2) an identity not listed (“please specify”), and 0.9% (n = 3) prefer not to answer. The ethnoracial demographic profile consisted of 2.6% (n = 9) American Indian or Alaska Native; 3.1% (n = 11) Asian or Asian American; 2.0% (n = 7) Black or African American; 48.3% (n = 170) Hispanic, Latinx, or Spanish origin; 2.0% (n = 7) Middle Eastern or North African; 3.4% (n = 12) Multiethnic; 36.6% (n = 129) White or European American; 1.1% (n = 4) Another race, ethnicity, or origin (“please specify”); and 0.9% (n = 3) preferred not to answer. The present sample was composed of notably more diverse groups of STEM students when compared to national estimates of STEM students (National Center for Education Statistics [NCES], 2020). The NCES’s estimates revealed fewer women (33.0%, n = 263,034) and Latinx (12.3%, n = 94,927) STEM students as well as fewer White students (49.8%, n = 385,132). By contrast, the NCES’s national estimates included larger proportions of Black (7.2%, n = 55,642) and Asian (11.0%, n = 85,135) STEM students when compared to the present sample.
Instrumentation
Participants began the demographic questionnaire by indicating their informed consent and then confirming that they met the following inclusion criteria for participation: (a) 18 years or older, (b) enrolled in at least one undergraduate STEM course, and (c) currently a STEM major. The demographic questionnaire concluded with questions about respondents’ age, gender identity, ethnoracial identity, help-seeking history, and whether they had referred one or more peers to the counseling center.
The Revised FSV Scale
The RFSV Scale is a screening tool designed to measure barriers to seeking counseling (Kalkbrenner, Neukrug, & Griffith, 2019). Participants respond to a prompt (“I am less likely to attend counseling because . . . ”) for 14 declarative statements on the following Likert scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, or 5 = Strongly Agree. The RFSV Scale is composed of three subscales, or latent traits underlying one’s reticence to seek counseling: Fit, Stigma, and Value. Scores on the Fit subscale can range from 5 to 25, with higher scores indicating a greater reluctance to seek counseling because one believes the process of counseling is not compatible with one’s personal worldview (e.g., “I couldn’t find a counselor who would understand me”). Scores on the Stigma subscale also range from 5 to 25, with higher scores denoting a greater hesitation to seek counseling due to feelings of embarrassment or shame (e.g., “It would damage my reputation”). Scores on the Value subscale range from 4 to 20, with higher scores indicating a greater disinclination to seek counseling because one believes the effort required would not be worth the potential benefits (e.g., “Counseling is unnecessary because my problems will resolve naturally”).
The Global Barriers to Counseling scale is composed of test takers’ total composite score across the three Fit, Stigma, and Value subscales and provides an overall estimate of a test taker’s sensitivity to barriers to seeking counseling. Scores on the Global Barriers to Counseling scale range from 13 to 65, with higher scores indicating a greater reticence to seek counseling. The collective findings of past investigators demonstrated evidence for the internal structure validity (confirmatory factor analysis) and internal consistency reliability (α = .70 to α = .91) of scores on the RFSV Scale with a number of non-college populations (Kalkbrenner, Goodman-Scott, & Neukrug, 2020; Kalkbrenner & Neukrug, 2018, 2019; Kalkbrenner et al., 2019).
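To make the scoring just described concrete, here is a minimal Python sketch. The item-to-subscale assignment below is hypothetical and shown only for illustration; in practice, the instrument should be scored with the published key.

```python
# Minimal scoring sketch based on the subscale descriptions above. The
# item-to-subscale mapping is hypothetical; responses are the 1-5 Likert codes.
from typing import Dict, List

SUBSCALES: Dict[str, List[int]] = {
    "Fit": [1, 2, 3, 4, 5],      # 5 items -> composite ranges 5-25
    "Stigma": [6, 7, 8, 9, 10],  # 5 items -> composite ranges 5-25
    "Value": [11, 12, 13, 14],   # 4 items -> composite ranges 4-20
}

def score_rfsv(responses: Dict[int, int]) -> Dict[str, int]:
    """Sum item responses into the three subscale composites, plus a
    total composite for the Global Barriers to Counseling scale."""
    scores = {name: sum(responses[i] for i in items)
              for name, items in SUBSCALES.items()}
    scores["Global"] = sum(scores.values())
    return scores

# Example: a respondent who answers "Neither Agree nor Disagree" throughout.
responses = {i: 3 for i in range(1, 15)}
print(score_rfsv(responses))  # {'Fit': 15, 'Stigma': 15, 'Value': 12, 'Global': 42}
```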
Data Analysis
A confirmatory factor analysis (CFA) based on structural equation modeling was computed in IBM SPSS AMOS version 26 to answer the first RQ about the dimensionality of STEM students’ RFSV scores. We used the joint suggestions from Dimitrov (2012) and Schreiber et al. (2006) for acceptable model fit in CFA: chi-square absolute fit index (CMIN; non-significant p-value or χ2 to df < 3), comparative fit index (CFI; .90 to .95 = acceptable fit and > .95 = close fit), root mean square error of approximation (RMSEA; ≤ .08), and the standardized root mean square residual (SRMR; ≤ .08). Internal consistency reliability evidence is another important component of testing a scale’s psychometric properties. Cronbach’s coefficient alpha (α) is the most popular internal consistency reliability estimate; however, its proper use depends on the data meeting several statistical assumptions (McNeish, 2018). Composite internal consistency estimates, such as McDonald’s coefficient omega (ω), tend to produce more stable estimates of score reliability. Accordingly, we computed both α and ω.
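As an illustration of how these decision rules and reliability estimates operate, the sketch below encodes the joint fit criteria cited above (Dimitrov, 2012; Schreiber et al., 2006) along with the standard textbook formulas for coefficient alpha and omega. The function names are our own; this is a teaching sketch, not the authors’ analysis code.

```python
# Helper functions sketching the fit criteria and reliability formulas
# described above; names and structure are illustrative only.
import numpy as np

def fit_acceptable(chi2: float, df: int, cfi: float,
                   rmsea: float, srmr: float) -> bool:
    """Joint criteria: chi2/df < 3, CFI >= .90, RMSEA <= .08, SRMR <= .08."""
    return (chi2 / df < 3) and (cfi >= .90) and (rmsea <= .08) and (srmr <= .08)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total),
    for a cases-by-items array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def mcdonald_omega(loadings: np.ndarray, residual_vars: np.ndarray) -> float:
    """omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual
    variances), for a standardized unidimensional factor solution."""
    common = loadings.sum() ** 2
    return common / (common + residual_vars.sum())
```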
College students are more likely to discuss mental health concerns with their peers than with faculty, staff, or other university personnel (Wawrzynski et al., 2011; Woodhead et al., 2021). Accordingly, college counseling researchers and practitioners are devoting more time to peer-to-peer mental health support initiatives with the goal of increasing peer-to-peer referrals to the counseling center (Kalkbrenner, Sink, & Smith, 2020; Olson et al., 2016). Past investigators (e.g., Kalkbrenner, Neukrug, & Esquivel, 2022) found that the RFSV barriers were significant predictors of peer-to-peer referrals to the counseling center with non-STEM students. To test the generalizability of this finding with STEM students, we conducted a logistic regression analysis to answer the second RQ regarding the capacity of STEM students’ RFSV scores to predict at least one peer referral to the counseling center. STEM students’ interval-level composite scores on the Fit, Stigma, and Value subscales were entered into the model as predictor variables. The criterion variable was quantified on a categorical scale. On the demographic questionnaire, students responded to the following question: “Have you ever referred (recommended) another student to counseling services?” and selected either “0 = never referred a peer to the counseling center” or “1 = referred one or more peers to the counseling center.”
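A minimal sketch of this kind of logistic regression in Python follows, with statsmodels standing in for the authors’ (unspecified) software and synthetic data standing in for the survey responses.

```python
# Illustrative logistic regression with the three composites as predictors
# and a 0/1 peer-referral criterion; data here are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Fit": rng.integers(5, 26, 352),
    "Stigma": rng.integers(5, 26, 352),
    "Value": rng.integers(4, 21, 352),
    "referred_peer": rng.integers(0, 2, 352),  # 0 = never, 1 = one or more
})

X = sm.add_constant(df[["Fit", "Stigma", "Value"]])
y = df["referred_peer"]
model = sm.Logit(y, X).fit()
print(np.exp(model.params))  # odds ratios, i.e., Exp(B), per predictor
```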
A 2 (gender) × 3 (race/ethnicity) × 2 (help-seeking history) multivariate analysis of variance (MANOVA) was computed to investigate the third RQ regarding demographic differences in RFSV barriers among STEM students. The three categorical-level independent variables included gender (male or female), race/ethnicity (Latinx, White, or other ethnicity), and help-seeking history (never attended counseling or attended at least one counseling session). The three interval-level dependent variables included STEM students’ composite scores on the Fit, Stigma, and Value subscales. Discriminant analysis was employed as a post hoc test for MANOVA (Warne, 2014).
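The sketch below shows one way such a factorial MANOVA and its discriminant-analysis post hoc might be set up in Python; statsmodels and scikit-learn are stand-ins for the original software, and the data are synthetic placeholders.

```python
# Illustrative 2 x 3 x 2 factorial MANOVA with a descriptive discriminant
# analysis as the post hoc; all data below are synthetic.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 352
df = pd.DataFrame({
    "Fit": rng.integers(5, 26, n),
    "Stigma": rng.integers(5, 26, n),
    "Value": rng.integers(4, 21, n),
    "gender": rng.choice(["male", "female"], n),
    "ethnicity": rng.choice(["Latinx", "White", "other"], n),
    "help_history": rng.choice(["yes", "no"], n),
})

manova = MANOVA.from_formula(
    "Fit + Stigma + Value ~ gender * ethnicity * help_history", data=df)
print(manova.mv_test())  # Pillai's trace, Wilks' lambda, etc., per effect

# Post hoc descriptive discriminant analysis for a gender main effect:
# group means on the discriminant function mirror the reporting below.
dvs = df[["Fit", "Stigma", "Value"]]
lda = LinearDiscriminantAnalysis().fit(dvs, df["gender"])
function_scores = lda.transform(dvs)[:, 0]
print(pd.Series(function_scores).groupby(df["gender"].to_numpy()).mean())
```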
Results
The RFSV Scale items were entered into a CFA to test the dimensionality of scores with STEM students (RQ1). Excluding the CMIN (χ2 [74] = 257.55, p < .001, χ2 to df = 3.48), results revealed a satisfactory model fit: CFI = .92; RMSEA = .08, 90% CI [.07, .10]; and SRMR = .08. The CMIN is highly sensitive to sample size and tends to flag trivial misfit as statistically significant in samples large enough for CFA (Dimitrov, 2012). Thus, adequate internal structure validity evidence of scores was achieved based on the collective CFI, RMSEA, and SRMR results. The standardized factor loadings were all acceptable-to-strong and ranged from .48 to .90 (see Figure 1, Model 1).
Figure 1
Revised FSV Scale Path Models With Standardized Coefficients

Based on the findings of Kalkbrenner and Neukrug (2019), we computed a higher-order confirmatory factor analysis (HCFA) to test for a Global Barriers to Counseling scale. As expected, the single-factor RFSV model (see Figure 1, Model 2) revealed poor model fit: CMIN (χ2 [77] = 1,013.71, p < .001, χ2 to df = 13.17); CFI = .61; RMSEA = .19, 90% CI [.18, .20]; and SRMR = .13. Accordingly, the theoretical support for a higher-order model (Kalkbrenner & Neukrug, 2019) coupled with the poor fitting single-factor model (see Figure 1, Model 2) indicated that computing an HCFA was appropriate. Except for the CMIN (χ2 [74] = 257.55, p < .001, χ2 to df = 3.48), the higher-order model (see Figure 1, Model 3) displayed a satisfactory model fit: CFI = .92; RMSEA = .08, 90% CI [.07, .10]; and SRMR = .08. Tests of internal consistency reliability revealed satisfactory reliability evidence of scores on the Fit (α = .84, ω = .83), Stigma (α = .86, ω = .87), and Value (α = .79, ω = .79) subscales and the Global Barriers to Counseling scale (α = .88, ω = .88).
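For readers who want to experiment with comparable first-order and higher-order specifications outside of AMOS, a lavaan-style sketch using the open-source semopy package might look like the following. The item names are hypothetical, and this is an illustrative sketch rather than the authors’ model files.

```python
# Hypothetical item names; the authors fit these models in IBM SPSS AMOS.
import semopy

first_order = """
Fit    =~ item1 + item2 + item3 + item4 + item5
Stigma =~ item6 + item7 + item8 + item9 + item10
Value  =~ item11 + item12 + item13 + item14
"""

# Higher-order specification: a Global Barriers factor over the three
# first-order factors, paralleling Figure 1, Model 3.
higher_order = first_order + """
GlobalBarriers =~ Fit + Stigma + Value
"""

model = semopy.Model(higher_order)
model.fit(data)  # `data`: a case-by-item DataFrame (assumed to exist)
print(semopy.calc_stats(model))  # chi-square, CFI, RMSEA, and related indices
```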
STEM students’ RFSV scores were entered into a logistic regression analysis to answer RQ2 regarding the capacity of STEM students’ RFSV scores to predict at least one referral to the counseling center. The logistic regression model was statistically significant, χ2(1) = 80.97, p < .001, Nagelkerke R2 = .064. The odds ratios, Exp(B), revealed that each one-unit increase in STEM students’ scores on the Value subscale (higher scores = less perceived value in counseling) was associated with a decrease in the odds of having made at least one peer-to-peer referral to the counseling center by a factor of .559.
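As a worked reading of that odds ratio (our illustration, not the authors’): odds ratios compound multiplicatively across units of the predictor, so the practical effect of a multi-unit difference can be larger than the single-unit Exp(B) suggests.

```python
# Worked reading of Exp(B) = .559 for the Value subscale (illustrative only).
import math

odds_ratio = 0.559
print(math.log(odds_ratio))  # underlying logit coefficient, ~ -0.582
print(odds_ratio ** 2)       # a two-unit increase multiplies the odds by ~.312
print(1 / odds_ratio)        # equivalently, a one-unit decrease multiplies
                             # the odds of a referral by ~1.79
```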
A factorial MANOVA was computed to answer RQ3 regarding demographic differences in RFSV barriers among STEM students. A significant main effect emerged for gender on the combined dependent variables, F(3, 316) = 5.23, p = .002, Pillai’s Trace = 0.05, η2p = 0.047. The post hoc discriminant analysis (DA) revealed a significant discriminant function, Wilks’ λ = 0.93, χ2 = 23.60, df = 3, canonical correlation = 0.26, p < .001. The standardized canonical discriminant function coefficients showed that the Value factor loaded more strongly on the discriminant function (1.10) than the Stigma (0.17) or Fit (−0.62) factors. The mean discriminant scores on the function were 0.40 for male participants and −0.19 for female participants. In other words, the MANOVA and post hoc DA revealed that male STEM students scored significantly higher on the Value barrier (higher scores reflect greater reluctance to seek counseling) when compared to female STEM students.
A significant main effect also emerged for help-seeking history on the combined dependent variables, F(3, 467) = 4.65, p = .003, Pillai’s Trace = 0.04, η2p = 0.042. The post hoc DA displayed a significant discriminant function, Wilks’ λ = 0.93, χ2 = 24.10, df = 3, canonical correlation = 0.26, p < .001. The standardized canonical discriminant function coefficients showed that the Value factor loaded more strongly on the discriminant function (1.10) than the Stigma (0.01) or Fit (−0.71) factors. The mean discriminant scores on the function were 0.25 for participants without a help-seeking history and −0.29 for participants with a help-seeking history. In other words, the MANOVA and post hoc DA showed that STEM students without a help-seeking history scored significantly higher on the Value barrier than STEM students with a help-seeking history.
Discussion
The purpose of the present study was to validate STEM students’ scores on the RFSV Scale and investigate demographic correlates of the Fit, Stigma, and Value barriers. The CFA results demonstrated that the RFSV Scale and its dimensions were adequately estimated with a sample of STEM students. This finding is consistent with the existing body of literature on the generalizability of scores on the RFSV Scale with a number of non-college populations (e.g., Kalkbrenner, Goodman-Scott, & Neukrug, 2020; Kalkbrenner & Neukrug, 2018). In addition to providing a stringent test of internal structure validity, CFA is also a theory-testing procedure (Mvududu & Sink, 2013). Thus, our CFA results indicated that Fit, Stigma, and Value constitute a tri-dimensional theoretical model of barriers to counseling among STEM students. Consistent with the results of Kalkbrenner and Neukrug (2019), we found support for a higher-order Global Barriers to Counseling scale. The presence of a higher-order factor (see Figure 1, Model 3) indicates that the covariation among the first-order Fit, Stigma, and Value subscales reflects a meta-level latent trait. Collectively, the single-order and higher-order CFA results indicate that Fit, Stigma, and Value are discrete dimensions of an interconnected latent trait. Accordingly, CFA results provided support for the dimensionality of both the single-order RFSV model (see Figure 1, Model 1) and the higher-order model (see Figure 1, Model 3) with STEM students.
STEM students face unique risks for mental health issues, including maladaptive perfectionism as well as intense pressure to perform in harsh and competitive academic environments (Rice et al., 2015; Shapiro & Sax, 2011). These unique risk factors coupled with STEM students’ reticence to seek counseling (Kalkbrenner, James, & Pérez-Rojas, 2022) created a need for a screening tool for appraising why STEM students might avoid accessing counseling services. The results of the CFA and HCFA in the present study begin to address the gap in the literature regarding the lack of a screening tool with validated scores for appraising barriers to counseling among STEM students. Our CFA and HCFA results suggest that college counselors can use the RFSV Scale as one way to understand why STEM students on their campus are reluctant to access counseling services.
Consistent with the findings of Kalkbrenner and Neukrug (2019), we found statistically significant differences in peer-to-peer referrals and demographic differences in STEM students’ scores on the Value barrier. Specifically, increases in STEM students’ belief in the value of attending counseling were associated with significant increases in the odds of making one or more peer referrals to the counseling center, as indicated by the moderate effect size of the finding. It appears that STEM students’ attendance in personal counseling increases their propensity for recommending counseling to their peers. Similar to Kalkbrenner and Neukrug (2018), tests of group demographic differences revealed that STEM students in the present study with a help-seeking history were less sensitive to the Value barrier than STEM students without a help-seeking history. These findings indicate that attendance in counseling might enhance STEM students’ belief that the effort required to attend counseling is worth the benefits. Perhaps experiencing counseling firsthand increases STEM students’ belief in the value of counseling as well as their disposition to refer a peer to counseling. This finding has particularly important implications, as STEM students are a distinct college-based population with unique mental health needs who tend to utilize mental health support services at lower rates than non-STEM students (Kalkbrenner, James, & Pérez-Rojas, 2022; Rice et al., 2015; Shapiro & Sax, 2011). In particular, our results suggest that STEM students who access counseling services tend to see value in the process, and STEM students’ general attitudes about counseling might become more positive as more STEM students participate in counseling.
Also consistent with the findings of Kalkbrenner and Neukrug (2018), we found demographic differences in STEM students’ scores on the Value barrier by gender identity, with males attributing less value to attending counseling than females. Macro- and micro-systemic gender role forces tend to contribute to men’s reticence to seek counseling (Neukrug et al., 2013). These forces might be intensified among male STEM students considering the intersectionality between gender roles and the high-pressure environment in STEM majors to not show vulnerability (Lipson et al., 2016; Neukrug et al., 2013). Specifically, gender-role pressures to avoid showing vulnerability coupled with a high-pressure academic environment might make male STEM students especially reluctant to seek counseling. Men are also less likely than women to recognize and seek treatment for mental health issues (Kalkbrenner & Neukrug, 2018; Neukrug et al., 2013). Thus, it is also possible that male STEM students are less likely to recognize mental distress as a potentially serious health issue, which contributes to them placing less value on the benefits of counseling when compared to their female counterparts. Future research is needed to test these possible explanations for this finding.
Implications
The findings of this study have a number of implications for professional counselors who work in college settings. The CFA and HCFA results extend the psychometric properties of the RFSV Scale to STEM students (RQ1), which is an important contribution to the measurement literature, as the scale offers professional counselors a brief screening tool that usually takes 10 minutes or less to complete. The RFSV Scale can be administered at the systemic level (e.g., all STEM students at a university). Tests of internal structure revealed support for a three-dimensional RFSV model (see Figure 1, Model 1) as well as a higher-order model (see Figure 1, Model 3) with STEM students. Accordingly, professional counselors can administer and score one or both RFSV models depending on their mental health screening goals. The Global Barriers to Counseling scale might have utility for college counselors who are aiming to gather baseline information about STEM students’ general reticence to seek counseling. The three-dimensional model can provide more specific information (Fit, Stigma, and/or Value) about the reasons why STEM students on a particular campus are reluctant to seek counseling.
Our results reveal that increases in STEM students’ scores on the Value subscale were associated with a noteworthy increase in the odds of making a peer-to-peer referral to the counseling center. This finding coupled with STEM students’ vulnerability to mental distress (Daker et al., 2021; Kalkbrenner, James, & Pérez-Rojas, 2022; Lipson et al., 2016; Shapiro & Sax, 2011) suggests that peer-to-peer referrals to mental health support services might be more important than ever before in connecting STEM students in mental distress to support services. Professional counselors who work in college settings can administer the RFSV Scale to STEM students and use the results as one method of informing the content of peer-to-peer mental health support initiatives. If, for example, STEM students on a particular campus score higher on the Value subscale (higher scores denote less value toward counseling), there might be utility in including information about the many benefits of counseling in peer-to-peer outreach initiatives for STEM students. Specifically, it might be beneficial to discuss both the academic and personal benefits associated with attending counseling. For groups of STEM students who score higher on the Stigma scale, college counselors might take a strengths-based perspective by discussing how attending counseling takes courage and strength.
College counselors and student affairs officials can reach STEM students by partnering with STEM faculty and administrators to attend STEM orientations and classes that are held in large lecture halls. College counselors may build relationships with department heads and program directors of STEM programs through sharing empirical evidence on STEM students’ unique mental health needs and their reticence to access mental health support services (Kalkbrenner, James, & Pérez-Rojas, 2022; Lipson et al., 2016; Shapiro & Sax, 2011). College counselors might also discuss how improvements in STEM students’ mental health are associated with greater retention and academic success, which are key values in STEM programs (Daker et al., 2021; Lockard et al., 2019; Meaders et al., 2020; Muenks et al., 2020). As buy-in from STEM department heads and program directors increases, there might be utility in professional counselors regularly making presentations and facilitating discussions about mental health and the benefits of attending counseling during new STEM student orientations. The content of these presentations can be based on the extant literature regarding the socio-personal factors that can place STEM students at risk for mental distress—for example, maladaptive perfectionism (Rice et al., 2015), high-pressure academic environments (Shapiro & Sax, 2011), and difficulty recognizing warning signs for mental distress (Kalkbrenner, James, & Pérez-Rojas, 2022). Once STEM students learn about these socio-personal factors, the presentation content can shift to psychoeducation about the utility of counseling for improving both personal and academic outcomes (Lockard et al., 2019).
The RFSV Scale can also be administered on more targeted levels, for example, to specific groups of STEM students who might be particularly vulnerable to mental health distress. There might be utility in administering the RFSV Scale to male STEM students considering that we found male STEM students were more sensitive to the Value barrier than female STEM students. College counselors can use the RFSV results to identify specific barriers (e.g., Value) that might be making STEM students on their campus unlikely to access counseling services. Such results can be used to inform the curriculum of mental health programming (e.g., peer-to-peer support initiatives). When working with male STEM students, college counselors might consider the intersectionality of the academic pressure (Lipson et al., 2016) and gender-role–based mental health stressors (Neukrug et al., 2013) that these students might be facing. In all likelihood, considering the intersectionality between these socio-personal factors will help college counselors address their clients’ presenting concerns holistically.
Limitations and Future Research
The methodological limitations of this research should be reviewed when considering the implications of the results. The present data were collected from STEM students in three different cities located in the Southwestern United States; however, results might not generalize to STEM students in other geographical locations. Future researchers can validate RFSV scores with national and international samples of STEM students. Moreover, the findings of cross-sectional research designs are correlational, which prevents researchers from drawing conclusions regarding cause and effect. Now that STEM students’ scores on the RFSV Scale are validated, future investigators can extend this line of inquiry by conducting outcome research on the effectiveness of interventions geared toward promoting the utilization of mental health support services among STEM students.
Although factor analytic results in the present study were promising, STEM students are not a homogeneous group. To this end, future investigators can extend this line of research by conducting factorial invariance testing to examine the psychometric equivalence of RFSV scores across subgroups of STEM students. As just one example, past investigators (e.g., Shapiro & Sax, 2011) found differences in STEM students’ mental health by gender identity. Relatedly, our results did not reveal demographic differences by race/ethnicity in STEM students’ vulnerability to barriers to counseling. However, we used a dummy-coding procedure to create racial/ethnic identity comparison groups (Latinx, White, or other ethnicity) that were large enough for statistical analyses. Clustering participants with racial/ethnic identities other than White or Latinx into one group might have masked significant findings within the other race/ethnicity group. It is also possible that some participants identified as White and Latinx, as White is a racial category and Latinx is an ethnic category. Future researchers should examine potential disparities in barriers to counseling among more racially and ethnically diverse samples of STEM students. In an extension of the extant literature on samples of primarily male STEM students, the present study included notably more (> 50%) female STEM students when compared to a national demographic profile of STEM students (NCES, 2020). However, the findings of the present study might not generalize to STEM students with gender identities that extend beyond only male or female. Accordingly, future researchers can test the invariance of RFSV scores with more gender-diverse samples.
The findings of the CFA and HCFA in the present study supported Fit, Stigma, and Value as barriers to counseling among STEM students. However, the deductive nature of quantitative research does not capture the nuances of participants’ lived experiences. One way that future investigators can extend this line of research is through qualitative investigations of STEM students’ attitudes and values about seeking counseling services. Qualitative results might reveal important nuances and insights into STEM students’ propensity to access mental health support services.
Conclusion
To the best of our knowledge, the present investigation is the first to establish the psychometric properties of a barriers-to-counseling tool with STEM students. The results represent an important contribution to the measurement literature, as confirming the internal structure of test scores on an existing measure with a previously untested population is a vital step in demonstrating construct validity. We also found that decreases in STEM students’ reticence to seek counseling were predictive of statistically significant increases in the odds of making a peer referral to the counseling center. In addition, results revealed demographic differences in barriers to counseling among STEM students by gender and help-seeking history. Collectively, our findings suggest that professional counselors who work in college settings can use the RFSV Scale as one way to support STEM college student mental health by identifying why STEM students might be reticent to access counseling services. Supporting STEM students’ mental health has implications for increasing their retention rates, completion rates, and overall psychological well-being.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Al-Maraira, O. A., & Shennaq, S. Z. (2021). Investigation of depression, anxiety and stress levels of health-care students during COVID-19 pandemic. Mental Health Review Journal, 26(2), 113–127. https://doi.org/10.1108/MHRJ-10-2020-0070
Bryan, A. E. B., & Arkowitz, H. (2015). Meta-analysis of the effects of peer-administered psychosocial interventions on symptoms of depression. American Journal of Community Psychology, 55(3–4), 455–471. https://doi.org/10.1007/s10464-015-9718-y
Byrom, N. (2018). An evaluation of a peer support intervention for student mental health. Journal of Mental Health, 27(3), 240–246. https://doi.org/10.1080/09638237.2018.1437605
Caporale-Berkowitz, N. A. (2022). Let’s teach peer support skills to all college students: Here’s how and why. Journal of American College Health, 70(7), 1921–1925. https://doi.org/10.1080/07448481.2020.1841775
Daker, R. J., Gattas, S. U., Sokolowski, H. M., Green, A. E., & Lyons, I. M. (2021). First-year students’ math anxiety predicts STEM avoidance and underperformance throughout university, independently of math ability. NPJ Science of Learning, 6(1), Article 17. https://doi.org/10.1038/s41539-021-00095-7
Dimitrov, D. M. (2012). Statistical methods for validation of assessment scale data in counseling and related fields. American Counseling Association.
Hong, V., Busby, D. R., O’Chel, S., & King, C. A. (2022). University students presenting for psychiatric emergency services: Socio-demographic and clinical factors related to service utilization and suicide risk. Journal of American College Health, 70(3), 773–782. https://doi.org/10.1080/07448481.2020.1764004
Kalkbrenner, M. T., Goodman-Scott, E., & Neukrug, E. S. (2020). Validation of high school students’ scores on the Revised Fit, Stigma, and Value Scale: Implications for school counseling screening. Professional School Counseling, 23(1). https://doi.org/10.1177/2156759X20912750
Kalkbrenner, M. T., James, C., & Pérez-Rojas, A. E. (2022). College students’ awareness of mental disorders and resources: Comparison across academic disciplines. Journal of College Student Psychotherapy, 36(2), 113–134. https://doi.org/10.1080/87568225.2020.1791774
Kalkbrenner, M. T., Lopez, A. L., & Gibbs, J. R. (2020). Establishing the initial validity of the REDFLAGS Model: Implications for college counselors. Journal of College Counseling, 23(2), 98–112. https://doi.org/10.1002/jocc.12152
Kalkbrenner, M. T., & Neukrug, E. S. (2018). Identifying barriers to attendance in counseling among adults in the United States: Confirming the factor structure of the Revised Fit, Stigma, & Value Scale. The Professional Counselor, 8(4), 299–313. https://doi.org/10.15241/mtk.8.4.299
Kalkbrenner, M. T., & Neukrug, E. S. (2019). The utility of the Revised Fit, Stigma, and Value Scale with counselor trainees: Implications for enhancing clinical supervision. The Clinical Supervisor, 38(2), 262–280. https://doi.org/10.1080/07325223.2019.1634665
Kalkbrenner, M. T., Neukrug, E. S., & Esquivel, L. E. (2022). Mental health literacy screening of students in Hispanic Serving Institutions. Journal of Counseling & Development, 100(3), 319–329. https://doi.org/10.1002/jcad.12428
Kalkbrenner, M. T., Neukrug, E. S., & Griffith, S. A. (2019). Appraising counselor attendance in counseling: The validation and application of the Revised Fit, Stigma, and Value Scale. Journal of Mental Health Counseling, 41(1), 21–35. https://doi.org/10.17744/mehc.41.1.03
Kalkbrenner, M. T., Sink, C. A., & Smith, J. L. (2020). Mental health literacy and peer-to-peer counseling referrals among community college students. Journal of Counseling & Development, 98(2), 172–182. https://doi.org/10.1002/jcad.12311
Kivlighan, D. M., III, Schreier, B. A., Gates, C., Hong, J. E., Corkery, J. M., Anderson, C. L., & Keeton, P. M. (2021). The role of mental health counseling in college students’ academic success: An interrupted time series analysis. Journal of Counseling Psychology, 68(5), 562–570. https://doi.org/10.1037/cou0000534
Lipson, S. K., Zhou, S., Wagner, B., Beck, K., & Eisenberg, D. (2016). Major differences: Variations in undergraduate and graduate student mental health and treatment utilization across academic disciplines. Journal of College Student Psychotherapy, 30(1), 23–41. https://doi.org/10.1080/87568225.2016.1105657
Lockard, A. J., Hayes, J. A., Locke, B. D., Bieschke, K. J., & Castonguay, L. G. (2019). Helping those who help themselves: Does counseling enhance retention? Journal of Counseling & Development, 97(2), 128–139. https://doi.org/10.1002/jcad.12244
McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412–433. https://doi.org/10.1037/met0000144
Meaders, C. L., Lane, A. K., Morozov, A. I., Shuman, J. K., Toth, E. S., Stains, M., Stetzer, M. R., Vinson, E., Couch, B. A., & Smith, M. K. (2020). Undergraduate student concerns in introductory STEM courses: What they are, how they change, and what influences them. Journal for STEM Education Research, 3(2), 195–216. https://doi.org/10.1007/s41979-020-00031-1
Muenks, K., Canning, E. A., LaCosse, J., Green, D. J., Zirkel, S., Garcia, J. A., & Murphy, M. C. (2020). Does my professor think my ability can change? Students’ perceptions of their STEM professors’ mindset beliefs predict their psychological vulnerability, engagement, and performance in class. Journal of Experimental Psychology, 149(11), 2119–2144. https://doi.org/10.1037/xge0000763
Mvududu, N. H., & Sink, C. A. (2013). Factor analysis in counseling research and practice. Counseling Outcome Research and Evaluation, 4(2), 75–98. https://doi.org/10.1177/2150137813494766
National Center for Education Statistics. (2020). Science, Technology, Engineering, and Mathematics (STEM) education, by gender. https://nces.ed.gov/fastfacts/display.asp?id=899
Neukrug, E., Britton, B. S., & Crews, R. C. (2013). Common health-related concerns of men: Implications for counselors. Journal of Counseling & Development, 91(4), 390–397. https://doi.org/10.1002/j.1556-6676.2013.00109
Neukrug, E. S., Kalkbrenner, M. T., & Griffith, S.-A. M. (2017). Barriers to counseling among human service professionals: The development and validation of the Fit, Stigma, & Value (FSV) Scale. Journal of Human Services, 37(1), 27–40. https://digitalcommons.odu.edu/cgi/viewcontent.cgi?article=1016&context=chs_pubs
Olson, K., Koscak, G., Foroudi, P., Mitalas, E., & Noble, L. (2016). Recognize and refer: Engaging the Greek community in active bystander training. College Student Affairs Journal, 34(3), 48–61. https://doi.org/10.1353/csj.2016.0018
Rice, K. G., Ray, M. E., Davis, D. E., DeBlaere, C., & Ashby, J. S. (2015). Perfectionism and longitudinal patterns of stress for STEM majors: Implications for academic performance. Journal of Counseling Psychology, 62(4), 718–731. https://doi.org/10.1037/cou0000097
Rincon, B. E., & George-Jackson, C. E. (2016). STEM intervention programs: Funding practices and challenges. Studies in Higher Education, 41(3), 429–444. https://doi.org/10.1080/03075079.2014.927845
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. https://doi.org/10.3200/JOER.99.6.323-338
Schwitzer, A. M., Moss, C. B., Pribesh, S. L., St. John, D. J., Burnett, D. D., Thompson, L. H., & Foss, J. J. (2018). Students with mental health needs: College counseling experiences and academic success. Journal of College Student Development, 59(1), 3–20. https://doi.org/10.1353/csd.2018.0001
Shapiro, C. A., & Sax, L. J. (2011). Major selection and persistence for women in STEM. New Directions for Institutional Research, 2011(152), 5–18. https://doi.org/10.1002/ir.404
U.S. Department of Education. (2020). Science, technology, engineering, and math, including computer science. https://www.ed.gov/stem
Warne, R. (2014). A primer on multivariate analysis of variance (MANOVA) for behavioral scientists. Practical Assessment, Research, and Evaluation, 19, 1–10. https://doi.org/10.7275/sm63-7h70
Wawrzynski, M. R., LoConte, C. L., & Straker, E. J. (2011). Learning outcomes for peer educators: The national survey on peer education. Emerging Issues and Practices in Peer Education, 2011(133), 17–27. https://doi.org/10.1002/ss.381
Woodhead, E. L., Chin-Newman, C., Spink, K., Hoang, M., & Smith, S. A. (2021). College students’ disclosure of mental health problems on campus. Journal of American College Health, 69(7), 734–741. https://doi.org/10.1080/07448481.2019.170653
Michael T. Kalkbrenner, PhD, NCC, is an associate professor at New Mexico State University. Gabriella Miceli, MS, LPC-A, is a doctoral student at New Mexico State University. Correspondence may be addressed to Michael T. Kalkbrenner, 1780 E. University Ave., Las Cruces, NM 88003, mkalk001@nmsu.edu.
Sep 21, 2022 | Book Reviews
by Julius A. Austin and Jude T. Austin II
This is the book I wish I had when I started graduate school.
I thoroughly enjoyed this book. The authors present the material in an authentic voice that makes readers feel accepted and understood at whatever stage of the counseling program they find themselves. The authors readily share their own fears and expectations from when they began graduate school. They are humble and honest about things they wish they had done differently, and they embody a calm and considerate approach with a welcome addition of humor.
The authors begin with an informative section that touches on all the normal concerns and fears you may have as a student just starting a counseling program, and the book progresses through every stage, from your first year all the way through graduation and your first job. In each section, the authors touch on core concepts, common fears, and resources for success. They even provide perspective on pursuing a doctoral degree and guidance for choosing where you would like to start your first job after graduation.
The book’s structure makes it flow easily from chapter to chapter, reflecting the gradual progression of course work and your own personal development and self-care. In each chapter, the authors blend in voices and stories from people currently in the profession. Sharing examples, struggles, development, and successes helps to give credibility to the process and normalize expectations and concerns.
The authors also include a section on emotional maturity. I found it a welcome addition in that it describes several examples of emotional immaturity and characteristics of emotionally mature students. This section provided insight into the emotional stability, emotional intelligence, and self-awareness that are beneficial to success in a counseling program.
The authors also provide sections on dealing with setbacks and managing conflicts. Both contain valuable information, and I don’t believe these topics are discussed frequently enough, or without judgment, in other texts. Setbacks and conflicts are bound to happen in any setting; normalizing them and offering skills and reflections for approaching them strengthens the effectiveness of this text.
Overall, I think this book is valuable, and students weighing entry into a counseling program should consider reading it in full. It would also have been a beneficial assigned text during my first semester of graduate school. It is an easy and informative read that does an excellent job of addressing all those questions that I was either too scared to ask, only asked in my small group of equally confused classmates after class, or, quite honestly, didn’t even have enough information to know I needed to ask.
This book offers amazing insight not just into the workings of a counseling program, but also into how the program changes you as a person: your perspectives, your family dynamics, and your own value system. It normalizes the stress of a graduate program while also highlighting the journey and the beauty of its outcomes.
Austin, J. A., & Austin, J. T., II. (2020). Surviving and thriving in your counseling program. American Counseling Association.
Reviewed by: Megan Ries, NCC
Sep 19, 2022 | Book Reviews
by Samuel T. Gladding
Dr. Samuel T. Gladding’s third edition of Becoming a Counselor: The Light, the Bright, and the Serious offers a genuine and insightful reflection on his experiences both as an individual and as a counselor.
In Becoming a Counselor, Dr. Gladding (PhD, NCC, CCMHC, LPC) describes his experiences in counseling through a series of vignettes. These brief but comprehensive stories are told cohesively through his personal lens as a counseling professional and range from his years growing up in Decatur, Georgia, to teaching within a counseling program, to the COVID-19 pandemic in 2020.
The book is divided into 17 sections, each containing a series of vignettes and stories pertaining to a specific theme of counseling and Dr. Gladding’s experiences. Each section begins with a poem, composed by Dr. Gladding, that gives a brief glimpse into what the section will entail. The third edition expands on previous editions with 35 additional vignettes, as well as an introduction that explains Dr. Gladding’s personal worldview. In this introduction, he acknowledges his own biases and the experiences that shaped him as a counselor, providing crucial self-disclosure before delving into his personal stories.
Limitations of Becoming a Counselor include the highly personal nature of the majority of the vignettes. Although the themes established within this volume help generalize its lessons beyond Dr. Gladding’s experiences, the book takes an autobiographical tone rather than an educational one.
Nonetheless, fellow mental health professionals can use this book to guide their own journeys through professional development and leadership. Dr. Gladding’s conversational tone leads the reader toward a deeper understanding of seemingly superficial events.
The primary strength of this book is the universality of its themes. Through brief, interwoven stories about his experiences, Dr. Gladding shares both ordeals and successes in vignettes that can easily be incorporated into a class lecture; practicum or internship courses in particular would find these short stories useful material for classroom discussion. Another strength is the book’s treatment of seemingly enormous and intimidating topics, such as finding success in academia, taking the teeth out of them with fun, good-humored titles for the individual vignettes. Although many books are professional in nature, it is rare to find one that also carries a sense of humor; even so, Dr. Gladding does not shy away from the more serious topics of counseling.
If you read this book, you will undoubtedly find it difficult to put down. At times it reads more as a story than a text, which will more than likely lead to you finishing it by the end of the day.
Although not entirely educational in nature, Becoming a Counselor carries lessons from an autobiographical standpoint that many counselors will value. This edition was one of Dr. Gladding’s final works prior to his passing in December 2021. In it, he encourages readers to carry levity, insight, and seriousness through their own experiences as both counselors and individuals.
Gladding, S. T. (2021). Becoming a counselor: The light, the bright, and the serious (3rd ed.). American Counseling Association Foundation.
Reviewed by: Katie Michaels, MA, NCC, ALC