Counseling Self-Efficacy, Quality of Services and Knowledge of Evidence-Based Practices in School Mental Health

Bryn E. Schiele, Mark D. Weist, Eric A. Youngstrom, Sharon H. Stephan, Nancy A. Lever

Counseling self-efficacy (CSE), defined as one’s beliefs about his or her ability to effectively counsel a client, is an important precursor of effective clinical practice. While research has explored the association of CSE with variables such as counselor training, aptitude and level of experience, little attention has been paid to CSE among school mental health (SMH) practitioners. This study examined the influence of quality training (involving quality assessment and improvement, modular evidence-based practices, and family engagement/empowerment) versus peer support and supervision on CSE in SMH practitioners, and the relationship between CSE and practice-related variables. ANCOVA indicated similar mean CSE changes for counselors receiving the quality training versus peer support. Regression analyses indicated that regardless of condition, postintervention CSE scores significantly predicted quality of practice, knowledge of evidence-based practices (EBP) and use of EBP specific to treating depression. Results emphasize the importance of CSE in effective practice and the need to consider mechanisms to enhance CSE among SMH clinicians.

 

Keywords: self-efficacy, school mental health, evidence-based practices, counselor training, depression

 

 

There are major gaps between the mental health needs of children and adolescents and the availability of effective services to meet such needs (Burns et al., 1995; Kataoka, Zhang, & Wells, 2002). This recognition is fueling efforts to improve mental health services for youth in schools (Mellin, 2009; Stephan, Weist, Kataoka, Adelsheim, & Mills, 2007). At least 20% of all youth have significant mental health needs, with roughly 5% experiencing substantial functional impairment (Leaf, Schultz, Kiser, & Pruitt, 2003). Further, less than one third of children with such mental health needs receive any services at all.

 

The President’s New Freedom Commission on Mental Health (2003) documented the position of schools as a point of contact and universal natural setting for youth and families, recognizing schools as a key factor in the transformation of child and adolescent mental health services (Stephan et al., 2007). In the past 2 decades, there has been a significant push for full-service schools that expand beyond a sole focus on education, and employ community mental health practitioners to respond to the emotional and behavioral needs of students (Conwill, 2003; Dryfoos, 1993; Kronick, 2000). The education sector is the most common provider of mental health services for children and adolescents (Farmer, Burns, Phillips, Angold, & Costello, 2003), with 70%–80% of youth who receive any mental health services obtaining them at school (Burns et al., 1995; Rones & Hoagwood, 2000). Therefore, attention must be paid to the quantity, quality and effectiveness of school mental health (SMH) services.

 

School Mental Health

 

In recent years, SMH programs, supported by both school staff (e.g., school psychologists, social workers, counselors) and school-based community mental health clinicians, have emerged as a promising approach to the provision of mental health services for students and families (Weist, Evans, & Lever, 2003). The growth of these programs has facilitated investigation of what constitutes high-quality SMH service provision (Nabors, Reynolds, & Weist, 2000; Weist et al., 2005). This work has been supported and furthered by the Center for School Mental Health, a federally funded technical assistance and training program to advance SMH programs within the United States. In collaboration with other SMH centers (e.g., UCLA Center for Mental Health in Schools) and interdisciplinary networks focused on school health, consensus was reached to develop a guiding framework defining best practices in SMH (Weist et al., 2005). These principles call for appropriate service provision for children and families, implementation of interventions to meet school and student needs, and coordination of mental health programs in the school with related community resources, among other things. For further explication of the framework and its development, see Weist et al. (2005).

 

Simultaneously, research developments through the Center for School Mental Health facilitated implementation of modular evidence-based practices (EBP; see Chorpita, Becker & Daleiden, 2007; Chorpita & Daleiden, 2009). A modular approach for intervention involves training clinicians in core, effective strategies for disorders frequently encountered in children (e.g., attention-deficit/hyperactivity disorder [ADHD], anxiety, depression, disruptive behavior disorders [DBD]). This approach enables individualized, flexible implementation of evidence-based strategies without the constraints of a manualized approach (Curry & Reinecke, 2003). The third guiding component to enhance quality in SMH practices is development of strategies to effectively engage and empower families (see Hoagwood, 2005).

 

Despite the development of such a framework, SMH clinicians often struggle to implement high-quality, evidence-based services (Evans et al., 2003; Evans & Weist, 2004). These clinicians are constrained by a lack of sufficient time, training in EBP, appropriate supervision, and internal and external resources (Shernoff, Kratochwill, & Stoiber, 2003). For instance, a survey of Baltimore SMH clinicians by Walrath et al. (2004) suggested a clinician-to-student ratio of 1:250, and estimated that clinicians would need to add roughly 79 clinical hours per week to meet the mental health needs of their students. Additionally, the school environment is often characterized as chaotic, hectic and crisis-driven (Langley, Nadeem, Kataoka, Stein, & Jaycox, 2010), with SMH clinicians citing difficulties implementing EBP given the schedules of students. Because these challenges limit the use of EBP in daily SMH practice, researchers are now evaluating the influences on successful delivery of EBP in schools, including the personal qualities of SMH professionals (e.g., attitudes, beliefs, skills, training; Berger, 2013), as well as environmental factors (e.g., school administrative support, access to community resources, sufficient space for practice; Powers, Edwards, Blackman, & Wegmann, 2013) that may predict high-quality services (see Weist et al., 2014).

 

Previous work examining factors related to the provision of evidence-based SMH services by SMH clinicians suggested that the highest-rated facilitators of effective SMH practice were personal characteristics (e.g., desire to deliver mental health services), attitudes and openness toward use of EBP, and adequate training (Beidas et al., 2012; Langley et al., 2010). Alternatively, SMH clinicians reported a number of administrative, school site and personal barriers as significant obstacles to appropriate service delivery; such barriers include lack of sufficient training, overwhelming caseload, job burnout and personal mental health difficulties (Langley et al., 2010; Suldo, Friedrich, & Michalowski, 2010).

 

While researchers have evaluated the influence of SMH provider personal characteristics in relation to the delivery of high-quality SMH services, little attention has been paid to the importance of counseling self-efficacy (CSE). CSE is widely accepted as an important precursor to competent clinical practice (Kozina, Grabovari, De Stefano, & Drapeau, 2010). Further, building CSE is considered an important strategy in active learning when providing training in evidence-based therapies (Beidas & Kendall, 2010), and CSE in EBP is believed to be essential to implementation (Aarons, 2005). However, researchers have yet to systematically include measures of CSE in studies of EBP utilization by SMH providers.

 

Self-Efficacy

 

     Social-cognitive theory and its central construct, self-efficacy, have received much attention in the psychological literature, with more than 10,000 studies including these as central variables in the past 25 years (Judge, Jackson, Shaw, Scott, & Rich, 2007). Self-efficacy is defined as an individual’s beliefs about his or her ability to achieve desired levels of performance (Bandura, 1994), and it plays a key role in the initiation and maintenance of human behavior (Iannelli, 2000). Given the influence of self-efficacy expectancies on performance, researchers have evaluated how self-efficacy impacts a variety of action-related domains, including career selection (e.g., Branch & Lichtenberg, 1987; Zeldin, Britner, & Pajares, 2008), health-behavior change (e.g., Ramo, Prochaska, & Myers, 2010; Sharpe et al., 2008) and work-related performance (e.g., Judge et al., 2007; Stajkovic & Luthans, 1998). Specific to the mental health field, previous investigations have focused on how self-efficacy is related to counseling performance.

 

Counseling Self-Efficacy

The construct of CSE is defined as an individual’s beliefs about his or her ability to effectively counsel a client in the near future (Larson & Daniels, 1998). Studies of the structure and influence of CSE among a variety of mental health professionals, including counseling trainees, master’s-level counselors, psychologists, school counselors and students from related professions (e.g., clergy, medicine), have yielded mixed findings. Social desirability, counselor personality, aptitude, achievement (Larson et al., 1992) and counselor age (Watson, 2012) have shown small to moderate associations with CSE. CSE also is related to external factors, including the perceived and objective work environment, supervisor characteristics, and level or quality of supervision (Larson & Daniels, 1998).

 

However, the relationship of CSE with level of training is unclear. For the most part, CSE is stronger for individuals with at least some counseling experience than for those with none (Melchert, Hays, Wiljanen, & Kolocek, 1996; Tang et al., 2004). While the amount of training and education obtained have been reported as statistically significant predictors of degree of CSE (Larson & Daniels, 1998; Melchert et al., 1996), more recent work has not supported the existence of such predictive relationships (Tang et al., 2004). It also has been suggested that once a counselor has obtained advanced graduate training beyond the master’s level, the influence of experience on CSE becomes rather minimal (Larson, Cardwell, & Majors, 1996; Melchert et al., 1996; Sutton & Fall, 1995).

 

Some work has been done to evaluate interventions aimed at enhancing CSE by utilizing the four primary sources of self-efficacy, as defined by Bandura (1977; i.e., mastery, modeling, social persuasion, affective arousal). In two studies involving undergraduate recreation students, Munson, Zoerink, and Stadulis (1986) found that modeling with role-play and visual imagery enhanced CSE more than a wait-list control condition did. Larson et al. (1999) attempted to extend these findings utilizing a sample of practicum counseling trainees, and found that trainees’ self-evaluation of success in the session moderated the level of CSE postintervention, with perception of success significantly impacting the potency of the role-play scenarios. The same effect was not found for individuals in the videotape condition.

 

In addition to impacting clinician performance, CSE has been reported to indirectly impact client outcomes (Urbani et al., 2002); for example, CSE has been associated with more positive outcomes for clients, more positive self-evaluations and fewer anxieties regarding counseling performance (Larson & Daniels, 1998). Increasing CSE therefore may benefit client outcomes in part by reducing clinicians’ anxiety, which is reported to impair clinical judgment and performance (Urbani et al., 2002). While there is some evidence that CSE is influential for client outcomes, minimal work has been done to evaluate this relationship.

 

CSE has been evaluated in a variety of samples; however, little work has been done to evaluate CSE of SMH practitioners and the factors that play into its development. Additionally, although some investigation has been conducted on factors that impact SMH practitioners’ abilities and performance, CSE is an element that seldom has been studied.

 

The current study aimed to examine the influence of a quality assessment and improvement (QAI) intervention on CSE in SMH practitioners, as well as the importance of CSE in regard to practice-related domains. The primary question of interest was, Does an intervention focused on QAI (target) result in higher levels of CSE than a comparison condition involving a focus on professional wellness (W) and supervision (control)? We investigated the influence of differential quality training and supervision on one’s level of CSE by comparing postintervention CSE scores between conditions after evaluating preintervention equivalency of CSE levels. We hypothesized that long-term exposure to the QAI intervention, with its emphasis on quality improvement, family engagement/empowerment and modular EBP, would result in significantly higher reports of CSE than exposure to the comparison intervention. Based on previous research, specific counselor characteristics (e.g., age, experience) might predict CSE, such that individuals who are older and have more experience counseling children and adolescents would report higher CSE (Melchert et al., 1996; Tang et al., 2004; Watson, 2012). Thus, when evaluating training effects, these variables were included as covariates in the analysis of the relation between CSE and training.

 

Secondarily, this study aimed to evaluate the relation of CSE to practice-related variables following exposure to the intervention. For this aim, the research question was, Does postintervention level of CSE predict quality of self-reported SMH practice, as well as knowledge and use of EBP? We hypothesized that level of CSE would predict quality of SMH practice, as well as attitudes toward, knowledge of and use of EBP, regardless of intervention condition.

 

Method

 

This article stems from a larger evaluation of a framework to enhance the quality of SMH (Weist et al., 2009), funded by the National Institute of Mental Health (#1R01MH71015; 2003–2007; M. Weist, PI). As part of a 12-year research program on quality and EBP in SMH, researchers conducted a two-year, multisite randomized controlled trial, involving community agencies in Delaware, Maryland and Texas, of a framework for high-quality and effective practice in SMH (EBP, family engagement/empowerment and systematic QAI) as compared to an enhanced treatment-as-usual condition focused on personal and school staff wellness. Only the methods pertaining to the aims of the current study are included here (see Stephan et al., 2012, and Weist et al., 2009, for more comprehensive descriptions).

 

Participants

A sample of 72 SMH clinicians (i.e., clinicians employed by community mental health centers to provide clinical services within the school system) from the three SMH sites participated for the duration of the study (2004–2006), and provided complete data for all study measures via self-report. All clinicians were employed by community-based agencies with an established history of providing SMH prevention and intervention services to elementary, middle and high school students in both general and special education programs.

 

A total of 91 clinicians participated over the course of the study, with 64 in Year 1 and 66 in Year 2; 27 clinicians were involved only in Year 2. Of the Year 1 sample (35 QAI and 29 W), 24 participants did not continue into Year 2 (13 QAI and 11 W), and dropout rates did not differ between conditions (37% QAI versus 38% comparison). The current analyses focused on individuals who had completed at least one year of the study and had submitted pre- and postintervention measures. The 72 participants were predominantly female (61 women, 11 men) and were 36 years old on average (SD = 11.03). In terms of race and ethnicity, participants identified as Caucasian (55%), African American (26%), Hispanic (18%) and Other (1%). Participants reported the following educational levels: graduate degree (83%), some graduate coursework (13%), bachelor’s degree (3%), and some college (1%). On average, clinicians had roughly 6 years of prior counseling experience and had worked for their current agency for 3 years. The obtained sample is reflective of SMH practitioners throughout the United States (Lewis, Truscott, & Volker, 2008).

 

Measures

 

     Counseling self-efficacy. Participants’ CSE was measured using the Counselor Self-Efficacy Scale (Sutton & Fall, 1995). The measure was designed to be used with school counselors, and was created using a sample of public school counselors in Maine. Sutton and Fall modified a teacher efficacy scale (Gibson & Dembo, 1984), resulting in a 33-item measure that reflected CSE and outcome expectancies. Results of a principal-components factor analysis demonstrated initial construct validity, indicating a three-factor structure, and the internal consistency of the three factors was adequate (.67–.75). However, the structure of the measure has received criticism, with some researchers arguing that the third factor does not measure outcome expectancies as defined by social-cognitive theory (Larson & Daniels, 1998). Thus, we used the entire 33-item scale as a measure of overall CSE. Respondents were asked to rate each item using a 6-point Likert scale (1 = strongly disagree, 6 = strongly agree). We made slight language modifications to make the scale more applicable to the work of this sample (Weist et al., 2009); for instance, guidance program became counseling program. CSE was measured in both conditions at the beginning and end of Years 1 and 2 of the intervention program.

 

     Quality of school mental health services. The School Mental Health Quality Assessment Questionnaire (SMHQAQ) is a 40-item research-based measure developed by the investigators of the larger study to assess 10 principles for best practice in SMH (Weist et al., 2005; Weist et al., 2006), including the following: “Programs are implemented to address needs and strengthen assets for students, families, schools, and communities” and “Students, families, teachers and other important groups are actively involved in the program’s development, oversight, evaluation, and continuous improvement.”

 

At the end of Year 2, clinicians rated the degree to which each principle was present in their own practice on a 6-point Likert scale, ranging from not at all in place to fully in place. Given that results from a principal components analysis indicated that all 10 principles loaded strongly on a single component, analyses focused primarily on total scores of the SMHQAQ. Aside from factor analytic results, validity estimates are unavailable. Internal consistency as measured by coefficient alpha was very strong (.95).

 

     Knowledge and use of evidence-based practices. The Practice Elements Checklist (PEC) is based on the Hawaii Department of Health’s comprehensive summary of top modular EBP elements (Chorpita & Daleiden, 2007). Principal investigators of the larger study created the PEC in consultation with Bruce Chorpita of the University of California, Los Angeles, an expert in mental health technologies for children and adolescents. The PEC asks clinicians to provide ratings of the eight skills found most commonly across effective treatments for each of four disorder areas (ADHD, DBD, depression and anxiety). Using 6-point Likert scales, respondents rated their current knowledge of each practice element (1 = none, 6 = significant), the frequency with which they used the element in their own practice, and the frequency with which they treated children whose primary presenting issue fell within each of the four disorder areas (1 = never, 6 = frequently).

 

In addition to total knowledge and total frequency subscales (scores ranging from 4–24), research staff calculated four knowledge and four frequency subscale scores (one for each disorder area) by averaging responses across practice elements within each disorder area (scores ranging from 1–6). A total PEC score also was calculated by adding all subscale scores, resulting in a total score ranging from 16–92. Although this approach resulted in each item being counted twice, it allowed examination of how total knowledge and skill usage, as well as skills in specific disorder areas, related to CSE. While internal consistencies were excellent for each of the subscales, ranging from .84 to .92, the validity of the measure has yet to be evaluated. Clinicians completed the PEC at the end of Year 2.
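To make the subscale arithmetic concrete, the sketch below averages eight knowledge ratings within each disorder area and sums the four disorder-area subscales into a total knowledge score. The ratings are made-up stand-ins; the actual PEC item content follows the Chorpita and Daleiden practice-element summaries and is not reproduced here.

```python
# Illustrative scoring sketch for the PEC knowledge ratings (hypothetical data;
# actual practice-element labels are not reproduced here). Each disorder area
# has eight elements rated on a 1-6 scale.

from statistics import mean

knowledge_ratings = {
    "ADHD":       [5, 4, 4, 6, 5, 4, 3, 5],
    "depression": [4, 4, 5, 5, 3, 4, 4, 5],
    "DBD":        [5, 5, 4, 4, 4, 3, 5, 4],
    "anxiety":    [3, 4, 4, 5, 4, 4, 3, 4],
}

# One knowledge subscale per disorder area: the mean of its eight ratings (range 1-6).
knowledge_subscales = {area: mean(scores) for area, scores in knowledge_ratings.items()}

# Total knowledge: the sum of the four subscale scores (range 4-24).
total_knowledge = sum(knowledge_subscales.values())

print(knowledge_subscales)
print(round(total_knowledge, 2))
```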

 

Study Design

SMH clinicians were recruited from their community agencies approximately 1 month prior to the initial staff training. After providing informed consent, clinicians completed a set of questionnaires assessing demographic information, current level of training and CSE, and were randomly assigned to the QAI intervention or the W intervention. Four training events were provided for participants in both conditions (at the beginning and end of both Years 1 and 2). During the four training events, individuals in the QAI condition received training in the three elements reviewed previously (systematic QAI, modular EBP, and family engagement/empowerment). For individuals in the W (i.e., comparison) condition, training events focused on general staff wellness, including stress management, coping strategies, relaxation techniques, exercise, nutrition and burnout prevention.

 

At each site, senior clinicians (i.e., licensed mental health professionals with a minimum of a master’s degree and 3 years’ experience in SMH) were chosen to serve as project supervisors for the condition to which they were assigned. These clinicians were not considered participants, and maintained their positions for the duration of the study. Over the course of the project, each project supervisor dedicated one day per week to the study, and was assigned a group of roughly 10 clinicians to supervise. Within the QAI condition, supervisors held weekly meetings with small groups of five clinicians to review QAI processes and activities in their schools, as well as strategies for using the evidence base; in contrast, there was no study-related school support for staff in the W condition.

 

Results

 

Preliminary Analyses and Scaling

     Analyses were conducted using SPSS, version 20; tests of statistical significance were conducted with a Bonferroni correction (Cohen, Cohen, West, & Aiken, 2003), resulting in the use of an alpha of .0045, two-tailed. To facilitate comparisons between variables, staff utilized a scaling method known as Percentage of Maximum Possible (POMP) scores, developed by Cohen, Cohen, Aiken, and West (1999). Using this method, raw scores are transformed so that they range from 0 to 100% of the maximum possible score. Unlike z scores, which assume a normal distribution, POMP scoring makes no assumptions about the shape of the distribution. POMP scores provide an easily understood and interpreted metric and, cumulatively, help build a basis for agreement on the size of meaningful effects in the domain of interest (i.e., interventions to enhance quality of services and use of EBP; Cohen et al., 1999).
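As a minimal illustration of this scaling (a sketch with made-up values, not data from the study), the POMP transformation and the Bonferroni-adjusted alpha can be computed as follows. The .0045 threshold is consistent with dividing .05 across the 11 regressions reported in Table 2, although the original correction plan is not spelled out here.

```python
# Minimal sketch of the POMP (Percentage of Maximum Possible) rescaling
# described by Cohen, Cohen, Aiken, and West (1999). The example raw score
# is illustrative, not a value from the study.

def pomp(raw_score: float, scale_min: float, scale_max: float) -> float:
    """Rescale a raw score to the percentage of the maximum possible score."""
    return 100.0 * (raw_score - scale_min) / (scale_max - scale_min)

# Example: a 33-item CSE total on a 1-6 Likert scale ranges from 33 to 198.
cse_total = 152                             # hypothetical respondent total
print(round(pomp(cse_total, 33, 198), 1))   # -> 72.1 (% of maximum possible)

# Bonferroni-adjusted alpha: .05 / 11 tests ~= .0045, two-tailed, matching the
# reported threshold (assuming 11 is the intended number of tests).
alpha = 0.05 / 11
print(round(alpha, 4))                      # -> 0.0045
```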

 

Primary Aim

     Initial analyses confirmed pretreatment equivalence for the two conditions, t(72) = –.383, p = .703. For individuals in the QAI condition, preintervention CSE scores averaged 71.9% of maximum possible (SD = .09), while those in the comparison condition averaged 71.3% of maximum possible (SD = .08). These scores are comparable to levels of CSE observed in counseling psychologists with similar amounts of prior experience (Melchert et al., 1996).

 

Correlation analyses suggested that pretreatment CSE was significantly associated with age (r = .312, p = .008), race (r = –.245, p = .029), years of counseling experience (r = .313, p = .007) and years with the agency (r = .232, p = .048). Thus, these variables were included as covariates in an analysis of covariance (ANCOVA) evaluating changes in CSE between the QAI and comparison conditions. Results indicated a nonsignificant difference between conditions in the change in CSE from pre- to postintervention, F(1, 66) = .013, p = .910. For individuals in the QAI condition, postintervention CSE scores averaged 73.1% of maximum possible (SD = .07), and for individuals in the comparison condition, CSE scores averaged 72.8% of maximum possible (SD = .08). Additionally, collapsing across conditions, the change in CSE from pre- to postintervention was nonsignificant, F(1, 66) = .001, p = .971; clinicians reported roughly similar levels of CSE at pre- and postintervention time points (72% vs. 73% of maximum possible); see Table 1.

 

 

Table 1

 

Analysis of Covariance (ANCOVA) Summary of Change in CSE

 

Source                        df        F        p     Partial η2
CSE                            1     .001     .971       .000
CSE*Condition                  1     .013     .910       .000
CSE*Age                        1     .281     .598       .004
CSE*Race                       1    1.190     .279       .018
CSE*Years of Experience        1     .032     .859       .000
CSE*Years with Agency          1     .003     .955       .000
Error                         66

Note. N = 72.
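For readers who want to see the shape of this analysis outside of SPSS, the following is a minimal sketch of an ANCOVA-style model in Python. The data file and column names are hypothetical, and the specification (postintervention CSE regressed on condition, preintervention CSE and the covariates) approximates, rather than reproduces, the analysis summarized in Table 1.

```python
# Sketch of an ANCOVA-style test of the condition effect on postintervention
# CSE, adjusting for preintervention CSE and the covariates named in the text.
# File name and column names (cse_pre, cse_post, condition, ...) are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("clinician_cse.csv")   # hypothetical file: one row per clinician

model = smf.ols(
    "cse_post ~ cse_pre + C(condition) + age + C(race) "
    "+ years_experience + years_with_agency",
    data=df,
).fit()

# Type II ANOVA table: the C(condition) row tests the between-condition
# difference in postintervention CSE after adjustment.
print(sm.stats.anova_lm(model, typ=2))
```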

 

 

Secondary Aim

     To investigate the influence of level of CSE on quality and practice elements in counseling, a series of individual regressions was conducted, with level of postintervention CSE as the predictor variable and, in separate analyses, indicators of attitudes toward EBP, knowledge and use of EBP, and quality of SMH services as the outcome variables.

 

Table 2 shows that level of postintervention CSE significantly predicted the following postintervention variables: SMHQAQ quality of services (R2 = .328, F(1, 60) = 29.34, p < .001); knowledge of EBP for ADHD (R2 = .205, F(1, 46) = 11.54, p = .001), depression (R2 = .288, F(1, 46) = 18.17, p < .001), DBD (R2 = .236, F(1, 46) = 13.92, p = .001) and anxiety (R2 = .201, F(1, 46) = 10.81, p = .002); usage of EBP specific to treating depression (R2 = .301, F(1, 46) = 19.34, p < .001); and total knowledge of EBP (R2 = .297, F(1, 44) = 18.20, p < .001). Postintervention CSE did not significantly predict usage of EBP for ADHD (R2 = .010, F(1, 45) = .457, p = .502), DBD (R2 = .024, F(1, 45) = 1.100, p = .300) or anxiety (R2 = .075, F(1, 43) = 3.487, p = .069), or total usage of EBP (R2 = .090, F(1, 43) = 4.244, p = .045).

 

 

Table 2

 

Results of Linear Regressions Between Level of Postintervention CSE and Outcome Variables

 

Variables                      Beta       R2    Adjusted R2         F         p
SMH Quality                   0.573    0.328          0.317    29.337     0.000
EBP ADHD – Knowledge          0.452    0.205          0.187    11.583     0.001
EBP ADHD – Usage              0.100    0.010         –0.012     0.457     0.502
EBP Depression – Knowledge    0.536    0.288          0.272    18.168     0.000
EBP Depression – Usage        0.548    0.301          0.285    19.337     0.000
EBP DBD – Knowledge           0.486    0.236          0.219    13.922     0.001
EBP DBD – Usage               0.154    0.024          0.002     1.100     0.300
EBP Anxiety – Knowledge       0.448    0.201          0.182    10.811     0.002
EBP Anxiety – Usage           0.274    0.075          0.053     3.487     0.069
EBP Total Knowledge           0.545    0.297          0.281    18.197     0.000
EBP Total Usage               0.300    0.090          0.069     4.244     0.045

 

Note. To control for experiment-wise error, a Bonferroni correction was used and significance was evaluated at the 0.0045 level.
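As a minimal sketch of the type of single-predictor regression summarized in Table 2 (with made-up arrays standing in for the study data), one such model can be fit and its p value compared against the Bonferroni-corrected alpha:

```python
# Sketch of one simple linear regression of the kind summarized in Table 2:
# postintervention CSE (POMP-scaled) predicting an outcome such as SMHQAQ
# quality ratings. The arrays below are made-up stand-ins for the study data.

from scipy import stats

cse_post    = [68.2, 71.5, 74.0, 79.3, 65.1, 82.6, 70.4, 77.8]   # predictor
smh_quality = [60.0, 66.7, 70.0, 78.3, 58.3, 85.0, 63.3, 75.0]   # outcome

result = stats.linregress(cse_post, smh_quality)

# In a single-predictor regression, the standardized beta equals Pearson's r.
r_squared = result.rvalue ** 2
bonferroni_alpha = 0.05 / 11          # corrected threshold cited in the note

print(f"beta = {result.rvalue:.3f}, R^2 = {r_squared:.3f}, p = {result.pvalue:.4f}")
print("significant after correction:", result.pvalue < bonferroni_alpha)
```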

 

 

Discussion

 

While there has been some previous examination of the association between training and CSE, results have been mixed (see Larson & Daniels, 1998), and no such evaluations have been conducted within the context of SMH services. The current study stemmed from a larger evaluation of a framework to enhance the quality of SMH, targeting quality service provision, EBP, and enhancement of family engagement and empowerment (see Weist et al., 2009).

 

The present study had two primary aims. The first goal was to evaluate differences in level of CSE from pre- to postintervention between two groups of SMH clinicians. We expected that those who received information, training and supervision on QAI and best practice in SMH would report higher levels of CSE postintervention than those in the W condition. The secondary aim was to evaluate whether clinician reports of postintervention CSE would serve as predictors of quality of SMH practice, as well as knowledge and use of EBP. Given the influence that clinician CSE has been found to have on practice-related variables in previous studies (see Larson & Daniels, 1998), we hypothesized that higher levels of CSE would predict higher quality of SMH practice and greater knowledge and usage of EBP.

 

Controlling for age, race, years of experience and years with the agency, findings did not confirm the primary hypothesis. No statistically significant differences in clinician reports of CSE from pre- to postintervention were observed between the QAI and W conditions. Regarding the secondary aim, however, clinician postintervention level of CSE significantly predicted quality of practice; knowledge of EBP specific to treating ADHD, DBD, anxiety and depression; total knowledge of EBP; and usage of EBP specific to treating depression. Findings are consistent with previous literature suggesting that CSE levels influence performance in a number of practice-related domains (Larson & Daniels, 1998).

 

Results did not support a significant predictive relation between CSE level and usage of EBP specific to treating ADHD, DBD and anxiety. The failure to find an association may stem from the decision to evaluate usage of EBP across conditions, as there was limited power to run the analyses by condition. Results from the original study suggested that individuals in the QAI condition were more likely to use established EBP in treatment (see Weist et al., 2009). Because provider characteristics, including CSE (Aarons, 2005), are known to be associated with adoption of EBP, examining these associations across conditions may have obscured effects and produced null findings.

 

While current results did support the importance of high CSE regarding practice-related domains, there was no significant difference in level of CSE between those who received information, training and supervision in QAI; use of EBP; and family engagement and empowerment compared to those in the W condition. Findings from the current study contrast with other research that has documented improvements in CSE following targeted interventions. Previous targeted interventions to increase CSE have resulted in positive outcomes when using micro-skills training and mental practice (Munson, Stadulis, & Munson, 1986; Munson, Zoerink, & Stadulis, 1986), role-play and visual imagery (Larson et al., 1999), a prepracticum training course (Johnson, Baker, Kopala, Kiselica, & Thompson, 1989) and practicum experiences (Larson et al., 1993).

 

As a curvilinear relation is reported to exist between CSE and level of training (Larson et al., 1996; Sutton & Fall, 1995), it may be that this sample of postlicensure clinicians had enough prior training and experience that the unique experiences gained through the QAI and W conditions had minimal impact on overall CSE. Many prior studies utilized students untrained in counseling and interpersonal skills (Munson, Zoerink, & Stadulis, 1986) and beginning practicum students and trainees (Easton, Martin, & Wilson, 2008; Johnson et al., 1989; Larson et al., 1992, 1993, 1999). Regarding the usefulness of a prepracticum course and practicum experiences for level of CSE, significant increases were observed only in beginning practicum students, with no significant changes seen in advanced students. Additionally, no previous studies have evaluated the success of CSE interventions with postlicensure clinicians.

 

It also is plausible that failure to detect an effect was due to the high preintervention levels of CSE observed across clinicians. At baseline, clinicians in the QAI condition reported CSE levels of roughly 71.9% of maximum potential, whereas those in the W condition reported CSE levels of 71.3% of maximum potential. Previous research has found high levels of CSE among practitioners with comparable amounts of previous experience, with those having 5–10 years of experience reporting mean CSE levels of 4.35 out of five points possible (Melchert et al., 1996). Thus, the average level of CSE may be accounted for by the amount of previous education and training reported by clinicians, and the observed increase of 1.5% at postintervention may be a reflection of the sample composition.

 

Limitations

Due to a small sample size, the power to detect changes in CSE was modest. Additionally, because of efforts to increase power by adding clinicians who joined the study in Year 2, the time between reports of pre- and postintervention levels of CSE varied within the sample; some participants completed only a year or a year and a half of the intervention instead of the full 2 years.

 

A further limitation was reliance on self-reported information from the participating clinicians regarding their level of CSE, quality of practice, and knowledge and usage of EBP. Thus, a presentation bias may have been present in that clinicians may have reported stronger confidence in their own abilities than they felt in reality, or may have inflated responses on their knowledge and usage of EBP.

 

An additional limitation is that CSE was not an explicit focus of the intervention: increasing CSE was not a stated goal, and training and supervision were not tailored to make increases in CSE more likely. The relation between supervisory feedback and CSE also may depend on the developmental level and pretraining CSE of the clinicians (Larson et al., 1999; Munson, Zoerink, & Stadulis, 1986), with untrained individuals reporting the largest increases. Thus, increased performance feedback may or may not have enhanced CSE within this sample.

 

Future Directions

Based on these findings, future work should evaluate ways in which CSE can be increased among clinicians. As the training procedures utilized in this study failed to change CSE, it is important to determine what facets of CSE, if any, are conducive to change. Although the current study evaluated broad CSE, Bandura (1977) theorized that overall self-efficacy is determined by the efficacy and outcome expectancies an individual has regarding a particular behavior. Efficacy expectancies are individuals’ beliefs regarding their capability to successfully perform the requisite behavior. Efficacy expectancies mediate between individuals and their behavior, such that if efficacy expectancies are high, individuals will engage in the behavior because they believe that they will be able to complete it successfully. Outcome expectancies, on the other hand, involve individuals’ beliefs that a certain behavior will lead to a specific outcome, and mediate the relation between behaviors and outcomes. Therefore, when outcome expectancies are low, individuals will not execute a behavior because they do not believe it will lead to the desired outcome.

 

As with the current study, the majority of existing studies investigating change in CSE have evaluated broad CSE without breaking the construct down into the two types of expectancies (i.e., efficacy expectancies and outcome expectancies). Larson and Daniels (1998) found that fewer than 15% of studies on CSE examined outcome expectancies, and of the studies that did, only 60% operationalized outcome expectancies appropriately. While clinicians may believe that they can effectively perform a counseling strategy, they may not implement that strategy if they do not believe it will produce client change. These concepts could be evaluated by asking, for example, about one’s confidence in being able to deliver relaxation training effectively (an efficacy expectancy), as well as one’s confidence that relaxation training produces client change (an outcome expectancy). Given the dearth of work in this area, future efforts should involve breaking down CSE and correctly operationalizing efficacy expectancies and outcome expectancies to examine what sorts of influences these expectancies have on overall CSE.

 

Additionally, future efforts to investigate the enhancement of CSE may evaluate the pliability of this construct depending on level of training. Is CSE more stable among experienced clinicians compared to counseling trainees? Should CSE enhancement be emphasized among new clinicians? Or are different methods needed to increase one’s CSE depending on previous experience? This goal may be accomplished by obtaining sizeable, representative samples with beginning, moderate and advanced levels of training, and examining the long-term stability of CSE.

 

Future work should incorporate strategies of mastery, modeling, social persuasion and affective arousal to enhance the CSE of SMH clinicians. Although role-play was utilized in the current study, future interventions could include visual imagery or mental practice of performing counseling skills, discussions of CSE, and more explicit positive supervisory feedback. Furthermore, mastery experiences (i.e., engaging in a counseling session that the counselor interprets as successful) in actual or role-play counseling settings have been found to increase CSE (Barnes, 2004); however, this result is contingent on the trainee’s perception of session success (Daniels & Larson, 2001). Future efforts to enhance CSE could strategically test how to structure practice counseling sessions and format feedback in ways that result in mastery experiences for clinicians. Future investigations also may incorporate modeling strategies into counselor training, possibly within a group setting. Structuring modeling practices in a group rather than an individual format may facilitate a fluid group session, moving from viewing a skill set to practicing with other group members and receiving feedback. This scenario could provide counselors with both vicarious and mastery experiences.

 

The use of verbal persuasion, the third source of efficacy, to enhance CSE also has been evaluated in counseling trainees. Verbal persuasion involves communication of progress in counseling skills, as well as overall strengths and weaknesses (Barnes, 2004). While strength-identifying feedback has been found to increase CSE, identifying skills that need improvement has resulted in a decrease in CSE; thus, feedback focused on deficits is not recommended as a tactic to develop CSE. Lastly, emotional arousal, otherwise conceptualized as anxiety, is theorized to contribute to level of CSE. In contrast to the aforementioned enhancement mechanisms, increases in counselor anxiety negatively predict counselor CSE (Hiebert, Uhlemann, Marshall, & Lee, 1998). Finally, in addition to clinician self-ratings, future research should investigate CSE’s impact on performance as measured by supervisors, as well as clients. With growing momentum for SMH across the nation, it is imperative that all factors influencing client outcomes and satisfaction with services be evaluated, including CSE.

 

 

 

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

 

 

References

 

Aarons, G. A. (2005). Measuring provider attitudes toward evidence-based practice: Consideration of organizational context and individual differences. Child and Adolescent Psychiatric Clinics of North America, 14, 255–271. doi:10.1016/j.chc.2004.04.008

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. doi:10.1037/0033-295X.84.2.191

Bandura, A. (1994). Self-efficacy. In V. S. Ramachandran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71–81). New York, NY: Academic Press.

Barnes, K. L. (2004). Applying self-efficacy theory to counselor training and supervision: A comparison of two approaches. Counselor Education and Supervision, 44, 56–69. doi:10.1002/j.1556-6978.2004.tb01860.x

Beidas, R. S., & Kendall, P. C. (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17, 1–30. doi:10.1111/j.1468-2850.2009.01187.x

Beidas, R. S., Mychailyszyn, M. P., Edmunds, J. M., Khanna, M. S., Downey, M. M., & Kendall, P. C. (2012). Training school mental health providers to deliver cognitive-behavioral therapy. School Mental Health, 4, 197–206. doi:10.1007/s12310-012-9047-0

Berger, T. K. (2013). School counselors’ perceptions, practices and preparedness related to issues in mental health (Doctoral dissertation). Retrieved from http://hdl.handle.net/1802/26892

Branch, L. E., & Lichtenberg, J. W. (1987, August). Self-efficacy and career choice. Paper presented at the convention of the American Psychological Association, New York, NY.

Burns, B. J., Costello, E. J., Angold, A., Tweed, D., Stangl, D., Farmer, E. M., & Erkanli, A. (1995). Children’s mental health service use across service sectors. Health Affairs, 14, 147–159. doi:10.1377/hlthaff.14.3.147

Chorpita, B. F., Becker, K. D., & Daleiden, E. L. (2007). Understanding the common elements of evidence-based practice: Misconceptions and clinical examples. Journal of the American Academy of Child and Adolescent Psychiatry, 46, 647–652. doi:10.1097/chi.0b013e318033ff71

Chorpita, B. F., & Daleiden, E. L. (2009). CAMHD biennial report: Effective psychosocial interventions for youth with behavioral and emotional needs. Honolulu, HI: Child and Adolescent Mental Health Division, Hawaii Department of Health.

Cohen, P., Cohen, J., Aiken, L. S., & West, S. G. (1999). The problem of units and the circumstances for POMP. Multivariate Behavioral Research, 34, 315–346. doi:10.1207/S15327906MBR3403_2

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum.

Conwill, W. L. (2003). Consultation and collaboration: An action research model for the full-service school. Consulting Psychology Journal: Practice and Research, 55, 239–248. doi:10.1037/1061-4087.55.4.239

Curry, J. F., & Reinecke, M. A. (2003). Modular therapy for adolescents with major depression. In M. A. Reinecke, F. M. Dattilio, & A. Freeman (Eds.), Cognitive therapy with children and adolescents (2nd ed., pp. 95–127). New York, NY: Guilford.

Daniels, J. A., & Larson, L. M. (2001). The impact of performance feedback on counseling self-efficacy and counselor anxiety. Counselor Education and Supervision, 41, 120–130. doi:10.1002/j.1556-6978.2001.tb01276.x

Dryfoos, J. G. (1993). Schools as places for health, mental health, and social services. Teachers College Record, 94, 540–567.

Easton, C., Martin, W. E., Jr., & Wilson, S. (2008). Emotional intelligence and implications for counseling self-efficacy: Phase II. Counselor Education and Supervision, 47, 218–232. doi:10.1002/j.1556-6978.2008.tb00053.x

Evans, S. W., Glass-Siegel, M., Frank, A., Van Treuren, R., Lever, N. A., & Weist, M. D. (2003). Overcoming the challenges of funding school mental health programs. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.), Handbook of school mental health: Advancing practice and research (pp. 73–86). New York, NY: Kluwer Academic/Plenum.

Evans, S. W., & Weist, M. D. (2004). Implementing empirically supported treatments in the schools: What are we asking? Clinical Child and Family Psychology Review, 7, 263–267. doi:10.1007/s10567-004-6090-0

Farmer, E. M., Burns, B. J., Phillips, S. D., Angold, A., & Costello, E. J. (2003). Pathways into and through mental health services for children and adolescents. Psychiatric Services, 54, 60–66. doi:10.1176/appi.ps.54.1.60

Gibson, S., & Dembo, M. H. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569–582. doi:10.1037/0022-0663.76.4.569

Hiebert, B., Uhlemann, M. R., Marshall, A., & Lee, D. Y. (1998). The relationship between self-talk, anxiety, and counselling skill. Canadian Journal of Counselling and Psychotherapy, 32, 163–171.

Hoagwood, K. E. (2005). Family-based services in children’s mental health: A research review and synthesis. Journal of Child Psychology and Psychiatry, 46, 690–713. doi:10.1111/j.1469-7610.2005.01451.x

Iannelli, R. J. (2000). A structural equation modeling examination of the relationship between counseling self-efficacy, counseling outcome expectations, and counselor performance (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database (9988728).

Johnson, E., Baker, S. B., Kopala, M., Kiselica, M. S., & Thompson, E. C., III (1989). Counseling self-efficacy and counseling competence in prepracticum training. Counselor Education and Supervision, 28, 205–218. doi:10.1002/j.1556-6978.1989.tb01109.x

Judge, T. A., Jackson, C. L., Shaw, J. C., Scott, B. A., & Rich, B. L. (2007). Self-efficacy and work-related performance: The integral role of individual differences. Journal of Applied Psychology, 92, 107–127. doi:10.1037/0021-9010.92.1.107

Kataoka, S. H., Zhang, L., & Wells, K. B. (2002). Unmet need for mental health care among U.S. children: Variation by ethnicity and insurance status. American Journal of Psychiatry, 159, 1548–1555. doi:10.1176/appi.ajp.159.9.1548

Kozina, K., Grabovari, N., De Stefano, J., & Drapeau, M. (2010). Measuring changes in counselor self-efficacy: Further validation and implications for training and supervision. The Clinical Supervisor, 29, 117–127. doi:10.1080/07325223.2010.517483

Kronick, R. F. (Ed.). (2000). Human services and the full service school: The need for collaboration. Springfield, IL: Thomas.

Langley, A. K., Nadeem, E., Kataoka, S. H., Stein, B. D., & Jaycox, L. H. (2010). Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health, 2, 105–113. doi:10.1007/s12310-010-9038-1

Larson, L. M., Cardwell, T. R., & Majors, M. S. (1996, August). Counselor burnout investigated in the context of social cognitive theory. Paper presented at the meeting of the American Psychological Association, Toronto, Canada.

Larson, L. M., Clark, M. P., Wesley, L. H., Koraleski, S. F., Daniels, J. A., & Smith, P. L. (1999). Video versus role plays to increase counseling self-efficacy in prepractica trainees. Counselor Education and Supervision, 38, 237–248. doi:10.1002/j.1556-6978.1999.tb00574.x

Larson, L. M., & Daniels, J. A. (1998). Review of the counseling self-efficacy literature. The Counseling Psychologist, 26, 179–218. doi:10.1177/0011000098262001

Larson, L. M., Daniels, J. A., Koraleski, S. F., Peterson, M. M., Henderson, L. A., Kwan, K. L., & Wennstedt, L. W. (1993, June). Describing changes in counseling self-efficacy during practicum. Poster presented at the meeting of the American Association of Applied and Preventive Psychology, Chicago, IL.

Larson, L. M., Suzuki, L. A., Gillespie, K. N., Potenza, M. T., Bechtel, M. A., & Toulouse, A. L. (1992). Development and validation of the counseling self-estimate inventory. Journal of Counseling Psychology, 39, 105–120. doi:10.1037/0022-0167.39.1.105

Leaf, P. J., Schultz, D., Kiser, L. J., & Pruitt, D. B. (2003). School mental health in systems of care. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.), Handbook of school mental health programs: Advancing practice and research (pp. 239–256). New York, NY: Kluwer Academic/Plenum.

Lewis, M. F., Truscott, S. D., & Volker, M. A. (2008). Demographics and professional practices of school psychologists: A comparison of NASP members and non-NASP school psychologists by telephone survey. Psychology in the Schools, 45, 467–482. doi:10.1002/pits.20317

Melchert, T. P., Hays, V. L., Wiljanen, L. M., & Kolocek, A. K. (1996). Testing models of counselor development with a measure of counseling self-efficacy. Journal of Counseling & Development, 74, 640–644. doi:10.1002/j.1556-6676.1996.tb02304.x

Mellin, E. A. (2009). Responding to the crisis in children’s mental health: Potential roles for the counseling profession. Journal of Counseling & Development, 87, 501–506. doi:10.1002/j.1556-6678.2009.tb00136.x

Munson, W. W., Stadulis, R. E., & Munson, D. G. (1986). Enhancing competence and self-efficacy of potential therapeutic recreators in decision-making counseling. Therapeutic Recreation Journal, 20(4), 85–93.

Munson, W. W., Zoerink, D. A., & Stadulis, R. E. (1986). Training potential therapeutic recreators for self-efficacy and competence in interpersonal skills. Therapeutic Recreation Journal, 20, 53–62.

Nabors, L. A., Reynolds, M. W., & Weist, M. D. (2000). Qualitative evaluation of a high school mental health program. Journal of Youth and Adolescence, 29, 1–13.

Powers, J. D., Edwards, J. D., Blackman, K. F., & Wegmann, K.M. (2013). Key elements of a successful multi-system collaboration for school-based mental health: In-depth interviews with district and agency administrators. The Urban Review, 45, 651–670. doi:10.1007/s11256-013-0239-4

President’s New Freedom Commission on Mental Health. (2003). Achieving the Promise: Transforming Mental Health Care in America. Final Report for the President’s New Freedom Commission on Mental Health (SMA Publication No. 03-3832). Rockville, MD: President’s New Freedom Commission on Mental Health.

Ramo, D. E., Prochaska, J. J., & Myers, M. G. (2010). Intentions to quit smoking among youth in substance abuse treatment. Drug and Alcohol Dependence, 106, 48–51. doi:10.1016/j.drugalcdep.2009.07.004.

Rones, M., & Hoagwood, K. (2000). School-based mental health services: A research review. Clinical Child and Family Psychology Review, 3, 223–241. doi:10.1023/A:1026425104386

Sharpe, P. A., Granner, M. L., Hutto, B. E., Wilcox, S., Peck, L., & Addy, C. L. (2008). Correlates of physical activity among African American and white women. American Journal of Health Behavior, 32, 701–713. doi:10.5555/ajhb.2008.32.6.701.

Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2003). Training in evidence-based interventions (EBIs): What are school psychology programs teaching? Journal of School Psychology, 41, 467–483. doi:10.1016/j.jsp.2003.07.002

Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124, 240–261. doi:10.1037/0033-2909.124.2.240

Stephan, S. H., Weist, M., Kataoka, S., Adelsheim, S., & Mills, C. (2007). Transformation of children’s mental health services: The role of school mental health. Psychiatric Services, 58, 1330–1338. doi:10.1176/appi.ps.58.10.1330

Stephan, S., Westin, A., Lever, N., Medoff, D., Youngstrom, E., & Weist, M. (2012). Do school-based clinicians’ knowledge and use of common elements correlate with better treatment quality? School Mental Health, 4, 170–180. doi:10.1007/s12310-012-9079-8

Suldo, S. M., Friedrich, A., & Michalowski, J. (2010). Personal and systems-level factors that limit and facilitate school psychologists’ involvement in school-based mental health services. Psychology in the Schools, 47, 354–373. doi:10.1002/pits.20475

Sutton, J. M., Jr., & Fall, M. (1995). The relationship of school climate factors to counselor self-efficacy. Journal of Counseling & Development, 73, 331–336. doi:10.1002/j.1tb01759.x

Tang, M., Addison, K. D., LaSure-Bryant, D., Norman, R., O’Connell, W., & Stewart-Sicking, J. A. (2004). Factors that influence self-efficacy of counseling students: An exploratory study. Counselor Education and Supervision, 44, 70–80. doi:10.1002/j.1556-6978.2004.tb01861.x

Urbani, S., Smith, M. R., Maddux, C. D., Smaby, M. H., Torres-Rivera, E., & Crews, J. (2002). Skills-based training and counseling self-efficacy. Counselor Education and Supervision, 42, 92–106. doi:10.1002/j.1556-6978.2002.tb01802.x

Walrath, C. M., Bruns, E. J., Anderson, K. L., Glass-Siegal, M., & Weist, M. D. (2004). Understanding expanded school mental health services in Baltimore city. Behavior Modification, 28, 472–490. doi:10.1177/0145445503259501

Watson, J. C. (2012). Online learning and the development of counseling self-efficacy beliefs. The Professional Counselor, 2, 143–151.

Weist, M. D., Ambrose, M. G., & Lewis, C. P. (2006). Expanded school mental health: A collaborative community-school example. Children & Schools, 28, 45–50. doi:10.1093/cs/28.1.45

Weist, M. D., Evans, S. W., & Lever, N. A. (2003). Handbook of school mental health: Advancing practice and research. New York, NY: Kluwer Academic/Plenum.

Weist, M. D., Lever, N. A., Stephan, S. H., Anthony, L. G., Moore, E. A., & Harrison, B. R. (2006, February). School mental health quality assessment and improvement: Preliminary findings from an experimental study. Paper presented at the meeting of A System of Care for Children’s Mental Health: Expanding the Research Base, Tampa, FL.

Weist, M. D., Sander, M. A., Walrath, C., Link, B., Nabors, L., Adelsheim, S., . . . Carrillo, K. (2005). Developing principles for best practice in expanded school mental health. Journal of Youth and Adolescence, 34, 7–13. doi:10.1007/s10964-005-1331-1

Weist, M., Lever, N., Stephan, S., Youngstrom, E., Moore, E., Harrison, B., . . . Stiegler, K. (2009). Formative evaluation of a framework for high quality, evidence-based services in school mental health. School Mental Health, 1, 196–211. doi:10.1007/s12310-009-9018-5

Weist, M. D., Youngstrom, E. A., Stephan, S., Lever, N., Fowler, J., Taylor, L., . . . Hoagwood, K. (2014). Challenges and ideas from a research program on high-quality, evidence-based practice in school mental health. Journal of Clinical Child & Adolescent Psychology, 43, 244–255. doi:10.1080/15374416.2013.833097

Zeldin, A. L., Britner, S. L., & Pajares, F. (2008). A comparative study of the self-efficacy beliefs of successful men and women in mathematics, science, and technology careers. Journal of Research in Science Teaching, 45, 1036–1058. doi:10.1002/tea.20195

 

Bryn E. Schiele is a doctoral student at the University of South Carolina. Mark D. Weist is a professor at the University of South Carolina. Eric A. Youngstrom is a professor at the University of North Carolina at Chapel Hill. Sharon H. Stephan and Nancy A. Lever are associate professors at the University of Maryland. Correspondence can be addressed to Bryn E. Schiele, the Department of Psychology, Barnwell College, Columbia, SC 29208, schiele@email.sc.edu.