Counseling Self-Efficacy, Quality of Services and Knowledge of Evidence-Based Practices in School Mental Health

Bryn E. Schiele, Mark D. Weist, Eric A. Youngstrom, Sharon H. Stephan, Nancy A. Lever

Counseling self-efficacy (CSE), defined as one’s beliefs about his or her ability to effectively counsel a client, is an important precursor of effective clinical practice. While research has explored the association of CSE with variables such as counselor training, aptitude and level of experience, little attention has been paid to CSE among school mental health (SMH) practitioners. This study examined the influence of quality training (involving quality assessment and improvement, modular evidence-based practices, and family engagement/empowerment) versus peer support and supervision on CSE in SMH practitioners, and the relationship between CSE and practice-related variables. ANCOVA indicated similar mean CSE changes for counselors receiving the quality training versus peer support. Regression analyses indicated that regardless of condition, postintervention CSE scores significantly predicted quality of practice, knowledge of evidence-based practices (EBP) and use of EBP specific to treating depression. Results emphasize the importance of CSE in effective practice and the need to consider mechanisms to enhance CSE among SMH clinicians.

 

Keywords: self-efficacy, school mental health, evidence-based practices, counselor training, depression

 

 

There are major gaps between the mental health needs of children and adolescents and the availability of effective services to meet such needs (Burns et al., 1995; Kataoka, Zhang, & Wells, 2002). This recognition is fueling efforts to improve mental health services for youth in schools (Mellin, 2009; Stephan, Weist, Kataoka, Adelsheim, & Mills, 2007). At least 20% of all youth have significant mental health needs, with roughly 5% experiencing substantial functional impairment (Leaf, Schultz, Kiser, & Pruitt, 2003). Further, less than one third of children with such mental health needs receive any services at all.

 

The President’s New Freedom Commission on Mental Health (2003) documented the position of schools as a point of contact and universal natural setting for youth and families, recognizing schools as a key factor in the transformation of child and adolescent mental health services (Stephan et al., 2007). In the past 2 decades, there has been a significant push for full-service schools that expand beyond a sole focus on education, and employ community mental health practitioners to respond to the emotional and behavioral needs of students (Conwill, 2003; Dryfoos, 1993; Kronick, 2000). The education sector is the most common provider of mental health services for children and adolescents (Farmer, Burns, Phillips, Angold, & Costello, 2003), with 70%–80% of youth who receive any mental health services obtaining them at school (Burns et al., 1995; Rones & Hoagwood, 2000). Therefore, attention must be paid to the quantity, quality and effectiveness of school mental health (SMH) services.

 

School Mental Health

 

In recent years, SMH programs, supported by both school staff (e.g., school psychologists, social workers, counselors) and school-based community mental health clinicians, have emerged as a promising approach to the provision of mental health services for students and families (Weist, Evans, & Lever, 2003). The growth of these programs has facilitated investigation of what constitutes high-quality SMH service provision (Nabors, Reynolds, & Weist, 2000; Weist et al., 2005). This work has been supported and furthered by the Center for School Mental Health, a federally funded technical assistance and training program to advance SMH programs within the United States. In collaboration with other SMH centers (e.g., UCLA Center for Mental Health in Schools) and interdisciplinary networks focused on school health, consensus was reached to develop a guiding framework defining best practices in SMH (Weist et al., 2005). These principles call for appropriate service provision for children and families, implementation of interventions to meet school and student needs, and coordination of mental health programs in the school with related community resources, among other things. For further explication of the framework and its development, see Weist et al. (2005).

 

Simultaneously, research developments through the Center for School Mental Health facilitated implementation of modular evidence-based practices (EBP; see Chorpita, Becker & Daleiden, 2007; Chorpita & Daleiden, 2009). A modular approach for intervention involves training clinicians in core, effective strategies for disorders frequently encountered in children (e.g., attention-deficit/hyperactivity disorder [ADHD], anxiety, depression, disruptive behavior disorders [DBD]). This approach enables individualized, flexible implementation of evidence-based strategies without the constraints of a manualized approach (Curry & Reinecke, 2003). The third guiding component to enhance quality in SMH practices is development of strategies to effectively engage and empower families (see Hoagwood, 2005).

 

Despite the development of such a framework, SMH clinicians often struggle to implement high-quality, evidence-based services (Evans et al., 2003; Evans & Weist, 2004). These clinicians are constrained by a lack of sufficient time, training in EBP, appropriate supervision, and internal and external resources (Shernoff, Kratochwill, & Stoiber, 2003). For instance, a survey by Walrath et al. (2004) of Baltimore SMH clinicians suggested that the ratio of clinicians to students was 1:250, and that in order to meet the mental health needs of students, clinicians would have needed an additional 79 clinical hours per week to remediate student difficulties. Additionally, the school environment is often characterized as chaotic, hectic and crisis-driven (Langley, Nadeem, Kataoka, Stein, & Jaycox, 2010), with SMH clinicians citing difficulties implementing EBP given the schedules of students. As a result of the challenges limiting use of EBP in daily SMH practice, researchers are now evaluating the influences on successful delivery of EBP in schools, including the personal qualities of SMH professionals (e.g., attitudes, beliefs, skills, training; Berger, 2013), as well as environmental factors (e.g., school administrative support, access to community resources, sufficient space for practice; Powers, Edwards, Blackman, & Wegmann, 2013) that may predict high-quality services (see Weist et al., 2014).

 

Previous work examining factors related to the provision of evidence-based SMH services by SMH clinicians suggested that the highest-rated facilitators of effective SMH practice were personal characteristics (e.g., desire to deliver mental health services), attitudes and openness toward use of EBP, and adequate training (Beidas et al., 2012; Langley et al., 2010). Alternatively, SMH clinicians reported a number of administrative, school site and personal barriers as significant obstacles to appropriate service delivery; such barriers include lack of sufficient training, overwhelming caseload, job burnout and personal mental health difficulties (Langley et al., 2010; Suldo, Friedrich, & Michalowski, 2010).

 

While researchers have evaluated the influence of SMH provider personal characteristics in relation to the delivery of high-quality SMH services, little attention has been paid to the importance of counseling self-efficacy (CSE). CSE is widely accepted as an important precursor to competent clinical practice (Kozina, Grabovari, De Stefano, & Drapeau, 2010). Further, building CSE is considered an important strategy in active learning when providing training in evidence-based therapies (Beidas & Kendall, 2010), and CSE in EBP is believed to be essential to implementation (Aarons, 2005). However, researchers have yet to systematically include measures of CSE in studies of EBP utilization by SMH providers.

 

Self-Efficacy

 

     Social-cognitive theory and its central construct, self-efficacy, have received much attention in the psychological literature, with more than 10,000 studies including these as central variables in the past 25 years (Judge, Jackson, Shaw, Scott, & Rich, 2007). Self-efficacy is defined as an individual’s beliefs about his or her ability to achieve desired levels of performance (Bandura, 1994), and it plays a key role in the initiation and maintenance of human behavior (Iannelli, 2000). Given the influence of self-efficacy expectancies on performance, researchers have evaluated how self-efficacy impacts a variety of action-related domains, including career selection (e.g., Branch & Lichtenberg, 1987; Zeldin, Britner, & Pajares, 2008), health-behavior change (e.g., Ramo, Prochaska, & Myers, 2010; Sharpe et al., 2008) and work-related performance (e.g., Judge et al., 2007; Stajkovic & Luthans, 1998). Specific to the mental health field, previous investigations have focused on how self-efficacy is related to counseling performance.

 

Counseling Self-Efficacy

The construct of CSE is defined as an individual’s beliefs about his or her ability to effectively counsel a client in the near future (Larson & Daniels, 1998). Studies of the structure and influence of CSE among a variety of mental health professionals, including counseling trainees, master’s-level counselors, psychologists, school counselors and students from related professions (e.g., clergy, medicine) have yielded mixed findings. Social desirability, counselor personality, aptitude, achievement (Larson et al., 1992) and counselor age (Watson, 2012) have shown small to moderate associations with CSE. CSE also is related to external factors, including the perceived and objective work environment, supervisor characteristics, and level or quality of supervision (Larson & Daniels, 1998).

 

However, the relationship of CSE with level of training is unclear. For the most part, CSE is stronger for individuals with at least some counseling experience than for those with none (Melchert, Hays, Wiljanen, & Kolocek, 1996; Tang et al., 2004). While the amount of training and education obtained have been reported as statistically significant predictors of degree of CSE (Larson & Daniels, 1998; Melchert et al., 1996), more recent work has not supported the existence of such predictive relationships (Tang et al., 2004). It also has been suggested that once a counselor has obtained advanced graduate training beyond the master’s level, the influence of experience on CSE becomes rather minimal (Larson, Cardwell, & Majors, 1996; Melchert et al., 1996; Sutton & Fall, 1995).

 

Some work has been done to evaluate interventions aimed at enhancing CSE by utilizing the four primary sources of self-efficacy as defined by Bandura (1977; i.e., mastery, modeling, social persuasion, affective arousal). In two studies involving undergraduate recreation students, Munson, Zoerink, and Stadulis (1986) found that modeling with role-play and visual imagery enhanced CSE more than a wait-list control condition. Larson et al. (1999) attempted to extend these findings with a sample of practicum counseling trainees, and found that self-evaluation of success in the session moderated the level of postintervention CSE, with perception of success significantly impacting the potency of the role-play scenarios. The same effect was not found for individuals in the videotape condition.

 

In addition to impacting clinician performance, CSE has been reported to influence client outcomes indirectly (Urbani et al., 2002); for example, CSE has been associated with more positive outcomes for clients, more positive self-evaluations and fewer anxieties regarding counseling performance (Larson & Daniels, 1998). Increasing CSE is thus important for client outcomes in part because it decreases clinicians' anxiety, which is reported to impair clinical judgment and performance (Urbani et al., 2002). While there is some evidence that CSE is influential for client outcomes, minimal work has been done to evaluate this relationship.

 

CSE has been evaluated in a variety of samples; however, little work has been done to evaluate CSE of SMH practitioners and the factors that play into its development. Additionally, although some investigation has been conducted on factors that impact SMH practitioners’ abilities and performance, CSE is an element that seldom has been studied.

 

The current study aimed to examine the influence of a quality assessment and improvement (QAI) intervention on CSE in SMH practitioners, as well as the importance of CSE in regard to practice-related domains. The primary question of interest was, Does an intervention focused on QAI (target) result in higher levels of CSE than a comparison condition involving a focus on professional wellness (W) and supervision (control)? We investigated the influence of differential quality training and supervision on one's level of CSE by comparing postintervention CSE scores between conditions after evaluating preintervention equivalency of CSE levels. Thus, we hypothesized that long-term exposure to the QAI intervention, family engagement/empowerment and modular EBP would result in significantly higher reports of CSE from those exposed to the QAI intervention than from those exposed to the comparison intervention. Based on previous research, we anticipated that specific counselor characteristics (e.g., age, experience) might predict CSE, such that individuals who are older and have more experience counseling children and adolescents would have higher CSE (Melchert et al., 1996; Tang et al., 2004; Watson, 2012). Thus, when evaluating training effects, these variables were included as covariates in the analysis of the relation between CSE and training.

 

Secondarily, this study aimed to evaluate the relation of professional experiences to CSE following exposure to the intervention. For this aim, the research question was, Does postintervention level of CSE predict quality of self-reported SMH practice, as well as knowledge and use of EBP? We hypothesized that level of CSE would predict quality of SMH practice, as well as attitude toward, knowledge and use of EBP regardless of intervention condition.

 

Method

 

This article stems from a larger previous evaluation of a framework to enhance the quality of SMH (Weist et al., 2009), funded by the National Institute of Mental Health (#1R01MH71015; 2003-2007; M. Weist, PI). As part of a 12-year research program on quality and EBP in SMH, researchers conducted a two-year, multisite (community agencies in Delaware, Maryland, and Texas) randomized controlled trial of a framework for high-quality and effective practice in SMH (EBP, family engagement/empowerment and systematic QAI), as compared to an enhanced treatment-as-usual condition focused on personal and school staff wellness. Only the methods pertaining to the aims of the current study are included here (see Stephan et al., 2012; Weist et al., 2009 for more comprehensive descriptions).

 

Participants

A sample of 72 SMH clinicians (i.e., clinicians employed by community mental health centers to provide clinical services within the school system) from the three SMH sites participated for the duration of the study (2004–2006), and provided complete data for all study measures via self-report. All clinicians were employed by community-based agencies with an established history of providing SMH prevention and intervention services to elementary, middle and high school students in both general and special education programs.

 

A total of 91 clinicians participated over the course of the study, with 64 in Year 1 and 66 in Year 2; 27 clinicians were involved only in Year 2. Of the Year 1 sample (35 QAI and 29 W), 24 participants did not continue into Year 2 (13 QAI and 11 W), and dropout rates did not differ between conditions (37% QAI vs. 38% W). Analyses in this particular study focused on individuals who had completed at least one year of the study and had submitted pre- and postintervention measures. The 72 participants were predominantly female (61 women, 11 men) and were 36 years old on average (SD = 11.03). In terms of race and ethnicity, participants identified as Caucasian (55%), African American (26%), Hispanic (18%) and Other (1%). Participants reported the following educational levels: graduate degree (83%), some graduate coursework (13%), bachelor's degree (3%), and some college (1%). In terms of experience, clinicians had roughly 6 years of prior experience and had worked for their current agency for 3 years on average. The obtained sample is representative of SMH practitioners throughout the United States (Lewis, Truscott, & Volker, 2008).

 

Measures

 

     Counseling self-efficacy. Participants' CSE was measured using the Counselor Self-Efficacy Scale (Sutton & Fall, 1995). The measure was designed for use with school counselors, and was developed with a sample of public school counselors in Maine. Sutton and Fall modified a teacher efficacy scale (Gibson & Dembo, 1984), resulting in a 33-item measure reflecting CSE and outcome expectancies. Results of a principal-components factor analysis demonstrated initial construct validity, indicating a three-factor structure, with the internal consistency of the three factors reported as adequate (.67–.75). However, the structure of the measure has received criticism, with some researchers arguing that the third factor does not measure outcome expectancies as defined by social-cognitive theory (Larson & Daniels, 1998). Thus, we used the entire 33-item scale as a measure of overall CSE. Respondents rated each item on a 6-point Likert scale (1 = strongly disagree, 6 = strongly agree). We made slight language modifications to make the scale more applicable to the work of this sample (Weist et al., 2009); for instance, guidance program became counseling program. CSE was measured in both conditions at the beginning and end of Years 1 and 2 of the intervention program.

 

     Quality of school mental health services. The School Mental Health Quality Assessment Questionnaire (SMHQAQ) is a 40-item research-based measure developed by the investigators of the larger study to assess 10 principles for best practice in SMH (Weist et al., 2005; Weist et al., 2006), including the following: “Programs are implemented to address needs and strengthen assets for students, families, schools, and communities” and “Students, families, teachers and other important groups are actively involved in the program’s development, oversight, evaluation, and continuous improvement.”

 

At the end of Year 2, clinicians rated the degree to which each principle was present in their own practice on a 6-point Likert scale, ranging from not at all in place to fully in place. Given that results from a principal components analysis indicated that all 10 principles loaded heavily on a single strong component, analyses focused primarily on total scores of the SMHQAQ. Aside from factor analytic results, validity estimates are unavailable. Internal consistency as measured by coefficient alpha was very strong (.95).

 

     Knowledge and use of evidence-based practices. The Practice Elements Checklist (PEC) is based on the Hawaii Department of Health's comprehensive summary of top modular EBP elements (Chorpita & Daleiden, 2007). Principal investigators of the larger study created the PEC in consultation with Bruce Chorpita of the University of California, Los Angeles, an expert in mental health technologies for children and adolescents. The PEC asks clinicians to rate the eight skills found most commonly across effective treatments for four disorder areas (ADHD, DBD, depression and anxiety). Respondents used a 6-point Likert scale to rate their current knowledge of each practice element (1 = none, 6 = significant), the frequency with which they use the element in their own practice, and the frequency with which they treat children whose primary presenting issue falls within one of the four disorder areas (1 = never, 6 = frequently).

 

In addition to total knowledge and total frequency subscales (scores ranging from 4–24), research staff calculated four knowledge and four frequency subscale scores (one for each disorder area) by averaging responses across practice elements for each disorder area (scores ranging from 1–6). A total PEC score was also computed by summing all subscale scores, resulting in a total score ranging from 16–92. Although this approach counted each item twice, it allowed examination of how total knowledge and skill usage, as well as skills in specific disorder areas, are related to CSE. While internal consistencies were excellent for each of the subscales, ranging from .84–.92, the validity of the measure has yet to be evaluated. Clinicians completed the PEC at the end of Year 2.
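
The subscale arithmetic described above is simple averaging and summing; a minimal sketch follows, in which all ratings are invented for illustration and the variable names are ours, not the study's:

```python
from statistics import mean

# Hypothetical knowledge ratings (1-6) for the eight practice elements
# in each of the four disorder areas; every value here is invented.
ratings = {
    "ADHD":       [4, 5, 3, 4, 4, 5, 4, 3],
    "depression": [5, 5, 4, 4, 5, 4, 5, 4],
    "DBD":        [3, 4, 4, 3, 4, 4, 3, 4],
    "anxiety":    [4, 4, 5, 4, 3, 4, 4, 5],
}

# One subscale score per disorder area: the mean across elements (range 1-6).
subscales = {area: mean(vals) for area, vals in ratings.items()}

# Total knowledge: the sum of the four disorder-area subscales (range 4-24).
total_knowledge = sum(subscales.values())
```

Under this reading, a parallel set of frequency subscales would be computed the same way; the exact composition behind the reported 16–92 total range is not fully specified in the text, so this sketch stops at the knowledge side.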

 

Study Design

SMH clinicians were recruited from their community agencies approximately 1 month prior to the initial staff training. After providing informed consent, clinicians completed a set of questionnaires, which included demographic information, level of current training and CSE, and were randomly assigned to the QAI intervention or the W intervention. Four training events were provided for participants in both conditions (at the beginning and end of both Years 1 and 2). During the four training events, individuals in the QAI condition received training in the three elements reviewed previously. For individuals involved in the W (i.e., comparison) condition, training events focused on general staff wellness, including stress management, coping strategies, relaxation techniques, exercise, nutrition and burnout prevention.

 

At each site, senior clinicians (i.e., licensed mental health professionals with a minimum of a master's degree and 3 years' experience in SMH) were chosen to serve as project supervisors for the condition to which they were assigned. These clinicians were not considered participants, and maintained their positions for the duration of the study. Over the course of the project, each research supervisor dedicated one day per week to the study, and was assigned a group of roughly 10 clinicians to supervise. Within the QAI condition, supervisors held weekly meetings with small groups of five clinicians to review QAI processes and activities in their schools, as well as strategies for using the evidence base; in contrast, there was no study-related school support for staff in the W condition.

 

Results

 

Preliminary Analyses and Scaling

     Analyses were conducted using SPSS, version 20; tests of statistical significance were conducted with a Bonferroni correction (Cohen, Cohen, West, & Aiken, 2003), resulting in the use of an alpha of .0045, two-tailed. To facilitate comparisons between variables, staff utilized a scaling method known as Percentage of Maximum Possible (POMP) scores, developed by Cohen, Cohen, Aiken, & West (1999). Using this method, raw scores are transformed so that they range from zero to 100%. This type of scoring makes no assumptions about the shape of the distributions, in contrast to z scores, for which a normal distribution is assumed. POMP scores are an easily understood and interpreted metric and cumulatively lead to a basis for agreement on the size of material effects in the domain of interest (i.e., interventions to enhance quality of services and use of EBP; Cohen et al., 1999).
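
The POMP transformation is a linear rescaling of each raw score onto its possible range. A sketch follows; the 33-item, 6-point scale bounds and the example raw score are illustrative assumptions, not values taken from the study's data:

```python
def pomp(raw, scale_min, scale_max):
    """Percentage of Maximum Possible (Cohen, Cohen, Aiken, & West, 1999):
    rescale a raw score linearly onto a 0-100 metric."""
    return (raw - scale_min) / (scale_max - scale_min) * 100

# Illustrative bounds: a 33-item scale rated 1-6 per item, so summed
# raw scores would range from 33 to 198 (an assumption, for demonstration).
scale_min, scale_max = 33 * 1, 33 * 6

print(pomp(33, scale_min, scale_max))      # 0.0 (scale floor)
print(pomp(198, scale_min, scale_max))     # 100.0 (scale ceiling)
print(pomp(151.65, scale_min, scale_max))  # ~71.9
```

Because the transformation is linear, group comparisons and regression results are unchanged; only the metric becomes directly interpretable as a percentage of the scale's range.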

 

Primary Aim

     Initial analyses confirmed pretreatment equivalence for the two conditions, t (72) = –.383, p = .703. For individuals in the QAI condition, preintervention CSE scores averaged 71.9% of maximum possible (SD = .09), while those in the comparison condition averaged 71.3% of maximum possible (SD = .08). These scores were comparable to levels of CSE observed in counseling psychologists with similar amounts of prior experience (Melchert et al., 1996).

 

Correlation analyses suggested that pretreatment CSE was significantly associated with age (r = .312, p = .008), race (r = –.245, p = .029), years of counseling experience (r = .313, p = .007) and years with the agency (r = .232, p = .048). Thus, these variables were included as covariates in an analysis of covariance (ANCOVA) evaluating changes in CSE between the QAI and comparison conditions. Results indicated a nonsignificant difference in change in CSE from pre- to postintervention between conditions, F (72) = .013, p = .910. For individuals in the QAI condition, postintervention CSE scores averaged 73.1% of maximum possible (SD = .07), and for individuals in the comparison condition, CSE scores averaged 72.8% of maximum possible (SD = .08). Additionally, across conditions, results indicated a nonsignificant change in level of CSE from pre- to postintervention, F (72) = .001, p = .971. Across conditions, clinicians reported roughly similar levels of CSE at pre- and postintervention time points (72% vs. 73% of maximum possible); see Table 1.

 

 

Table 1

 

Analysis of Covariance (ANCOVA) Summary of Change in CSE

 

Source                        df        F        p    Partial η2
CSE                            1     .001     .971          .000
CSE*Condition                  1     .013     .910          .000
CSE*Age                        1     .281     .598          .004
CSE*Race                       1    1.190     .279          .018
CSE*Years of Experience        1     .032     .859          .000
CSE*Years with Agency          1     .003     .955          .000
Error                         66

Note. N = 72.

 

 

Secondary Aim

     To investigate the influence of level of CSE on quality and practice elements in counseling, a series of individual regressions were conducted with level of postintervention CSE as the predictor variable, and indicators of attitudes toward EBP, knowledge and use of EBP, and use of quality mental health services as the outcome variables in separate analyses.

 

Table 2 shows that level of postintervention CSE significantly predicted the following postintervention variables: SMHQAQ quality of services (R2 = .328, F [60] = 29.34, p < .001); knowledge of EBP for ADHD (R2 = .205, F [46] = 11.54, p = .001), depression (R2 = .288, F [46] = 18.17, p < .001), DBD (R2 = .236, F [46] = 13.92, p = .001) and anxiety (R2 = .201, F [46] = 10.81, p = .002); usage of EBP specific to treating depression (R2 = .301, F [46] = 19.34, p < .001); and total knowledge of EBP (R2 = .297, F [44] = 18.20, p < .001). Results further indicated that postintervention CSE was not a significant predictor of usage of EBP for ADHD (R2 = .010, F [45] = .457, p = .502), DBD (R2 = .024, F [45] = 1.100, p = .300) or anxiety (R2 = .075, F [43] = 3.487, p = .069), or of total usage of EBP (R2 = .090, F [43] = 4.244, p = .045).
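
Two of the reported figures can be reproduced arithmetically. The sketch below rests on our own inferences, not the authors' code: the count of 11 tests comes from the rows of Table 2, and n = 62 is implied by the error df of 60 for a single-predictor model:

```python
# Bonferroni-corrected per-test alpha for the 11 regressions in Table 2
# (the count of 11 is inferred from the table's rows, not stated).
alpha_per_test = 0.05 / 11  # ~.0045, matching the alpha reported earlier

def adjusted_r2(r2, n, k=1):
    """Adjusted R^2 for a regression with n cases and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# SMH Quality row: R^2 = .328 with error df = 60 implies n = 62 for one
# predictor, which reproduces the reported adjusted R^2 of .317.
print(round(alpha_per_test, 4))          # 0.0045
print(round(adjusted_r2(0.328, 62), 3))  # 0.317
```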

 

 

Table 2

 

Results of Linear Regressions Between Level of Postintervention CSE and Outcome Variables

 

Variables                      Beta       R2   Adjusted R2         F         p
SMH Quality                   0.573    0.328         0.317    29.337     0.000
EBP ADHD – Knowledge          0.452    0.205         0.187    11.583     0.001
EBP ADHD – Usage              0.100    0.010        –0.012     0.457     0.502
EBP Depression – Knowledge    0.536    0.288         0.272    18.168     0.000
EBP Depression – Usage        0.548    0.301         0.285    19.337     0.000
EBP DBD – Knowledge           0.486    0.236         0.219    13.922     0.001
EBP DBD – Usage               0.154    0.024         0.002     1.100     0.300
EBP Anxiety – Knowledge       0.448    0.201         0.182    10.811     0.002
EBP Anxiety – Usage           0.274    0.075         0.053     3.487     0.069
EBP Total Knowledge           0.545    0.297         0.281    18.197     0.000
EBP Total Usage               0.300    0.090         0.069     4.244     0.045

 

Note. To control for experiment-wise error, a Bonferroni correction was used and significance was evaluated at the 0.0045 level.

 

 

Discussion

 

While there has been some previous examination of the association between training and CSE, results have been mixed (see Larson & Daniels, 1998), and no such evaluations have been conducted within the context of SMH services. The current study stemmed from a larger evaluation of a framework to enhance the quality of SMH, targeting quality service provision, EBP, and enhancement of family engagement and empowerment (see Weist et al., 2009).

 

The present study had two primary aims. The first goal was to evaluate differences in level of CSE from pre- to postintervention between two groups of SMH clinicians. We expected that those who received information, training and supervision on QAI and best practice in SMH would report higher levels of CSE postintervention than those in the W condition. The secondary aim was to evaluate whether clinician reports of postintervention CSE would serve as predictors of quality of SMH practice, as well as knowledge and use of EBP. Given the influence that clinician CSE has been found to have on practice-related variables in previous studies (see Larson & Daniels, 1998), we hypothesized that higher level of CSE would significantly predict higher quality of SMH practice, and knowledge and usage of EBP.

 

Controlling for age, race, years of experience and years with the agency, findings did not confirm the primary hypothesis. No statistically significant differences in clinician reports of CSE from pre- to postintervention were observed between the QAI and W conditions. Regarding the secondary aim, however, clinician postintervention level of CSE significantly predicted quality of practice; knowledge of EBP specific to treating ADHD, DBD, anxiety and depression; total knowledge of EBP; and usage of EBP specific to treating depression. Findings are consistent with previous literature suggesting that CSE levels influence performance in a number of practice-related domains (Larson & Daniels, 1998).

 

Results did not support a significant predictive relation between CSE level and usage of EBP specific to treating ADHD, DBD and anxiety. The failure to find an association may reflect the fact that usage of EBP was evaluated across conditions, as there was limited power to run the analyses by condition. Results from the original study suggested that individuals in the QAI condition were more likely to use established EBP in treatment (see Weist et al., 2009). Thus, as provider characteristics including CSE (Aarons, 2005) are known to be associated with adoption of EBP, examining these associations across conditions may have produced null findings.

 

While current results did support the importance of high CSE regarding practice-related domains, there was no significant difference in level of CSE between those who received information, training and supervision in QAI; use of EBP; and family engagement and empowerment compared to those in the W condition. Findings from the current study contrast with other research that has documented improvements in CSE following targeted interventions. Previous targeted interventions to increase CSE have resulted in positive outcomes when using micro-skills training and mental practice (Munson, Stadulis, & Munson, 1986; Munson, Zoerink, & Stadulis, 1986), role-play and visual imagery (Larson et al., 1999), a prepracticum training course (Johnson, Baker, Kopala, Kiselica, & Thompson, 1989) and practicum experiences (Larson et al., 1993).

 

As a curvilinear relation is reported to exist between CSE and level of training (Larson et al., 1996; Sutton & Fall, 1995), it may be that this sample of postlicensure clinicians had enough previous training and experience that the unique experiences gained through the QAI and W conditions had a minimal impact on overall CSE. Many prior studies utilized students untrained in counseling and interpersonal skills (Munson, Zoerink, & Stadulis, 1986) or beginning practicum students and trainees (Easton, Martin, & Wilson, 2008; Johnson et al., 1989; Larson et al., 1992, 1993, 1999). Regarding the usefulness of a prepracticum course and practicum experiences for level of CSE, significant increases were observed only in beginning practicum students, with no significant changes in advanced students. Additionally, no previous studies have evaluated the success of CSE interventions with postlicensure clinicians.

 

It also is plausible that failure to detect an effect was due to the high preintervention levels of CSE observed across clinicians. At baseline, clinicians in the QAI condition reported CSE levels of roughly 71.9% of maximum potential, whereas those in the W condition reported CSE levels of 71.3% of maximum potential. Previous research has found high levels of CSE among practitioners with comparable amounts of previous experience, with those having 5–10 years of experience reporting mean CSE levels of 4.35 out of five points possible (Melchert et al., 1996). Thus, the average level of CSE may be accounted for by the amount of previous education and training reported by clinicians, and the observed increase of 1.5% at postintervention may be a reflection of the sample composition.
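For readers unfamiliar with this scoring approach, the percent of maximum possible (POMP) metric referenced above (Cohen, Cohen, Aiken, & West, 1999) linearly rescales a raw score to the percentage of the scale's full range; a standard formulation is:

```latex
\text{POMP} = \frac{\text{observed score} - \text{scale minimum}}{\text{scale maximum} - \text{scale minimum}} \times 100
```

For example, on a 1-to-5 scale, a mean item score of 4.35 (as in Melchert et al., 1996) corresponds to a POMP score of (4.35 − 1)/(5 − 1) × 100, or roughly 84% of maximum potential.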

 

Limitations

Due to a small sample size, the power to detect changes in CSE was modest. Efforts to increase power by enlarging the sample meant that the time between reports of pre- and postintervention levels of CSE varied within the sample; some participants completed only a year or a year and a half instead of the full 2 years.

 

A further limitation was reliance on self-reported information from the participating clinicians regarding their level of CSE, quality of practice, and knowledge and usage of EBP. Thus, a presentation bias may have been present in that clinicians may have reported stronger confidence in their own abilities than they felt in reality, or may have inflated responses on their knowledge and usage of EBP.

 

An additional limitation concerns the fact that CSE was not included as an explicit factor in training: increasing CSE was not an explicit goal, and training and supervision were not tailored to make increases in CSE more likely. The relation between supervisory feedback and CSE also may depend on the developmental level and pretraining CSE level of the clinicians (Larson et al., 1999; Munson, Zoerink, & Stadulis, 1986), with untrained individuals reporting the largest increases. Thus, increased performance feedback may or may not have enhanced CSE within this sample.

 

Future Directions

Based on these findings, future work is suggested to evaluate ways in which CSE can be increased among clinicians. As the training procedures utilized in this study failed to change CSE, it is important to determine what facets of CSE, if any, are conducive to change. Although the current study evaluated broad CSE, Bandura (1977) theorized that overall self-efficacy is determined by the efficacy and outcome expectancies an individual has regarding a particular behavior. Efficacy expectancies are individuals’ beliefs regarding their capabilities to successfully perform the requisite behavior. Efficacy expectancies serve mediational functions between individuals and their behavior, such that if efficacy expectancies are high, individuals will engage in the behavior because they believe that they will be able to successfully complete it. Outcome expectancies, on the other hand, involve individuals’ beliefs that a certain behavior will lead to a specific outcome, and mediate the relation between behaviors and outcomes. Therefore, when outcome expectancies are low, individuals will not execute that behavior because they do not believe it will lead to a specified outcome.

 

As with the current study, the majority of the existing studies investigating change in CSE have evaluated broad CSE without breaking the construct down into the two types of expectancies (i.e., efficacy expectancies and outcome expectancies). Larson and Daniels (1998) found that fewer than 15% of studies on CSE examined outcome expectancies, and of the studies that did, only 60% operationalized outcome expectancies appropriately. While clinicians may believe that they can effectively perform a counseling strategy, they may not implement said strategy if they do not believe that it will produce client change. Ways in which these concepts can be evaluated may include asking, for example, for level of confidence in one’s ability to effectively deliver relaxation training, as well as for level of confidence that relaxation training produces client change. Based on the dearth of work in this area, future efforts should involve breaking down CSE and correctly operationalizing efficacy expectancies and outcome expectancies to examine what sorts of influences these expectancies have on overall CSE.

 

Additionally, future efforts to investigate the enhancement of CSE may evaluate the pliability of this construct depending on level of training. Is CSE more stable among experienced clinicians compared to counseling trainees? Should CSE enhancement be emphasized among new clinicians? Or are different methods needed to increase one’s CSE depending on previous experience? This goal may be accomplished by obtaining sizeable, representative samples with beginning, moderate and advanced levels of training, and examining the long-term stability of CSE.

 

Future work should incorporate strategies of mastery, modeling, social persuasion and affective arousal to enhance the CSE of SMH clinicians. Although role-play was utilized in the current study, future interventions could include visual imagery or mental practice of performing counseling skills, discussions of CSE, and more explicit positive supervisory feedback. Furthermore, mastery experiences (i.e., engaging in a counseling session that the counselor interprets as successful) in actual or role-play counseling settings have been found to increase CSE (Barnes, 2004); however, this result is contingent on the trainee’s perception of session success (Daniels & Larson, 2001). Future efforts to enhance CSE could strategically test how to structure practice counseling sessions and format feedback in ways that result in mastery experiences for clinicians. Future investigations also may incorporate modeling strategies into counselor training, possibly within a group setting. Structuring modeling practices in a group rather than an individual format may facilitate a fluid group session, moving from viewing a skill set to practicing with other group members and receiving feedback. This scenario could provide counselors with both vicarious and mastery experiences.

 

The use of verbal persuasion, the third source of efficacy, to enhance CSE also has been evaluated in counseling trainees. Verbal persuasion involves communication of progress in counseling skills, as well as overall strengths and weaknesses (Barnes, 2004). While strength-identifying feedback has been found to increase CSE, identifying skills that need improvement has resulted in a decrease in CSE; thus, it is not recommended that identification of skills needing improvement be utilized as a tactic to develop CSE. Lastly, emotional arousal, otherwise conceptualized as anxiety, is theorized to contribute to level of CSE. In contrast to the aforementioned enhancement mechanisms, increases in counselor anxiety negatively predict counselor CSE (Hiebert, Uhlemann, Marshall, & Lee, 1998). Finally, in addition to clinician self-ratings, future research should investigate the impact of CSE on performance as rated by supervisors, as well as by clients. With growing momentum for SMH across the nation, it is imperative that all factors influencing client outcomes and satisfaction with services be evaluated, including CSE.

 

 

 

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

 

 

References

 

Aarons, G. A. (2005). Measuring provider attitudes toward evidence-based practice: Consideration of organizational context and individual differences. Child and Adolescent Psychiatric Clinics of North America, 14, 255–271. doi:10.1016/j.chc.2004.04.008

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. doi:10.1037/0033-295X.84.2.191

Bandura, A. (1994). Self-efficacy. In V. S. Ramachandran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71–81). New York, NY: Academic Press.

Barnes, K. L. (2004). Applying self-efficacy theory to counselor training and supervision: A comparison of two approaches. Counselor Education and Supervision, 44, 56–69. doi:10.1002/j.1556-6978.2004.tb01860.x

Beidas, R. S., & Kendall, P. C. (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17, 1–30. doi:10.1111/j.1468-2850.2009.01187.x

Beidas, R. S., Mychailyszyn, M. P., Edmunds, J. M., Khanna, M. S., Downey, M. M., & Kendall, P. C. (2012). Training school mental health providers to deliver cognitive-behavioral therapy. School Mental Health, 4, 197–206. doi:10.1007/s12310-012-9047-0

Berger, T. K. (2013). School counselors’ perceptions, practices and preparedness related to issues in mental health (Doctoral dissertation). Retrieved from http://hdl.handle.net/1802/26892

Branch, L. E., & Lichtenberg, J. W. (1987, August). Self-efficacy and career choice. Paper presented at the convention of the American Psychological Association, New York, NY.

Burns, B. J., Costello, E. J., Angold, A., Tweed, D., Stangl, D., Farmer, E. M., & Erkanli, A. (1995). Children’s mental health service use across service sectors. Health Affairs, 14, 147–159. doi:10.1377/hlthaff.14.3.147

Chorpita, B. F., Becker, K. D., & Daleiden, E. L. (2007). Understanding the common elements of evidence-based practice: Misconceptions and clinical examples. Journal of the American Academy of Child and Adolescent Psychiatry, 46, 647–652. doi:10.1097/chi.0b013e318033ff71

Chorpita, B. F., & Daleiden, E. L. (2009). CAMHD biennial report: Effective psychosocial interventions for youth with behavioral and emotional needs. Honolulu, HI: Child and Adolescent Mental Health Division, Hawaii Department of Health.

Cohen, P., Cohen, J., Aiken, L. S., & West, S. G. (1999). The problem of units and the circumstances for POMP. Multivariate Behavioral Research, 34, 315–346. doi:10.1207/S15327906MBR3403_2

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum.

Conwill, W. L. (2003). Consultation and collaboration: An action research model for the full-service school. Consulting Psychology Journal: Practice and Research, 55, 239–248. doi:10.1037/1061-4087.55.4.239

Curry, J. F., & Reinecke, M. A. (2003). Modular therapy for adolescents with major depression. In M. A. Reinecke, F. M. Dattilio, & A. Freeman (Eds.), Cognitive therapy with children and adolescents (2nd ed., pp. 95–127). New York, NY: Guilford.

Daniels, J. A., & Larson, L. M. (2001). The impact of performance feedback on counseling self-efficacy and counselor anxiety. Counselor Education and Supervision, 41, 120–130. doi:10.1002/j.1556-6978.2001.tb01276.x

Dryfoos, J. G. (1993). Schools as places for health, mental health, and social services. Teachers College Record, 94, 540–567.

Easton, C., Martin, W. E., Jr., & Wilson, S. (2008). Emotional intelligence and implications for counseling self-efficacy: Phase II. Counselor Education and Supervision, 47, 218–232. doi:10.1002/j.1556-6978.2008.tb00053.x

Evans, S. W., Glass-Siegel, M., Frank, A., Van Treuren, R., Lever, N. A., & Weist, M. D. (2003). Overcoming the challenges of funding school mental health programs. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.), Handbook of school mental health: Advancing practice and research (pp. 73–86). New York, NY: Kluwer Academic/Plenum.

Evans, S. W., & Weist, M. D. (2004). Implementing empirically supported treatments in the schools: What are we asking? Clinical Child and Family Psychology Review, 7, 263–267. doi:10.1007/s10567-004-6090-0

Farmer, E. M., Burns, B. J., Phillips, S. D., Angold, A., & Costello, E. J. (2003). Pathways into and through mental health services for children and adolescents. Psychiatric Services, 54, 60–66. doi:10.1176/appi.ps.54.1.60

Gibson, S., & Dembo, M. H. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569–582. doi:10.1037/0022-0663.76.4.569

Hiebert, B., Uhlemann, M. R., Marshall, A., & Lee, D. Y. (1998). The relationship between self-talk, anxiety, and counselling skill. Canadian Journal of Counselling and Psychotherapy, 32, 163–171.

Hoagwood, K. E. (2005). Family-based services in children’s mental health: A research review and synthesis. Journal of Child Psychology and Psychiatry, 46, 690–713. doi:10.1111/j.1469-7610.2005.01451.x

Iannelli, R. J. (2000). A structural equation modeling examination of the relationship between counseling self-efficacy, counseling outcome expectations, and counselor performance (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database (9988728).

Johnson, E., Baker, S. B., Kopala, M., Kiselica, M. S., & Thompson, E. C., III (1989). Counseling self-efficacy and counseling competence in prepracticum training. Counselor Education and Supervision, 28, 205–218. doi:10.1002/j.1556-6978.1989.tb01109.x

Judge, T. A., Jackson, C. L., Shaw, J. C., Scott, B. A., & Rich, B. L. (2007). Self-efficacy and work-related performance: The integral role of individual differences. Journal of Applied Psychology, 92, 107–127. doi:10.1037/0021-9010.92.1.107

Kataoka, S. H., Zhang, L., & Wells, K. B. (2002). Unmet need for mental health care among U.S. children: Variation by ethnicity and insurance status. American Journal of Psychiatry, 159, 1548–1555. doi:10.1176/appi.ajp.159.9.1548

Kozina, K., Grabovari, N., De Stefano, J., & Drapeau, M. (2010). Measuring changes in counselor self-efficacy: Further validation and implications for training and supervision. The Clinical Supervisor, 29, 117–127. doi:10.1080/07325223.2010.517483

Kronick, R. F. (Ed.). (2000). Human services and the full service school: The need for collaboration. Springfield, IL: Thomas.

Langley, A. K., Nadeem, E., Kataoka, S. H., Stein, B. D., & Jaycox, L. H. (2010). Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health, 2, 105–113. doi:10.1007/s12310-010-9038-1

Larson, L. M., Cardwell, T. R., & Majors, M. S. (1996, August). Counselor burnout investigated in the context of social cognitive theory. Paper presented at the meeting of the American Psychological Association, Toronto, Canada.

Larson, L. M., Clark, M. P., Wesley, L. H., Koraleski, S. F., Daniels, J. A., & Smith, P. L. (1999). Video versus role plays to increase counseling self-efficacy in prepractica trainees. Counselor Education and Supervision, 38, 237–248. doi:10.1002/j.1556-6978.1999.tb00574.x

Larson, L. M., & Daniels, J. A. (1998). Review of the counseling self-efficacy literature. The Counseling Psychologist, 26, 179–218. doi:10.1177/0011000098262001

Larson, L. M., Daniels, J. A., Koraleski, S. F., Peterson, M. M., Henderson, L. A., Kwan, K. L., & Wennstedt, L. W. (1993, June). Describing changes in counseling self-efficacy during practicum. Poster presented at the meeting of the American Association of Applied and Preventive Psychology, Chicago, IL.

Larson, L. M., Suzuki, L. A., Gillespie, K. N., Potenza, M. T., Bechtel, M. A., & Toulouse, A. L. (1992). Development and validation of the counseling self-estimate inventory. Journal of Counseling Psychology, 39, 105–120. doi:10.1037/0022-0167.39.1.105

Leaf, P. J., Schultz, D., Kiser, L. J., & Pruitt, D. B. (2003). School mental health in systems of care. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.), Handbook of school mental health programs: Advancing practice and research (pp. 239–256). New York, NY: Kluwer Academic/Plenum.

Lewis, M. F., Truscott, S. D., & Volker, M. A. (2008). Demographics and professional practices of school psychologists: A comparison of NASP members and non-NASP school psychologists by telephone survey. Psychology in the Schools, 45, 467–482. doi:10.1002/pits.20317

Melchert, T. P., Hays, V. L., Wiljanen, L. M., & Kolocek, A. K. (1996). Testing models of counselor development with a measure of counseling self-efficacy. Journal of Counseling & Development, 74, 640–644. doi:10.1002/j.1556-6676.1996.tb02304.x

Mellin, E. A. (2009). Responding to the crisis in children’s mental health: Potential roles for the counseling profession. Journal of Counseling & Development, 87, 501–506. doi:10.1002/j.1556-6678.2009.tb00136.x

Munson, W. W., Stadulis, R. E., & Munson, D. G. (1986). Enhancing competence and self-efficacy of potential therapeutic recreators in decision-making counseling. Therapeutic Recreation Journal, 20(4), 85–93.

Munson, W. W., Zoerink, D. A., & Stadulis, R. E. (1986). Training potential therapeutic recreators for self-efficacy and competence in interpersonal skills. Therapeutic Recreation Journal, 20, 53–62.

Nabors, L. A., Reynolds, M. W., & Weist, M. D. (2000). Qualitative evaluation of a high school mental health program. Journal of Youth and Adolescence, 29, 1–13.

Powers, J. D., Edwards, J. D., Blackman, K. F., & Wegmann, K. M. (2013). Key elements of a successful multi-system collaboration for school-based mental health: In-depth interviews with district and agency administrators. The Urban Review, 45, 651–670. doi:10.1007/s11256-013-0239-4

President’s New Freedom Commission on Mental Health. (2003). Achieving the promise: Transforming mental health care in America. Final report (SMA Publication No. 03-3832). Rockville, MD: Author.

Ramo, D. E., Prochaska, J. J., & Myers, M. G. (2010). Intentions to quit smoking among youth in substance abuse treatment. Drug and Alcohol Dependence, 106, 48–51. doi:10.1016/j.drugalcdep.2009.07.004

Rones, M., & Hoagwood, K. (2000). School-based mental health services: A research review. Clinical Child and Family Psychology Review, 3, 223–241. doi:10.1023/A:1026425104386

Sharpe, P. A., Granner, M. L., Hutto, B. E., Wilcox, S., Peck, L., & Addy, C. L. (2008). Correlates of physical activity among African American and white women. American Journal of Health Behavior, 32, 701–713. doi:10.5555/ajhb.2008.32.6.701

Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2003). Training in evidence-based interventions (EBIs): What are school psychology programs teaching? Journal of School Psychology, 41, 467–483. doi:10.1016/j.jsp.2003.07.002

Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124, 240–261. doi:10.1037/0033-2909.124.2.240

Stephan, S. H., Weist, M., Kataoka, S., Adelsheim, S., & Mills, C. (2007). Transformation of children’s mental health services: The role of school mental health. Psychiatric Services, 58, 1330–1338. doi:10.1176/appi.ps.58.10.1330

Stephan, S., Westin, A., Lever, N., Medoff, D., Youngstrom, E., & Weist, M. (2012). Do school-based clinicians’ knowledge and use of common elements correlate with better treatment quality? School Mental Health, 4, 170–180. doi:10.1007/s12310-012-9079-8

Suldo, S. M., Friedrich, A., & Michalowski, J. (2010). Personal and systems-level factors that limit and facilitate school psychologists’ involvement in school-based mental health services. Psychology in the Schools, 47, 354–373. doi:10.1002/pits.20475

Sutton, J. M., Jr., & Fall, M. (1995). The relationship of school climate factors to counselor self-efficacy. Journal of Counseling & Development, 73, 331–336. doi:10.1002/j.1556-6676.1995.tb01759.x

Tang, M., Addison, K. D., LaSure-Bryant, D., Norman, R., O’Connell, W., & Stewart-Sicking, J. A. (2004). Factors that influence self-efficacy of counseling students: An exploratory study. Counselor Education and Supervision, 44, 70–80. doi:10.1002/j.1556-6978.2004.tb01861.x

Urbani, S., Smith, M. R., Maddux, C. D., Smaby, M. H., Torres-Rivera, E., & Crews, J. (2002). Skills-based training and counseling self-efficacy. Counselor Education and Supervision, 42, 92–106. doi:10.1002/j.1556-6978.2002.tb01802.x

Walrath, C. M., Bruns, E. J., Anderson, K. L., Glass-Siegal, M., & Weist, M. D. (2004). Understanding expanded school mental health services in Baltimore city. Behavior Modification, 28, 472–490. doi:10.1177/0145445503259501

Watson, J. C. (2012). Online learning and the development of counseling self-efficacy beliefs. The Professional Counselor, 2, 143–151.

Weist, M. D., Ambrose, M. G., & Lewis, C. P. (2006). Expanded school mental health: A collaborative community-school example. Children & Schools, 28, 45–50. doi:10.1093/cs/28.1.45

Weist, M. D., Evans, S. W., & Lever, N. A. (2003). Handbook of school mental health: Advancing practice and research. New York, NY: Kluwer Academic/Plenum.

Weist, M. D., Lever, N. A., Stephan, S. H., Anthony, L. G., Moore, E. A., & Harrison, B. R. (2006, February). School mental health quality assessment and improvement: Preliminary findings from an experimental study. Paper presented at the meeting of A System of Care for Children’s Mental Health: Expanding the Research Base, Tampa, FL.

Weist, M. D., Sander, M. A., Walrath, C., Link, B., Nabors, L., Adelsheim, S., . . . Carrillo, K. (2005). Developing principles for best practice in expanded school mental health. Journal of Youth and Adolescence, 34, 7–13. doi:10.1007/s10964-005-1331-1

Weist, M., Lever, N., Stephan, S., Youngstrom, E., Moore, E., Harrison, B., . . . Stiegler, K. (2009). Formative evaluation of a framework for high quality, evidence-based services in school mental health. School Mental Health, 1, 196–211. doi:10.1007/s12310-009-9018-5

Weist, M. D., Youngstrom, E. A., Stephan, S., Lever, N., Fowler, J., Taylor, L., . . . Hoagwood, K. (2014). Challenges and ideas from a research program on high-quality, evidence-based practice in school mental health. Journal of Clinical Child & Adolescent Psychology, 43, 244–255. doi:10.1080/15374416.2013.833097

Zeldin, A. L., Britner, S. L., & Pajares, F. (2008). A comparative study of the self-efficacy beliefs of successful men and women in mathematics, science, and technology careers. Journal of Research in Science Teaching, 45, 1036–1058. doi:10.1002/tea.20195

 

Bryn E. Schiele is a doctoral student at the University of South Carolina. Mark D. Weist is a professor at the University of South Carolina. Eric A. Youngstrom is a professor at the University of North Carolina at Chapel Hill. Sharon H. Stephan and Nancy A. Lever are associate professors at the University of Maryland. Correspondence can be addressed to Bryn E. Schiele, the Department of Psychology, Barnwell College, Columbia, SC 29208, schiele@email.sc.edu.

 

Becoming a Supervisor: Qualitative Findings on Self-Efficacy Beliefs of Doctoral Student Supervisors-in-Training

Melodie H. Frick, Harriet L. Glosoff

Counselor education doctoral students are influenced by many factors as they train to become supervisors. One of these factors, self-efficacy beliefs, plays an important role in supervisor development. In this phenomenological, qualitative research, 16 counselor education doctoral students participated in focus groups and discussed their experiences and perceptions of self-efficacy as supervisors. Data analyses revealed four themes associated with self-efficacy beliefs: ambivalence in the middle tier of supervision, influential people, receiving performance feedback, and conducting evaluations. Recommendations for counselor education and supervision, as well as future research, are provided.

Keywords: supervision, doctoral students, counselor education, self-efficacy, phenomenological, focus groups

Counselor education programs accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) require doctoral students to learn supervision theories and practices (CACREP, 2009). Professional literature highlights information on supervision theories (e.g., Bernard & Goodyear, 2009), supervising counselors-in-training (e.g., Woodside, Oberman, Cole, & Carruth, 2007), and effective supervision interventions and styles (e.g., Fernando & Hulse-Killacky, 2005) that assist with supervisor training and development. Until recently, however, few researchers have studied the experiences of counselor education doctoral students as they prepare to become supervisors (Hughes & Kleist, 2005; Limberg et al., 2013; Protivnak & Foss, 2011) or “the transition from supervisee to supervisor” (Rapisarda, Desmond, & Nelson, 2011, p. 121). Specifically, an exploration of factors associated with the self-efficacy beliefs of counselor education doctoral student supervisors is warranted to expand this topic and enhance supervisor development training in counselor education.

Bernard and Goodyear (2009) described supervisor development as a process shaped by changes in self-perceptions and roles, much like counselors-in-training experience in their developmental stages. Researchers have examined factors that may influence supervisors’ development (e.g., experiential learning and the influence of feedback). For example, Nelson, Oliver, and Capps (2006) explored the training experiences of 21 doctoral students in two cohorts of the same counseling program and reported that experiential learning, the use of role-plays, and receiving feedback from both professors and peers were equally as helpful in learning supervision skills as the actual practice of supervising counselors-in-training. Conversely, a supervisor’s development may be negatively influenced by unclear expectations of the supervision process or dual relationships with supervisees, which may lead to role ambiguity (Bernard & Goodyear, 2009). For example, Nilsson and Duan (2007) examined the relationship between role ambiguity and self-efficacy with 69 psychology doctoral student supervisors and found that when participants received clear supervision expectations, they reported higher rates of self-efficacy.

Self-efficacy is one of the self-regulation functions in Bandura’s social cognitive theory (Bandura, 1986) and is a factor in Larson’s (1998) social cognitive model of counselor training (SCMCT). Self-efficacy, the differentiated beliefs held by individuals about their capabilities to perform (Bandura, 2006), plays an important role in counselor and supervisor development (Barnes, 2004; Cashwell & Dooley, 2001) and is influenced by many factors (Schunk, 2004). Along with the counselor’s training environment, self-efficacy beliefs may influence a counselor’s learning process and resulting counseling performance (Larson, 1998). Daniels and Larson (2001) conducted a quantitative study with 45 counseling graduate students and found that performance feedback influenced counselors’ self-efficacy beliefs; self-efficacy increased with positive feedback and decreased with negative feedback. Steward (1998), however, identified missing components in the SCMCT, such as the role and level of self-efficacy of the supervisor, the possible influence of a faculty supervisor, and doctoral students giving and receiving feedback to supervisees and members of their cohort. For example, results of both quantitative studies (e.g., Hollingsworth & Fassinger, 2002) and qualitative studies (e.g., Majcher & Daniluk, 2009; Nelson et al., 2006) indicate the importance of mentoring experiences and relationships with faculty supervisors to the development of doctoral students and self-efficacy in their supervisory skills.

During their supervision training, doctoral students are in a unique position of supervising counselors-in-training while also being supervised by faculty. For the purpose of this study, the term middle tier will be used to describe this position. This term is not often used in the counseling literature, but may be compared to the position of middle managers in the business field—people who are subordinate to upper managers while having the responsibility of managing subordinates (Agnes, 2003). Similar to middle managers, doctoral student supervisors tend to have increased responsibility for supervising future counselors, albeit with limited authority in supervisory decisions, and may have experiences similar to middle managers in other disciplines. For example, performance-related feedback as perceived by middle managers appears to influence their role satisfaction and self-efficacy (Reynolds, 2006). In Reynolds’s (2006) study, 353 participants who represented four levels of management in a company in the United States reported that receiving positive feedback from supervisors had an affirming or encouraging effect on their self-efficacy, and that their self-efficacy was reduced after they received negative supervisory feedback. Translated to the field of counselor supervision, these findings suggest that doctoral students who participate in tiered supervision and receive positive performance feedback may have higher self-efficacy.

Findings to date illuminate factors that influence self-efficacy beliefs, such as performance feedback, clear supervisor expectations and mentoring relations. There is a need, however, to examine what other factors enhance or detract from the self-efficacy beliefs of counselor education doctoral student supervisors to ensure effective supervisor development and training. The purpose of this study, therefore, was to build on previous research and further examine the experiences of doctoral students as they train to become supervisors in a tiered supervision model. The overarching research questions that guided this study included: (a) What are the experiences of counselor education doctoral students who work within a tiered supervision training model as they train to become supervisors? and (b) What experiences influenced their sense of self-efficacy as supervisors?

 

Method

 

Design

A phenomenological research approach was selected to explore how counselor education doctoral students experience and make meaning of their reality (Merriam, 2009), and to provide richer descriptions of the experiences of doctoral student supervisors-in-training, which a quantitative study may not afford. A qualitative design using a constructivist-interpretivist method provided the opportunity to interact with doctoral students via focus groups and follow-up questionnaires to explore their self-constructed realities as counselor supervisors-in-training, and the meaning they placed on their experiences as they supervised master’s-level students while being supervised by faculty supervisors. Focus groups were chosen as part of the design, as they are often used in qualitative research (Kress & Shoffner, 2007; Limberg et al., 2013), and multiple-case sampling increases confidence and robustness in findings (Miles & Huberman, 1994).

 

Participants

Sixteen doctoral students from three CACREP-accredited counselor education programs in the southeastern United States volunteered to participate in this study. These programs were selected due to similarity in supervision training among participants (e.g., all were CACREP-accredited, required students to take at least one supervision course, utilized a full-time cohort design), and were in close proximity to the principal investigator. None of the participants attended the first author’s university or had any relationships with the authors. Criterion sampling was used to select participants who met the criteria of providing supervision to master’s-level counselors-in-training and receiving supervision by faculty supervisors at the time of their participation. The ages of the participants ranged from 27 to 61 years, with a mean age of 36 years (SD = 1.56). Fourteen of the participants were women and two were men; two participants described their race as African-American (12.5%), one participant as Asian-American (6.25%), 12 participants as Caucasian (75%), and one participant as “more than one ethnicity” (6.25%). Seven of the 16 participants reported having 4 months to 12 years of work experience as counselor supervisors (M = 2.5 years, SD = 3.9 years) before beginning their doctoral studies. At the time of this study, all participants had completed a supervision course as part of their doctoral program, were supervising two to six master’s students in the same program (M = 4, SD = 1.2), and received weekly supervision with faculty supervisors in their respective programs.

 

Researcher Positionality

In presenting results of phenomenological research, it is critical to discuss the authors' characteristics as researchers, as such characteristics influence data collection and analysis. The authors have experience as counselors, counselor educators, and clinical supervisors. Both authors share an interest in understanding how doctoral students move from the role of student to the role of supervisor, especially when providing supervision to master's students who may experience critical incidents (with their clients or in their own development). The first author became interested in this topic when she saw the different emotional reactions of her cohort when faced with the gatekeeping process, whether the reactions were based on personality, prior supervision experience, or stressors from inside and outside of the counselor education program. She wondered how doctoral students in other programs experienced such situations, what structures other programs used to address critical incidents involving remediation plans, and whether supervision training could be improved. It was critical to account for personal and professional biases throughout the research process to minimize their influence on the collection and interpretation of data. Bracketing (Moustakas, 1994), therefore, was an important step during analysis. The first author accomplished this by meeting with her dissertation committee and with the second author throughout the study, as well as by using peer reviewers to assess researcher bias in the design of the study, the research questions, and theme development.

 

Quality and Trustworthiness

To strengthen the rigor of this study, the authors addressed credibility, dependability, transferability, and confirmability (Merriam, 2009). One way to reinforce credibility is to have prolonged and persistent contact with participants (Hunt, 2011). The first author contacted participants before each focus group to convey the nature, scope, and reasons for the study. She facilitated 90-minute focus group discussions and allowed participants to add to or change the summary provided at the end of each focus group. Further, she gathered information from each participant through a follow-up questionnaire and gave participants the opportunity to contact her through e-mail with additional questions or thoughts.

The first author addressed dependability by keeping an ongoing reflexive journal and analytic memos, a detailed account throughout the study of how data were collected and analyzed and how decisions were made (Merriam, 2009). She documented how data were reduced and how themes and displays were constructed, and the second author conducted an audit trail on items such as transcripts, analytic memos, reflection notes, and process notes connecting findings to existing literature.

Through the use of rich, thick description of the information provided by participants, the authors made efforts to increase transferability. In addition, they offered a clear account of each stage of the process as well as the demographics of the participants (Hunt, 2011) to promote transferability.

Finally, the first author strengthened confirmability by examining her role as a research instrument. Colleagues selected as peer reviewers (Kline, 2008), along with the first author's dissertation committee members, had access to the audit trail and discussed and questioned the authors' decisions, further increasing the integrity of the design. Two doctoral students who had provided supervision and had completed courses in qualitative research, but who had no connection to the research study, volunteered to serve as peer reviewers. They reviewed the focus group protocol for researcher bias; read the focus group transcripts (with pseudonyms inserted), the questionnaires, and the emergent themes to confirm or contest the interpretation of the data; reviewed the quotes chosen to support themes for richness of description; and provided feedback on the textural-structural descriptions as they were being developed. Their recommendations, such as avoiding emotional reactions to participants' comments, guided the authors in data collection and analysis.

 

Data Collection

Upon receiving approval from the university’s Institutional Review Board, the first author contacted the directors of three CACREP-accredited counselor education programs and discussed the purpose of the study, participants’ rights, and logistical needs. Program directors disseminated an e-mail about this study to their doctoral students, instructing volunteer participants to contact the first author about participating in the focus groups.

Within a two-week period, she conducted three focus groups, one at each counselor education program site. Each focus group included five to six participants and lasted approximately 90 minutes. She employed a semi-structured interview protocol consisting of 17 questions (see Appendix). The questions were based on an extensive literature review of counselor and supervisor self-efficacy studies (e.g., Bandura, 2006; Cashwell & Dooley, 2001; Corrigan & Schmidt, 1983; Fernando & Hulse-Killacky, 2005; Gore, 2006; Israelashvili & Socher, 2007; Steward, 1998; Tang et al., 2004). The initial questions were open and general, so as not to lead or bias the participants in their responses. As the focus groups continued, the first author explored more specific information about participants' experiences as doctoral student supervisors, focusing questions around their responses (Kline, 2008). The semi-structured format ensured that she asked specific questions and addressed predetermined topics related to the focus of the study, while also allowing freedom to follow up on relevant information provided by participants during the focus groups.

Approximately six to eight weeks after each focus group, participants received a follow-up questionnaire consisting of four questions: (a) What factors (inside and outside of the program) influence your perceptions of your abilities as a supervisor? (b) How do you feel about working in the middle tier of supervision (i.e., working between a faculty supervisor and the counselors-in-training that you supervise)? (c) What, if anything, could help you feel more competent as a supervisor? (d) How can your supervision training be improved? The purpose of the follow-up questions was to explore participants’ responses after they gained more experiences as supervisors and to provide a means for them to respond to questions about their supervisory experiences privately, without concern of peer judgment.

 

Data Analysis

 

Data analysis began during the transcription process, with analysis occurring simultaneously with data collection. The first author transcribed, verbatim, the recording of each focus group and changed participant names to protect their anonymity. Data analysis was then conducted in three stages: first, data were analyzed to identify significant issues within each focus group; second, data were cross-analyzed to identify common themes across all three focus groups; and third, follow-up questionnaires were analyzed to corroborate established themes and to identify additional or different themes.

During data analysis, the first author employed a Miles and Huberman (1994) approach, using initial codes drawn from focus-group question themes. Inductive analysis occurred through immersion in the data by reading and rereading focus group transcripts. During this immersion process, she began to identify core ideas and differentiate meanings and emergent themes for each focus group. She accomplished data reduction by identifying themes in participants' answers to the interview protocol and focus group discussions until saturation was reached, and displayed narrative data in a figure to organize and compare developed themes. Finally, she used deductive verification of findings against previous research literature. During within-group analysis, she identified themes if more than half of a focus group (i.e., more than three participants) reported similar experiences, feelings, or beliefs. Likewise, in across-group analyses, she confirmed themes if statements made by more than half of the participants (i.e., more than eight) matched. In three cases the peer reviewers and the first author differed on theme development; in those cases, she made changes guided by the peer reviewers' suggestions. In addition, she sent the final list of themes related to the research questions to the second author and other members of the dissertation committee for purposes of confirmability.
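The majority rules described above (more than three endorsers within a group, more than eight across the full sample) amount to simple counting thresholds. A minimal sketch of that decision rule follows; the participant identifiers, group sizes, and endorsements are hypothetical illustrations, not data from the study:

```python
# Illustrative sketch of the majority-threshold rule described above.
# Participant IDs and theme endorsements are hypothetical examples.

# Focus groups mapped to the (hypothetical) participants who endorsed a theme
endorsements = {
    "group_1": {"p01", "p02", "p03", "p04"},
    "group_2": {"p07", "p08"},
    "group_3": {"p12", "p13", "p14", "p15"},
}

def within_group_theme(endorsers, threshold=3):
    """A theme is identified within a group when more than
    `threshold` participants report similar experiences."""
    return len(endorsers) > threshold

def across_group_theme(groups, total_threshold=8):
    """A theme is confirmed across groups when more than
    `total_threshold` participants' statements match."""
    return sum(len(e) for e in groups.values()) > total_threshold

within = {g: within_group_theme(e) for g, e in endorsements.items()}
confirmed = across_group_theme(endorsements)

print(within)     # group_1 and group_3 clear the within-group bar
print(confirmed)  # 4 + 2 + 4 = 10 endorsers, which exceeds 8
```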

 

Results

 

Results of this phenomenological study revealed several themes associated with doctoral students' perceptions of self-efficacy as supervisors (see Figure 1). Cross-group analyses are presented with the participant quotes most relevant to each theme. Four themes emerged with considerable overlap across groups: ambivalence in the middle tier of supervision, influential people, receiving feedback, and conducting evaluations.

 


Figure 1. Emergent themes of doctoral student supervisors' self-efficacy beliefs. Factors identified by doctoral students as affecting their self-efficacy as supervisors are represented with directional, boldface arrows from each theme toward supervisor self-efficacy; sub-themes within each group appear below the themes, connected with non-directional lines.

 

Ambivalence in the Middle Tier of Supervision

All participants noted how working in the middle tier of supervision raised issues about their roles and their perceptions of their capabilities as supervisors. All 16 participants reported feeling ambivalent about working in the middle tier, especially in relation to their role as supervisors and to dealing with critical incidents with supervisees involving the need for remediation. Representative quotations from one or two participants illustrate the emergent sub-themes of role uncertainty and critical incidents/remediation.

 

Role Uncertainty. Participants raised the issue of role uncertainty in all three focus groups. For example, one participant described how it felt to be in the middle tier by stating the following:

I think that’s exactly how it feels [to be in the middle] sometimes….not really knowing how much you know, what does my voice really mean? How much of a say do we have if we have big concerns? And is what I recognize really a big concern? So I think kind of knowing that we have this piece of responsibility but then not really knowing how much authority or how much say-so we have in things, or even do I have the knowledge and experience to have much say-so?

Further, another participant expressed uncertainty regarding her middle-tier supervisory role as follows:

[I feel a] lack of power, not having real and true authority over what is happening or if something does happen, being able to make those concrete decisions…Where do I really fit in here? What am I really able to do with this supervisee?…kind of a little middle child, you know really not knowing where your identity really and truly is.  You’re trying to figure out who you really are.

Participants also indicated difficulty discerning their role when supervising counselors-in-training who were from different specialty areas such as college counseling, mental health counseling, and school counseling. All participants stated that they had not had any specific counseling or supervision training in different tracks, which was bothersome for nine participants who supervised students in specialties other than their own. For example, one participant stated the following:

I’m a mental health counselor and worked in the community and I have two school counselor interns, and so it was one of my very first questions was like, what do I do with these people? ’Cause I’m not aware of the differences and what I should be guiding them on anything.

Another participant noted how having more information on the different counseling tracks (e.g., mental health, school, college) would be helpful:

We’re going to be counselor educators. We may find ourselves having to supervise people in various tracks and I could see how it would be helpful for us to all have a little bit more information on a variety of tracks so that we could know what to offer, or how things are a little bit different.

Working in the middle tier of supervision appeared to be vexing for focus group participants. They expressed feelings of uncertainty, especially in dealing with critical incidents or remediation of supervisees. In addition to defining their roles as supervisors in the middle tier, another sub-theme emerged in which participants identified how they wanted to have a better understanding of how remediation plans work and have the opportunity to collaborate with faculty supervisors in addressing critical incidents with supervisees.

 

Critical Incidents/Remediation. Part of the focus group discussion centered on what critical incidents participants had with their supervisees and how comfortable they were, or would be, in implementing remediation plans with their supervisees. All participants expressed concerns about their roles as supervisors when remediation plans were required for master’s students in their respective programs and were uncertain of how the remediation process worked in their programs. Thirteen of the 16 participants expressed a desire to be a part of the remediation process of their supervisees in collaboration with faculty supervisors. They discussed seeing this as an important way to learn from the process, assuming that as future supervisors and counselor educators they will need to be the ones to implement such remediation plans. For example, one participant explained the following:

If we are in the position to provide supervision and we’re doing this to enhance our professional development so in the hopes that one day we’re going to be in the position of counselor educators, let’s say faculty supervisors, my concern with that is how are we going to know what to do unless we are involved [in the remediation process] now? And so I feel like that should be something that we’re provided that opportunity to do it.

Another participant indicated that she felt not being part of the remediation process took away the doctoral student supervisors’ credibility:

I don’t have my license yet, and I’m not sure how that plays into when there is an issue with a supervisee, but I know when there is an issue, there is something we have to do if you have a supervisee who is not performing as well, then that’s kind of taken out of your hands and given to a faculty. So they’re like, ‘Yeah you are capable of providing supervision,’ but when there’s an issue it seems like you’re no longer capable.

Another participant noted wanting “to see us do more of the cases where we need to do remediation” in order to be better prepared in identifying critical incidents, thus feeling more capable in the role as supervisor. Discussion on the middle tier proved to be a topic participants both related to and had concerns about. In addition to talking about critical incidents and the remediation process, another emergent theme included people within the participants’ training programs who were influential to their self-efficacy beliefs as supervisors.

 

Influential People

When asked about influences from inside and outside of their training programs, all participants identified people and other factors (e.g., previous work experience, support of significant others, conferences, spiritual meditation, supervision literature) as affecting their perceived abilities as supervisors. The factors most often identified, however, each by more than half of the participants, were the influence of supervisors and supervisees in their training programs.

 

Supervisors. All participants indicated that interactions with current and previous supervisors influenced their self-efficacy as supervisors. Ten participants reported that supervisors' modeling of supervision style and techniques was influential. For example, in regard to watching supervision tapes of the faculty supervisors, one participant stated that it has “been helpful for me to see the stance that they [faculty supervisors] take and the model that they use” when developing her own supervision skills. Seven participants also indicated that having the space to grow as supervisors was a positive influence on their self-efficacy. One participant explained as follows:

I know people at other universities and it’s like boot camp, they [faculty supervisors] break them down and build them up in their own image like they’re gods. And I don’t feel that here. I feel like I’m able to be who I am and they’re supportive and helping me develop who I am.

In addition to the information provided during the focus groups, 11 focus group participants reiterated on their follow-up questionnaires that faculty supervisors had a positive influence on the development of their self-efficacy. For example, for one participant, “a lot of support from faculty supervisors in terms of their accessibility and willingness to answer questions” was a factor in strengthening her perception of her abilities as a counselor supervisor. Participants also noted the importance of working with their supervisees as beneficial and influential to their perceptions of self-efficacy as supervisors.

 

Supervisees. All participants in the focus groups discussed supervising counselors-in-training as having both direct and vicarious influences on their self-efficacy. One participant stated that having the direct experience of supervising counselors-in-training at different levels of training (e.g., pre-practicum, practicum, internship) was something that “really helped me to develop my ability as a supervisor.” In addition, one participant described a supervision session that influenced him as a supervisor: “When there are those ‘aha’ moments that either you both experience or they experience. That usually feels pretty good. So that’s when I feel the most competent, I think as a supervisor.” Further, another participant described a time when she felt competent as a supervisor: “When [the supervisees] reflect that they have taken what we’ve talked about and actually tried to implement it or it’s influenced their work, that’s when I have felt closest to competence.” In addition to working relationships with supervisors and supervisees, receiving feedback emerged as a theme influential to the growth of the doctoral student supervisors.

 

Receiving Feedback

Of all the emergent themes, performance feedback showed the most overlap across focus groups. The authors asked participants how they felt about receiving feedback on their supervisory skills. Sub-themes emerged as participants identified feedback from their supervisors, supervisees, and peers as shaping their self-efficacy beliefs as supervisors.

 

Supervisors. Fifteen participants discussed the process of receiving performance feedback from faculty as an important factor in their self-efficacy. Overall, participants reported that constructive feedback was critical to their learning, albeit with mixed reactions. One participant noted that “at the time it feels kind of crappy, but you learn something from it and you’re a better supervisor.” Some participants indicated that they valued their supervisors' feedback and preferred specific feedback over vague feedback. For example, as one participant explained, “I kind of just hang on her every word….it is important. I anticipate and look forward to that and am even somewhat disappointed if she kind of dances around an issue.” Across all participants, constructive feedback was the most preferred. In addition to the impact of receiving feedback from supervisors, participants commented on being influenced by the feedback they received from their supervisees.

 

Supervisees. Thirteen focus group participants reported that receiving performance evaluations from supervisees affected their sense of self-efficacy as supervisors, and this feedback appeared beneficial to them. Participants indicated that they were more influenced by specific than by general feedback, and they preferred receiving written feedback from their supervisees over having supervisees subjectively rate their performance with a number. One participant commented that “it’s more helpful for me when [supervisees] include written feedback versus just doing the number [rating]…something that’s more constructive.” Further, a participant described how receiving constructive feedback from supervisees influenced his self-efficacy as a supervisor:

I’d say it affects me a little bit. I’m thinking of some evaluations that I have received and some of them make me feel like I have that self-efficacy that I can do this. And then the other side, there have been some constructive comments as well, and some of those I think do influence me and help me develop.

As with feedback received from supervisors and supervisees, participants reiterated their preference for clear and constructive feedback. Focus group participants also described feedback from their peers as influential in the development of their supervision skills.

 

Peers. Eleven participants shared that feedback received from peers was influential in shaping their perceptions of their skills and how they conducted supervision sessions. Participants described viewing videotapes of supervision sessions in group supervision and receiving peer feedback on their taped supervision sessions as positive influences. For example, one participant stated that “there was one point in one of our classes when I’d shown a tape and I got some very… specific positive feedback [from peers] that made me feel really good, like made me feel more competent.” Another participant noted how much peers had helped her increase her comfort level in evaluating her supervisees: “I had a huge problem with evaluation when we started out….in supervision, my group really worked on that issue with me and I feel like I’m in a much better place.”

Performance feedback from faculty supervisors, supervisees, and peers was a common theme in all three focus groups and was instrumental in the development of participants' supervisory styles and self-efficacy as supervisors. Constructive, specific feedback appeared to influence participants' self-efficacy more positively than vague feedback or subjective rating scales. In addition to receiving performance feedback, another theme emerged as participants identified issues with providing performance evaluations to supervisees.

 

Conducting Evaluations

Participants viewed evaluating supervisees with mixed emotions and believed that this process affected their self-efficacy beliefs as supervisors. Thirteen participants reported having difficulty providing supervisees with evaluative feedback. For example, one participant stated the following:

I had a huge problem with evaluation when we started out. It’s something I don’t like. I feel like I’m judging someone….And after, I guess, my fifth semester….I don’t feel like I’m judging them so much as it is a necessity of what we have to do, and as a gatekeeper we have to do this. And I see it more as a way of helping them grow now.

Conversely, one participant, who had experience as a supervisor before starting the doctoral counselor education program, stated, “I didn’t really have too much discomfort with evaluating supervisees because of the fact that I was a previous supervisor before I got into this program.” Other participants, who either had previous supervisory experience or had been in the program longer, confirmed this sentiment: with more experience, the anxiety-provoking feelings subsided.

All focus group participants, however, reported a lack of adequate instruction on how to conduct evaluations of supervisee performance. For example, participants indicated a lack of training on evaluating supervisees’ tapes of counseling sessions and in providing formal summative evaluations. One participant addressed how receiving more specific training in evaluating supervisees would have helped her feel more competent as a supervisor:

I felt like I had different experiences with different supervisors of how supervision was given, but I still felt like I didn’t know how to give the feedback or what all my options were, it would have just helped my confidence… to get that sort of encouragement that I’m on the right track or, so maybe more modeling specifically of how to do an evaluation and how to do a tape review.

All focus group participants raised the issue of using Likert-type questions as part of the evaluation process, specifically the subjective interpretation of the scales in relation to supervisee performance and the inconsistent ways supervisors used them. For example, a participant stated, “I wish there had been a little bit more concrete training in how to do an evaluation.” A second participant expanded on this notion:

I would say about that scale it’s not only subjective but then our students, I think, talk to each other and then we’ve all evaluated them sometimes using the same form and given them a different number ’cause we interpret it differently…. It seems like another thing that sets us up for this weird ‘in the middle’ relationship because we’re not faculty.

Discussions about providing performance evaluations were among the most vibrant parts of the focus groups. It appears that the support of influential people (e.g., supervisors and supervisees) and feedback from supervisors, supervisees, and peers were helpful, but that more instruction on conducting evaluations and greater clarity about role identity and expectations would increase participants' sense of themselves as supervisors in the middle tier of supervision.

 

Discussion

 

The purpose of this study was to explore what counselor education doctoral students experienced while working in the middle tier of supervision and how their experiences related to their sense of self-efficacy as beginning supervisors. Data analysis revealed alignment with previous research showing that the self-efficacy of an individual or group is influenced by extrinsic and intrinsic factors, direct and vicarious experiences, incentives, performance achievements, and verbal persuasion (Bandura, 1986), and that a person's self-efficacy may increase from four experiential sources: mastery, modeling, social persuasion, and affective arousal (Larson, 1998). For example, participants identified the direct experience of supervising counselors-in-training (mastery) as “shaping,” and described learning vicariously from others in supervision classes. Participants also noted the positive influence of observing faculty supervision sessions (modeling) and of receiving constructive feedback from supervisors, supervisees, and peers (verbal persuasion). In addition, participants described competent moments with their supervisees as empowering performance achievements, especially when they observed growth in their supervisees resulting from exchanges in supervision sessions. Further, participants indicated that social persuasion, in the form of peer support, and the incentive of future careers as counselor supervisors and counselor educators influenced their learning experiences. Finally, participants discussed how feelings of anxiety and self-doubt (affective arousal) when giving performance evaluations to supervisees influenced their self-efficacy as supervisors.

Results from this study also support previous research on receiving constructive feedback, structural support, role ambiguity, and clear supervision goals from supervisors as influential factors on self-efficacy beliefs (Bernard & Goodyear, 2009; Nilsson & Duan, 2007; Reynolds, 2006). In addition, participants’ difficulty in conducting evaluations due to feeling judgmental and having a lack of clear instructions on evaluation methods are congruent with supervision literature (e.g., Corey, Haynes, Moulton, & Muratori, 2010; Falender & Shafranske, 2004). Finally, participants’ responses bolster previous research findings that receiving support from mentoring relationships and having trusting relationships with peers positively influence self-efficacy (Hollingsworth & Fassinger, 2002; Wong-Wylie, 2007).

 

Implications for Practice

The comments from participants across the three focus groups underscore the importance of receiving constructive and specific feedback from faculty supervisors. Providing specific feedback requires that faculty supervisors employ methods of direct observation of the doctoral student's work with supervisees (e.g., live observation, recorded sessions) rather than relying solely on self-report. Participants also wanted more information on how to effectively and consistently evaluate supervisee performance, especially on evaluations involving Likert-type questions, and on how to effectively supervise master's students in different areas of concentration (e.g., mental health, school counseling, and college counseling). Counselor educators could include modules addressing these topics before or during the time that doctoral supervisors work with master's students, providing both information and opportunities to practice or role-play specific scenarios.

In response to questions about dealing with critical incidents in supervision, participants across groups discussed the importance of being prepared to handle remediation issues, and they wanted specific examples of remediation cases as well as clarity regarding their role in remediation processes. Previous research indicates that teaching about critical incidents before trainees encounter them on the job is effective (Collins & Pieterse, 2007; Halpern, Gurevich, Schwartz, & Brazeau, 2009). As such, faculty supervisors may consider providing opportunities to role-play, and sharing tapes of supervision sessions with master's students in which faculty (or other doctoral students) effectively address critical incidents. In addition, faculty could share with doctoral student supervisors strategies for the design and implementation of remediation plans, the responsibilities of faculty and school administrators, the extent to which doctoral student supervisors may be involved in the remediation process (e.g., no involvement, co-supervision with faculty, or full responsibility), and the ethical and legal factors that may affect supervisors' involvement. Participants viewed being included in the development and implementation of remediation plans for master's supervisees as important for their development, even though some experienced initial discomfort in evaluating supervisees. This further indicates the importance of fostering supportive working relationships that promote students' growth and satisfaction in supervision training.

 

Limitations

Findings from this study are beneficial to counselor education doctoral students, counselor supervisors, and supervisors in various fields. Limitations, however, exist. The first is researcher perspective: the authors' collective experiences influenced the inclusion of questions related to critical incidents and working in the middle tier of supervision. The first author made efforts to discern researcher bias by examining her role as a research instrument before and throughout the study, by triangulating sources, and by processing the interview protocol and analysis with peer reviewers and dissertation committee members. A second limitation is participant bias. Participants' responses were based on their perceptions of events and their recall; situations could have been colored or exaggerated, and participants may have chosen safe responses to save face in front of their peers or for fear that faculty would be privy to their responses, an occurrence that can arise when using focus groups. The first author addressed this limitation by using follow-up questionnaires, which gave participants an opportunity to express their views without their peers' knowledge, and by reinforcing confidentiality at the beginning of each focus group.

 

Recommendations for Future Research

Findings from this study suggest possible directions for future research. The first recommendation is to recruit a more diverse sample. The participants in this study were predominantly White (75%) and female (87.5%) and came from one region of the United States. As with all qualitative research, the findings are not meant to be generalized to a wider group, and increasing the number of focus groups may offer greater understanding of how well the current findings apply to doctoral student supervisors not represented in this study. A second recommendation is to conduct a longitudinal study that follows one or more cohorts of doctoral student supervisors throughout their supervision training to identify stages of growth and transition as supervisors, focusing on the factors that influence participants' self-efficacy and supervisor development.

 

Conclusion

The purpose of this phenomenological study was to expand previous research on counselor supervision and to provide a view of doctoral student supervisors’ experiences as they train in a tiered supervision model. Findings revealed factors that may be associated with self-efficacy beliefs of doctoral students as they prepare to become counseling supervisors. Recommendations may assist faculty supervisors when considering training protocols and doctoral students as they develop their identities as supervisors.

 

References

Agnes, M. (Ed.). (2003). Webster’s new world dictionary (4th ed.). New York, NY: Wiley.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.

Bandura, A. (2006). Guide for creating self-efficacy scales. In F. Pajares & T. C. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 307–337). Greenwich, CT: Information Age.

Barnes, K. L. (2004). Applying self-efficacy theory to counselor training and supervision: A comparison of two approaches. Counselor Education and Supervision, 44, 56–69. doi:10.1002/j.1556-6978.2004.tb01860.x

Bernard, J. M., & Goodyear, R. K. (2009). Fundamentals of clinical supervision (4th ed.). Upper Saddle River, NJ: Merrill/Pearson.

Cashwell, T. H., & Dooley, K. (2001). The impact of supervision on counselor self-efficacy. The Clinical Supervisor, 20(1), 39–47. doi:10.1300/J001v20n01_03

Collins, N. M., & Pieterse, A. L. (2007). Critical incident analysis based training: An approach for developing active racial/cultural awareness. Journal of Counseling & Development, 85, 14–23. doi:10.1002/j.1556-6678.2007.tb00439.x

Corey, G., Haynes, R., Moulton, P., & Muratori, M. (2010). Clinical supervision in the helping professions: A practical guide (2nd ed.). Alexandria, VA: American Counseling Association.

Corrigan, J. D., & Schmidt, L. D. (1983). Development and validation of revisions in the Counselor Rating Form. Journal of Counseling Psychology, 30, 64–75. doi:10.1037/0022-0167.30.1.64

Council for Accreditation of Counseling and Related Educational Programs. (2009). 2009 CACREP accreditation manual. Alexandria, VA: Author.

Daniels, J. A., & Larson, L. M. (2001). The impact of performance feedback on counseling self-efficacy and counseling anxiety. Counselor Education and Supervision, 41, 120–130. doi:10.1002/j.1556-6978.2001.tb01276.x

Falender, C. A., & Shafranske, E. P. (2004). Clinical supervision: A competency-based approach. Washington, DC: American Psychological Association.

Fernando, D. M., & Hulse-Killacky, D. (2005). The relationship of supervisory styles to satisfaction with supervision and the perceived self-efficacy of master’s-level counseling students. Counselor Education and Supervision, 44, 293–304. doi:10.1002/j.1556-6978.2005.tb01757.x

Gore, P. A., Jr. (2006). Academic self-efficacy as a predictor of college outcomes: Two incremental validity studies. Journal of Career Assessment, 14(1), 92–115. doi:10.1177/1069072705281367

Halpern, J., Gurevich, M., Schwartz, B., & Brazeau, P. (2009). What makes an incident critical for ambulance workers? Emotional outcomes and implications for intervention. Work & Stress, 23(2), 173–189. doi:10.1080/02678370903057317

Hollingsworth, M. A., & Fassinger, R. E. (2002). The role of faculty mentors in the research training of counseling psychology doctoral students. Journal of Counseling Psychology, 49, 324–330. doi:10.1037/0022-0167.49.3.324

Hughes, F. R., & Kleist, D. M. (2005). First-semester experiences of counselor education doctoral students. Counselor Education and Supervision, 45, 97–108. doi:10.1002/j.1556-6978.2005.tb00133.x

Hunt, B. (2011). Publishing qualitative research in counseling journals. Journal of Counseling & Development, 89, 296–300. doi:10.1002/j.1556-6678.2011.tb00092.x

Israelashvili, M., & Socher, P. (2007). An examination of a counselor self-efficacy scale (COSE) using an Israeli sample. International Journal for the Advancement of Counselling, 29, 1–9. doi:10.1007/s10447-006-9019-0

Kline, W. B. (2008). Developing and submitting credible qualitative manuscripts. Counselor Education and Supervision, 47, 210–217. doi:10.1002/j.1556-6978.2008.tb00052.x

Kress, V. E., & Shoffner, M. F. (2007). Focus groups: A practical and applied research approach for counselors. Journal of Counseling & Development, 85(2), 189–195. doi:10.1002/j.1556-6678.2007.tb00462.x

Larson, L. M. (1998). The social cognitive model of counselor training. The Counseling Psychologist, 26(2), 219–273. doi:10.1177/0011000098262002

Limberg, D., Bell, H., Super, J. T., Jacobson, L., Fox, J., DePue, M. K., . . . Lambie, G. W. (2013). Professional identity development of counselor education doctoral students: A qualitative investigation. The Professional Counselor, 3(1), 40–53.

Majcher, J., & Daniluk, J. C. (2009). The process of becoming a supervisor for students in a doctoral supervision training course. Training and Education in Professional Psychology, 3, 63–71. doi:10.1037/a0014470

Merriam, S. B. (Ed.). (2002). Qualitative research in practice: Examples for discussion and analysis. San Francisco, CA: Jossey-Bass.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Moustakas, C. E. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.

Nelson, K. W., Oliver, M., & Capps, F. (2006). Becoming a supervisor: Doctoral student perceptions of the training experience. Counselor Education and Supervision, 46, 17–31. doi:10.1002/j.1556-6978.2006.tb00009.x

Nilsson, J. E., & Duan, C. (2007). Experiences of prejudice, role difficulties, and counseling self-efficacy among U.S. racial and ethnic minority supervisees working with White supervisors. Journal of Multicultural Counseling and Development, 35(4), 219–229. doi:10.1002/j.2161-1912.2007.tb00062.x

Protivnak, J. J., & Foss, L. L. (2009). An exploration of themes that influence the counselor education doctoral student experience. Counselor Education and Supervision, 48(4), 239–256. doi:10.1002/j.1556-6978.2009.tb00078.x

Rapisarda, C. A., Desmond, K. J., & Nelson, J. R. (2011). Student reflections on the journey to being a supervisor. The Clinical Supervisor, 30, 109–113. doi:10.1080/07325223.2011.564958

Reynolds, D. (2006). To what extent does performance-related feedback affect managers’ self-efficacy? International Journal of Hospitality Management, 25, 54–68. doi:10.1016/j.ijhm.2004.12.007

Schunk, D. H. (2004). Learning theories: An educational perspective (4th ed.). Upper Saddle River, NJ: Pearson/Merrill/Prentice Hall.

Steward, R. J. (1998). Connecting counselor self-efficacy and supervisor self-efficacy: The continued search for counseling competence. The Counseling Psychologist, 26, 285–294. doi:10.1177/0011000098262004

Tang, M., Addison, K. D., LaSure-Bryant, D., Norman, R., O’Connell, W., & Stewart-Sicking, J. A. (2004). Factors that influence self-efficacy of counseling students: An exploratory study. Counselor Education and Supervision, 44, 70–80. doi:10.1002/j.1556-6978.2004.tb01861.x

Wong-Wylie, G. (2007). Barriers and facilitators of reflective practice in counsellor education: Critical incidents from doctoral graduates. Canadian Journal of Counselling, 41(2), 59–76.

Woodside, M., Oberman, A. H., Cole, K. G., & Carruth, E. K. (2007). Learning to be a counselor: A prepracticum point of view. Counselor Education and Supervision, 47, 14–28. doi:10.1002/j.1556-6978.2007.tb00035.x

 

Appendix

Focus Group Protocol

    1. How is your program designed to provide supervision training?
    2. What factors influence your perceptions of your abilities as supervisors?

Prompt: Colleagues, professors, equipment, schedules, age, cultural factors such as gender, ethnicity, and social class; whether or not you have had prior experience as a supervisor.

    3. How does it feel to evaluate the supervisees' performance?
    4. How, if at all, do your supervisees provide you with feedback about your performance?
    5. How do you feel about evaluations from your supervisees?

Prompt: How, if at all, do you think or feel supervisees' evaluations influence how you perceive your skills as a supervisor?

    6. How, if at all, do your supervisors provide you with feedback about your performance?
    7. How do you feel about evaluations from your faculty supervisor?

Prompt: In what ways, if any, do evaluations from your faculty supervisor influence how you perceive your skills as a supervisor?

    8. What strengths or supports do you have in your program that guide you as a supervisor?
    9. What barriers or obstacles do you experience as a supervisor?
    10. What influences from outside of the program affect how you feel in your role as a supervisor?
    11. How does it feel to be in the middle tier of supervision, working between a faculty supervisor and a master's-level supervisee?

Prompt: Empowered, stuck in the middle, neutral, powerless.

    12. What, if any, critical incidents have you encountered in supervision?

Prompt: A supervisee has a client who is suicidal, or it becomes clear to you that a supervisee has not developed the basic skills needed to work with current clients.

    13. If a critical incident occurred, or were to occur in the future, what procedures did you or would you follow? How comfortable do you feel in having the responsibility of dealing with critical incidents?
    14. If not already mentioned by participants, ask whether they have faced a situation in which their supervisee was not performing adequately or up to program expectations. If yes, ask them to describe their role in any remediation plan that was developed. If no, ask what concerns come to mind when they think about the possibility of dealing with such a situation.
    15. Describe a time when you felt least competent as a supervisor.
    16. Describe a time when you felt most competent as a supervisor.
    17. How could supervision training be improved, especially in terms of anything that could help you feel more competent as a supervisor?

Melodie H. Frick, NCC, is an Assistant Professor at Western Carolina University. Harriett L. Glosoff, NCC, is a Professor at Montclair State University. Correspondence can be addressed to Melodie H. Frick, 91 Killian Building Lane, Room 204, Cullowhee, NC 28723, mhfrick@email.wcu.edu.