Aug 18, 2020 | Volume 10 - Issue 3
Jessica Lloyd-Hazlett, Cory Knight, Stacy Ogbeide, Heather Trepal, Noel Blessing
The coordination of primary and behavioral health care that holistically targets clients’ physical and mental needs is known as integrated care. Primary care is increasingly becoming a de facto mental health system because of behavioral health care shortages and patient preferences. Primary care behavioral health (PCBH) is a gold standard model used to assist in the integration process. Although counselor training addresses some aspects of integrated care, best practices for counselor education and supervision within the PCBH framework are underdeveloped. This article provides an overview of the Program for the Integrated Training of Counselors in Behavioral Health (PITCH). The authors discuss challenges in implementation; solutions; and implications for counselor training, clinical practice, and behavioral health workforce development.
Keywords: integrated care, primary care, counselor training, PITCH, behavioral health workforce development
In 2016, 18.3% of adults were diagnosed with a mental illness and 4.2% of adults were diagnosed with a serious mental illness (SMI; Substance Abuse and Mental Health Services Administration [SAMHSA], 2016). Of those with a mental illness, only 41% received mental health services, leaving more than half unserved (SAMHSA, 2015). Many of these untreated adults turn to their primary care provider (PCP) for help and report preference for behavioral health services within primary care (Ogbeide et al., 2018). In fact, data show that primary care has become the de facto mental health system in the United States (Robinson & Reiter, 2016).
Although PCPs attempt to provide pharmacological interventions and appropriate behavioral health referrals, patients often return still experiencing distress because they are unable to follow through on referrals (Cunningham, 2009; Robinson & Reiter, 2016). On average, this circular process results in substantially longer medical visits (e.g., 20 minutes versus 8 minutes) and fewer billable services (e.g., one versus five or more; Meadows et al., 2011). This also results in a significant increase in health care spending, with patients incurring 30%–40% higher costs because of the presence of a mental health condition (de Oliveira et al., 2016; Wammes et al., 2018). There is a need for professionals trained in behavioral health care working within the primary care setting (Serrano et al., 2018).
Counselor training addresses some aspects of the role of behavioral health professionals in primary care. The most recent version of the Council for Accreditation of Counseling and Related Educational Programs (CACREP) entry-level program standards mandates that all accredited programs, regardless of specialty, orient counseling students to “the multiple professional roles and functions of counselors across specialty areas, and their relationships with human service and integrated behavioral health care systems, including interagency and interorganizational collaboration and consultation” (CACREP, 2016, Standard F.1.b.). As patients’ needs and training mandates increase, there is a demand for counselor training programs to respond with models and practices for counselor training in behavioral health in primary care settings.
The Program for the Integrated Training of Counselors in Behavioral Health (PITCH) is a 4-year project sponsored by a Health Resources and Services Administration (HRSA) Behavioral Health Workforce Education and Training (BHWET) grant received by the Department of Counseling at the University of Texas at San Antonio. The purpose of this article is to describe this innovative program. Toward this end, we briefly outline the Primary Care Behavioral Health (PCBH) consultation model undergirding PITCH. Next, we describe the need for behavioral health integration in primary care settings. Then, we delineate our implementation of PITCH to date, including specialized field placements, training curriculum, and program evaluation methodologies. Following this, we discuss challenges and resolutions gleaned from the first 1.5 years of implementation. Finally, we explore implications for counselor education to further enhance counselor preparation and engagement in behavioral health care delivery in primary care settings.
Primary Care Behavioral Health
The coordination of primary and behavioral health care that holistically targets clients’ physical and mental needs is known as integrated care (SAMHSA, 2015). One model used to assist in the integration process is the PCBH consultation model—a team-based and psychologically informed population health approach used to address physical and behavioral health concerns that arise in the primary care setting (Reiter et al., 2018). A hallmark of the PCBH model is integration of behavioral health consultants (BHCs), who dually function as generalist clinicians and as consultants to the primary care team (Serrano et al., 2018).
A BHC is different from a traditional counselor. In fulfilling their roles and functions, a BHC:
Assists in the care of patients of any age and with any health condition (Generalist); strives to intervene with all patients on the day they are referred (Accessible); shares clinic space and resources and assists the team in various ways (Team-based); engages with a large percentage of the clinic population (High volume); helps improve the team’s biopsychosocial assessment and interventions skills and processes (Educator); and is a routine part of psychosocial care (Routine). (Reiter et al., 2018, p. 112)
BHCs conduct brief functional assessments, collaborate with patients on treatment goals, implement evidence-based treatment interventions, and provide PCPs with feedback and recommendations for future patient care and support (Hunter et al., 2018). In addition, BHCs see patients for approximately 15–30-minute visits, with an average range between two and six visits per episode of care (Ray-Sannerud et al., 2012). In many ways, the BHC role involves a new professional identity for mental health professionals (Serrano et al., 2018). To date, BHC training and employment have typically involved social workers and psychologists. However, the counseling profession is increasingly recognized within and engaged in integrated PCBH (HRSA, 2017).
Need for Integrated Services
Primary care settings must begin to consider behavioral health integration in order to increase the quality of life of their patients. Over recent years, there has been a significant increase in patients who receive psychotropic medication for mental health complaints in the primary care setting (Olfson et al., 2014). PCPs are managing increasingly complex diagnoses beyond anxiety and depression, including bipolar, disruptive, and other comorbid disorders (Olfson et al., 2014). Individuals diagnosed with an SMI such as these also show a high prevalence of chronic health conditions, including diabetes and cardiovascular disease. Untreated psychological symptoms often present in somatic forms and can have a strong impact on chronic health conditions (McGough et al., 2016). People with SMIs prefer behavioral health services from their PCP; however, treatment outcomes for those with SMIs who seek services from their PCP are generally poorer (Viron & Stern, 2010). Patient, provider, and systemic-level factors influence this phenomenon. Relevant factors may include impacts of patients’ mental health diagnoses on treatment adherence, misdiagnosis by PCPs, and minimal collaboration between medical and behavioral health providers (Viron & Stern, 2010).
The PITCH program addresses several critical needs of individuals seeking behavioral health services in the local community, where conditions that necessitate behavioral health services, including mental illness and substance use disorders, are common. A 2011 community focus group identified mental health as a key concern (Health Collaborative, 2013). Although mental health services were offered in a psychiatric facility for children, adolescents, and adults, members of the focus group reported that the demand for mental health providers and psychiatric beds exceeded the supply. The stigma associated with mental health also was seen as a barrier to care. As a result, many people go undiagnosed and untreated (SAMHSA, 2015).
PITCH also addresses the need for interdisciplinary approaches to behavioral health workforce development. The expansion of PCBH consultation services has amplified this need (Robinson & Reiter, 2016). Unlike other models of integrated care (e.g., Collaborative Care Model, Chronic Care Model), the PCBH model makes primary care–focused behavioral health services available across an entire clinic population and across all possible patient presentations. This model also requires a skilled mental health professional adept at managing a wide variety of patient presentations, navigating processes such as clinic flow, and assuming a new role as consultant—skills and roles not commonly addressed in training for specialty mental health services (Robinson & Reiter, 2016).
PITCH: An Overview
PITCH is housed within a CACREP-accredited master’s-level clinical mental health counseling (CMHC) program enrolling more than 100 students each year. The principal investigator (PI) of PITCH is a professor specializing in clinical supervision, bilingual counselor education, and professional advocacy. Other PITCH team members include an assistant professor (Co-PI, university liaison) specializing in family counseling, program evaluation, and ethics; an assistant professor and board-certified clinical health psychologist (consultant); and an external project evaluator.
The primary purpose of PITCH is to develop a highly trained workforce of professional counselors to provide integrated behavioral health care (IBH) to rural, vulnerable, and underserved communities in primary care. Sub-goals of the PITCH program include establishing meaningful, longitudinal interdisciplinary partnerships as well as a graduate-level certificate in IBH to support sustainability. Toward these goals, 12 advanced counseling students enrolled in the aforementioned CMHC program are selected to participate each year from a competitive application pool. Selected trainees are required to complete two specialized IBH courses and two 300-hour clinical rotations in designated primary care settings. In exchange, trainees receive a $5,000 stipend upon completion of each semester rotation. Additionally, PITCH staff coordinate quarterly interprofessional trainings, including workshops focused on primary care, behavioral health, supervision, funding, and policy.
Specialized Field Placements
A unique feature of the PITCH program is the development of specialized field placement sites. Other behavioral health integration projects have relied on existing clinical placement sites (Sampson, 2017). Often these sites have low levels of existing integration, as well as underdeveloped infrastructure to support behavioral health delivery in primary care. When existing clinical site placements do have some integrated services, they are most often co-located services (Peek & the National Integration Academy Council, 2013). Instead of field site development, previous efforts have emphasized student training through workshops (Canada et al., 2018). These workshops are often open to community members, who are then charged with bringing knowledge back to extant clinical sites. Although this approach offers some benefits, it may be less impactful than direct site development. Further, it may fall short of establishing infrastructure to support longitudinal change (Serrano et al., 2018).
To start development of specialized field placements, we identified potential sites interested in IBH delivery. We then set up initial meetings with sites to discuss the PITCH project and to determine the feasibility of placing a BHC trainee. If sites were amenable, we scheduled a series of follow-up visits to provide orientation to clinic staff on IBH, the PCBH model, and the role and scope of BHCs. During these visits, we also provided consultation on infrastructure components, such as electronic medical record documentation procedures, suggestions for clinic flow, and room spacing (Robinson & Reiter, 2016). Throughout the field placement, we remained active in checking with sites to make workflow adjustments as needed. Trainees complete certificate-based coursework prior to beginning field placements as well as during the clinical rotations.
Trainee Curriculum
Selected trainees are required to complete two specialized courses in IBH, as well as two 300-hour clinical rotations at one of the specialized field placement sites discussed above. The PCBH model scaffolds all aspects of the PITCH training and delivery. We utilize this model to support conceptualization of the BHC role in primary care settings, interventions, and supervision.
As part of the PITCH program, two didactic courses were created to provide training in IBH and PCBH. The courses were developed and taught by the PITCH IBH consultant. The first course, IBH-I, introduces students to the primary care setting (e.g., family medicine, pediatrics, geriatrics), the PCBH model of care, behavioral health consultation, health behavior change, and common mental and chronic health conditions encountered in primary care. It also offers a basic understanding of brief, cognitive-behavioral–based and solution-focused interventions used in primary care (Reiter et al., 2018; Robinson & Reiter, 2016).
Students must complete the following assignments in the course: two exams, an IBH journal article review, a primary care clinic tour, an interview with a PCP, a presentation on one commonly seen problem in primary care (e.g., insomnia, chronic pain, depression), and a term paper highlighting treatment of a common problem in primary care using the 5A’s model (Hunter & Goodie, 2010). The 5A’s is a behavioral change model that includes assessing, advising, agreeing, assisting, and arranging. Upon demonstrating satisfactory performance, students may enroll in IBH-II.
The primary purpose of the second course is to begin applying foundational knowledge of PCBH as well as practice functional and contextual assessment and cognitive-behavioral intervention skills in the primary care setting. Trainees demonstrate their skills through a series of in-class role-plays, leading up to a final evaluation of their performance in a 30-minute initial consultation visit with a standardized patient. Trainees must complete both courses to maintain their status in PITCH. Both courses are open as electives to students enrolled in the counseling program or a related discipline (e.g., social work).
PITCH trainees also complete two semester-long clinical rotations in primary care. Trainees are assigned to one of the specialized field placement sites based on availability, interest, and anticipated fit. Trainees are required to clock 300 hours each semester, 120 of which must represent direct clinical engagement. Direct clinical engagement time includes patient visits, consultation with the primary care team, and facilitating psychoeducational groups tailored to unique clinical populations. Trainees are required to participate in at least 1 hour of clinical supervision with an on-site supervisor each week. Additionally, trainees attend a bi-weekly group supervision course on campus instructed by a CMHC faculty member. After successful completion of didactic and clinical courses of the PITCH program, trainees are eligible to earn a graduate certificate in IBH. Adjustments to specialized field placement sites and the trainee curriculum are made as needed based on ongoing informal and formal evaluation of the program.
Program Evaluation
The HRSA BHWET grant supporting PITCH prioritizes evaluation activities related to workforce training and development effectiveness (HRSA, 2017). In partnership with our external evaluator, we are conducting program evaluation across several domains of PITCH, including evaluations focused on trainees and clinical sites (e.g., level of integration).
Trainee-Focused Metrics
We have several evaluation metrics that are focused on trainees. Trainees complete the Behavioral Health Consultant Core Competency Tool (BHC CC Tool; Robinson & Reiter, 2016) and the Primary Care Brief Intervention Competency Assessment Tool (BI-CAT; Robinson, 2015) at the beginning, midpoint, and conclusion of clinical rotations. The BHC CC Tool measures and tracks skill development across four domains of BHC practice: clinical practice, practice management, consultation, and documentation. The BI-CAT includes domains of practice context, intervention design, intervention delivery, and outcomes-based practice. On-site observations of trainees also are conducted using the PCBH Observation Tool as part of the certificate coursework. These competency tools were developed based on observations of BHC clinical behaviors likely to work effectively in a PCBH model of service delivery. These measures have not yet been formally assessed for psychometric properties or predictive outcomes (Robinson et al., 2018).
In addition to tools that target individual trainee development, program evaluation efforts also attend to the macro experiences of trainees in the program. Specifically, trainees participate in focus groups facilitated by the external evaluator at the end of each semester. Focus groups provide the opportunity to understand pathways and barriers to program development. We also have developed an online database to track trainees’ postgraduation employment trajectories and sustained engagement in PCBH.
Site-Focused Metrics
Although this particular HRSA grant is primarily concerned with trainee-focused outcomes (e.g., employment), we also ask identified clinical site liaisons to complete the Integrated Practice Assessment Tool (IPAT; Waxmonsky et al., 2013) at the start and finish of each rotation. Scores on the IPAT provide a snapshot estimation of the level of integration of clinical sites. Levels of integration correspond to those identified by A Standard Framework for Levels of Integrated Healthcare (Heath et al., 2013) and range from 1–6. Levels 1 and 2 are indicative of minimal, coordinated collaboration, with behavioral health and PCPs maintaining separate facilities and systems. Levels 3 and 4 reflect shared physical space and enhanced communication among behavioral health and PCPs; however, practice change toward system-level integration is underdeveloped. Finally, Levels 5 and 6 are indicative of transformed, team-based approaches in which both “providers and patients view the operation as a single health system treating the whole person” (Heath et al., 2013, p. 6). Focus groups also were conducted with members of selected clinical training sites to explore barriers and pathways to PCBH delivery as a function of level of integration. At this time, the IPAT has not yet been formally assessed for psychometric properties.
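To make the level bands concrete, the sketch below maps an IPAT level to its integration band as described above. The function and its wording are ours for illustration only and are not part of the IPAT instrument itself.

```python
# Hypothetical helper: map an IPAT level (1-6) to the integration band
# paraphrased from Heath et al. (2013). Illustrative only; not part of
# the IPAT instrument.
def integration_band(level: int) -> str:
    if level in (1, 2):
        return "coordinated: separate facilities and systems"
    if level in (3, 4):
        return "co-located: shared space, limited system-level integration"
    if level in (5, 6):
        return "fully integrated: a single, team-based health system"
    raise ValueError("IPAT levels range from 1 to 6")

print(integration_band(5))  # fully integrated: a single, team-based health system
```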
Rapid Cycle Quality Improvement
Finally, program evaluation efforts include ongoing rapid cycle quality improvement (RCQI), a quality-improvement method that identifies, implements, and measures changes to improve a process or a system (Center for Health Workforce Studies, 2016). RCQI can be targeted at different aspects of the program. To date, RCQI has targeted trainee competencies related to functional assessment interviews, breadth of referral concerns, and patient visit length. For example, after tracking trends in daily activity logs submitted by trainees, we noted a majority of referrals centered on anxiety and depression. We then provided supplemental training on identifying behavioral health concerns related to chronic health conditions, such as diabetes and asthma. Following this instruction, we reviewed the daily activity logs and noted greater breadth of referral concerns.
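As an illustration of this kind of RCQI tracking, the sketch below tallies referral concerns from hypothetical daily activity log entries. The log format and category names are invented for illustration; actual PITCH logs were reviewed by program staff.

```python
from collections import Counter

# Hypothetical referral concerns extracted from trainees' daily
# activity logs; real logs would be parsed from submitted forms.
log_entries = [
    "anxiety", "depression", "anxiety", "depression", "anxiety",
    "insomnia", "depression", "diabetes management", "anxiety",
]

counts = Counter(log_entries)
total = sum(counts.values())

# Report each concern's share of referrals; a narrow band dominating
# the list was the trigger for supplemental training in PITCH.
for concern, n in counts.most_common():
    print(f"{concern}: {n} ({n / total:.0%})")
```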
Challenges and Solutions
Best practices for PCBH implementation within the context of workforce development are still developing. Further, available guidelines do not speak to counselor training programs specifically. In the section below, we discuss challenges we have encountered in the first 1.5 years of implementation of the PITCH program. We also share solutions we have generated to support optimal training experiences.
Challenge: On-Site Clinical Supervision
A significant challenge we encountered was related to on-site clinical supervision for the PITCH trainees. National accreditation standards require trainees to participate in regular supervision with both an on-site and university supervisor (CACREP, 2016). The on-site supervisor must have at least 2 years of postgraduate experience, as well as hold a master’s degree in counseling or a related field (e.g., psychology, social work). Furthermore, best practices for BHC training support a scaffolded supervision approach (Dobmeyer et al., 2003), wherein trainees’ initial time is spent completing clinic shadowing visits with an experienced BHC. As trainee skills develop, leadership within patient visits transitions from the experienced BHC to the trainee through co-visits. In time, the trainee leads visits independently, with the experienced BHC shadowing. Additionally, the PCBH model emphasizes preceptor-style supervision, where the supervisor is readily available on-site for patient consultation as needed (Dobmeyer et al., 2003).
Solution: Changes to Specialized Field Placement Sites
During Year 1 of PITCH, almost two thirds of the specialized field placement sites we partnered with did not employ the PCBH model at the time, and thus did not have a BHC available to provide on-site clinical supervision. To meet this need, we provided intensive PCBH and supervision training to four doctoral students enrolled in our counselor education and supervision program. Doctoral student supervisors were asked to spend at least half a day on-site with trainees, with this amount tapering off as trainees gained experience.
Although this solution met national accreditation requirements for supervision (CACREP, 2016), we noticed stark differences between the clinical experiences of trainees placed at field sites with an on-site BHC and those supervised by doctoral students. As such, we made the difficult decision in Year 2 to separate from all but two field placement sites that lacked an on-site BHC to provide supervision. An on-site supervising BHC became a requirement for all new sites we partnered with in Year 2. Additionally, we made modifications to our grant funding allocations to support graduate assistantships focused on supervision for two of the four doctoral supervisors utilized in Year 1.
Challenge: Knowledge About PCBH and the BHC Role
We encountered internal and external gaps in knowledge about the PCBH model, the BHC role, and the general culture of primary care settings. Internally, members of our faculty less connected to PITCH expressed support but also concern about alignment between PITCH training experiences and the experiences of other counseling students. Specific points of concern related to the brevity of visits, the frequency of single encounters with patients, and the underpinning medical model. Additionally, because of patient privacy restrictions, PITCH field placement sites do not permit audio or video recording of clinical work, which is a typical supervision practice for counseling trainees. PITCH trainees also expressed some tension between the professional identity and skills training obtained in the CMHC program to date and the PCBH model and BHC role. Externally, we observed varying degrees of provider knowledge and buy-in regarding the PCBH approach to integrated practice. Areas of provider disconnect were more prominent at placement sites without existing integrated primary care services.
Solution: Ongoing Education and Advocacy
At the internal level, we provided a brief educational session about the PCBH model at regular faculty meetings. It was important to emphasize PCBH as a different context of practice that, similar to school counseling, requires modes of practice outside of traditional 50-minute sessions. We also sought faculty consultation related to curriculum and structure for our specialized coursework. For example, faculty members expressed concern about missing opportunities for recorded patient visits, so we developed two assignments for the clinical courses that could meet this need. The first was a mock visit with a classmate that was video recorded and transcribed; students then analyzed their micro-skills and reflected on the experience. The second assignment consisted of a live observation by the university- or site-based supervisor of the trainee’s work on-site with a patient.
We also encountered various levels of provider buy-in at our different sites. We encouraged students to reframe this resistance as an opportunity for learning and advocacy. As students gained knowledge about what we call the primary care way, they could better contextualize the questions or concerns of providers. For example, students came to understand the premiums placed on time and space; from this position, they could tailor their approach to PCPs in ways that enhance PCP workflow. Additionally, faculty and supervisors emphasized the importance of ongoing psychoeducation about the PCBH model for clinic teams. Students are encouraged to be proactive in reviewing daily patient schedules for prospective services (i.e., scrubbing the schedule) and educating providers about how BHC services can augment patient care. The use of the BHC competency tools also facilitated this process by encouraging students to consistently engage in behaviors conducive to BHC practice.
Challenge: Shortage of Spanish-Speaking Service Providers
A final challenge we faced related to a shortage of Spanish-speaking service providers. Some sites offered formal translation services (i.e., in-person medical translator, phone- or tablet-based translators), while others utilized informal resources (i.e., other staff members). When placing students, we prioritized placement of bilingual trainees at locations with the greatest number of Spanish-speaking patients. However, we were not able to accommodate all sites.
Solution: Recruitment and Resources
We have implemented several solutions to address this challenge. Among these, we have moved to weighing Spanish language fluency more heavily in PITCH selection criteria. We also are exploring future partnerships with the bilingual counseling certificate program that is housed in the University of Texas at San Antonio Department of Counseling. Additionally, we provide basic training and support to trainees related to the use of translators (in-person and virtual), and we have employed Spanish-speaking doctoral graduate assistant supervisors where possible for extra support.
Discussion
The implementation of PITCH provides challenges but also solutions to the growing need for counselor education to focus on training in primary care. Patients prefer behavioral health services in primary care (Ogbeide et al., 2018); thus, equipping the behavioral health workforce to provide services in this setting is imperative. Although primary care and interprofessional education are relatively new to counselor education, other behaviorally inclined disciplines (e.g., psychology, social work, nursing) have provided a training blueprint that counselor education programs can use to continue developing a place for themselves in primary care (Hooper, 2014; Vogel et al., 2014).
Serrano and colleagues (2018) shared recommendations for PCBH workforce development. These recommendations include: (a) development of an interprofessional certification body; (b) PCBH-specific curricula in graduate studies, including both skills and program development; (c) a national employment clearinghouse; and (d) coalescing knowledge through the provision of technical assistance sites. Below we discuss implications for counselor education programs seeking to advance PCBH workforce development.
Standardized Training Models
An important implication for training future counselors is the use of standardized training models (Tang et al., 2004). Throughout this article, much of the focus has centered on the PCBH consultation model (Reiter et al., 2018). In recent years, training standards have emerged for BHCs in primary care. These standards focus on a psychologically informed, population-based approach to treatment, in which BHCs are trained to create clinical pathways, collaborate with medical providers, conduct a brief functional assessment, and provide a brief behavioral intervention, mostly consisting of skills training and self-management (Reiter et al., 2018)—all of which is done in under 30 minutes. This clinical practice approach has become the de facto model in most BHC preparation programs throughout the United States (Hunter et al., 2018) and is currently endorsed by the Veterans Administration and the Department of Defense for integrated primary care (Funderburk et al., 2013). However, inconsistencies exist in how the PCBH model is taught, and there is a lack of available internship opportunities for master’s-prepared behavioral health providers to receive clinical training (Hall et al., 2015). This challenge is especially relevant to future counselors, who lack a standardized model of training for primary care (Hooper, 2014). Our experience suggests that programs such as PITCH accomplish the joint goals of focusing on instruction and supervised practice in PCBH, developing BHC competencies, and meeting accreditation standards of orienting counselors to their role in integrated care settings (CACREP, 2016).
Behavioral Health Integration
One of the largest challenges facing the PCBH model is behavioral health integration (Hunter & Goodie, 2010); the model requires full integration (i.e., Level 5–6 integration) to be maximally effective. Traditionally, PCPs would refer patients to a local mental health practitioner for issues related to depression or anxiety (Cunningham, 2009). However, these referrals had a low rate of success and deterred many individuals from seeking out mental health services in the future (Davis et al., 2016). Co-located care (an in-house mental health practitioner conducting traditional psychotherapy or counseling) became the logical next step. This level of integration resulted in quicker referrals but led to poor communication and confidentiality issues between PCPs and mental health providers. It also left out other common, behaviorally influenced concerns in primary care such as diabetes, chronic pain, hypertension, and tobacco use, which are not routinely addressed or treated by mental health providers. Full integration (in which PCPs and mental health providers work collaboratively in the same setting) has become the ideal standard for the integration of behavioral health services in primary care (Heath et al., 2013).
Despite the many benefits, full integration might be impractical for clinics just beginning PCBH services. Clinics may lack the staff support, leadership support, and organizational buy-in needed to succeed because “successful integration is really hard” (deGruy, 2015). Integration, in a sense, causes a necessary disruption in how a clinic functions and serves patients. Although necessary, it is still a disruption, and it can take time for a team to normalize their new way of practicing primary care. Clinics may need specific support to help establish pathways for behavioral health referrals (Landis et al., 2013), allow clinic staff more time to adjust to integrated services, and provide a pathway for the development of fully integrated services (Reiter et al., 2018). Investing in technical assistance experts can aid in integration efforts (Serrano et al., 2018). Additionally, clinics that already offer co-located services might benefit from a quality-improvement plan (Wagner et al., 2001) such as a plan-do-study-act model (PDSA; Speroff & O’Connor, 2004) to move to a higher level of integration. A sample PDSA cycle might consist of identifying barriers to improved patient care, creating a team-based plan for addressing barriers, designating a project overseer, tracking outcomes across time, and evaluating project success (Speroff & O’Connor, 2004). Both suggestions are meaningful steps toward full integration and can be performed by counselors and counselor educators with training in PCBH and program evaluation (Newcomer et al., 2015). Funding for counselors in BHC roles would assist in meeting the aforementioned goals.
Funding for Counselors in PCBH
One of the greatest barriers to providing accessible behavioral health services in primary care is funding (Robinson & Reiter, 2016). Insurers are just beginning to reimburse for same-day services (both a PCP and BHC visit; Robinson & Reiter, 2016). However, this recent development has primarily benefited psychologists and social workers in primary care and excludes licensed counselors, who account for 14%–25% of the mental health labor force (U.S. Department of Health and Human Services, 2016). Licensed counselors are a crucial part of the growing behavioral health workforce (Vogel et al., 2014) and bring a strong wellness and systems-based perspective to primary care (Sheperis & Sheperis, 2015). Furthermore, licensed counselors, along with other behavioral health providers, can help in a variety of ways such as reducing patient costs in the medical system (Berwick et al., 2008), reducing patient emergency room visits (Kwan et al., 2015), and implementing continuous quality improvement (Wagner et al., 2001).
Robinson and Reiter (2016) offered several suggestions regarding funding for BHCs unable to conduct same-day billing. The first is for BHCs to understand that PCPs will always be the main source of clinic revenue. Therefore, BHCs can provide support to the primary care team through behavioral consultation; improve screening and clinical pathway procedures; provide support for difficult patients and frequent visitors; and reduce PCP visit time through warm handoffs, with the patient witnessing the transfer of their care between PCP and BHC. Second, BHCs can secure bottom-up support from PCPs by providing “curbside” consultation services (consulting face-to-face with PCPs about a patient without directly treating the patient). It comes as no surprise that PCPs feel more supported when BHCs are an available part of the medical team. Third, BHCs can generate top-down support through billing for group visits such as drop-in group medical appointments and 30-minute follow-up visits (Robinson & Reiter, 2016). Fourth, grants represent another potential source of funding for behavioral health implementation (HRSA, 2017, 2018). HRSA and SAMHSA have been tremendous resources, providing training grants specifically aimed at increasing the BHC workforce (e.g., HRSA, 2017) and addressing the nation’s opioid epidemic (e.g., HRSA, 2018). In Texas, the Hogg Foundation has provided grants for training future BHCs. Finally, the counseling profession must continue advocacy efforts toward establishing licensed counselors as Medicare providers. With this key change, licensed counselors would be more readily employable in medical settings (Dormond & Afayee, 2016).
Conclusion
Primary care has been the de facto mental health system in the United States for decades. Providing comprehensive primary care to patients is imperative, and in order to do this well, our workforce needs to be equipped to meet growing behavioral health needs where patients show up to receive care. Given clinical imperatives such as successful patient outcomes, as well as CACREP accreditation standards targeting integrated health care knowledge, it behooves counselor training programs to consider developing models for BHC training. This article presents the key aspects of the PITCH program in the hopes that our model will be useful to other counselor education programs as the profession moves toward integrated practice models to meet the ever-changing needs of the health care landscape.
Conflict of Interest and Funding Disclosure
PITCH is funded by a Behavioral Health Workforce Education and Training grant from the Health Resources and Services Administration. There is no known conflict of interest.
References
Berwick, D. M., Nolan, T. W., & Whittington, J. (2008). The triple aim: Care, health, and cost. Health Affairs, 27(3), 759–769. https://doi.org/10.1377/hlthaff.27.3.759
Canada, K. E., Freese, R., & Stone, M. (2018). Integrative behavioral health clinic: A model for social work practice, community engagement, and in vivo learning. Journal of Social Work Education, 54(3), 464–479. https://doi.org/10.1080/10437797.2018.1434442
Center for Health Workforce Studies. (2016). Rapid cycle quality improvement resource guide. http://www.healthworkforceta.org/wp-content/uploads/2016/06/RCQI_Resource_Guide.pdf
Council for Accreditation of Counseling and Related Educational Programs. (2016). CACREP accreditation manual.
Cunningham, P. J. (2009). Beyond parity: Primary care physicians’ perspectives on access to mental health care. Health Affairs, 28(Suppl. 1), 490–501. https://doi.org/10.1377/hlthaff.28.3.w490
Davis, M. J., Moore, K. M., Meyers, K., Mathews, J., & Zerth, E. O. (2016). Engagement in mental health treatment following primary care mental health integration contact. Psychological Services, 13(4), 333–340. http://doi.org/10.1037/ser0000089
deGruy, F. V. (2015). Integrated care: Tools, maps, and leadership. The Journal of the American Board of Family Medicine, 28(Suppl. 1), S107–S110. https://doi.org/10.3122/jabfm.2015.S1.150106
de Oliveira, C., Cheng, J., Vigod, S., Rehm, J., & Kurdyak, P. (2016). Patients with high mental health costs incur over 30 percent more costs than other high-cost patients. Health Affairs, 35(1), 36–43. https://doi.org/10.1377/hlthaff.2015.0278
Dobmeyer, A. C., Rowan, A. B., Etherage, J. R., & Wilson, R. J. (2003). Training psychology interns in primary behavioral health care. Professional Psychology: Research and Practice, 34(6), 586–594. https://doi.org/10.1037/0735-7028.34.6.586
Dormond, M., & Afayee, S. (2016, November). Understanding billing restrictions for behavioral health providers. Behavioral Health Workforce Research Center, University of Michigan. http://www.behavioralhealthworkforce.org/wp-content/uploads/2017/01/FA3P4_Billing-Restrictions_Full-Report.pdf
Funderburk, J. S., Dobmeyer, A. C., Hunter, C. L., Walsh, C. O., & Maisto, S. A. (2013). Provider practices in the primary care behavioral health (PCBH) model: An initial examination in the Veterans Health Administration and United States Air Force. Families, Systems, & Health, 31(4), 341–353. https://doi.org/10.1037/a0032770
Hall, J., Cohen, D. J., Davis, M., Gunn, R., Blount, A., Pollack, D. A., Miller, W. L., Smith, C., Valentine, N., & Miller, B. F. (2015). Preparing the workforce for behavioral health and primary care integration. The Journal of the American Board of Family Medicine, 28(Suppl. 1), S41–S51. https://doi.org/10.3122/jabfm.2015.S1.150054
Health Collaborative. (2013). 2013 Bexar County community health assessment report. https://iims.uthscsa.edu/sites/iims/files/Newsletters/bexar%20CHA%202013%20final.pdf
Health Resources and Services Administration. (2017). Behavioral health workforce education and training (BHWET) program. https://www.hrsa.gov/grants/find-funding/hrsa-17-070
Health Resources and Services Administration. (2018). FY 2018 expanding access to quality substance use disorder and mental health services (SUD-MH) supplemental funding technical assistance (HRSA-18-118). https://bphc.hrsa.gov/programopportunities/fundingopportunities/sud-mh/
Heath, B., Wise, R. P., & Reynolds, K. (2013). A standard framework for levels of integrated healthcare. SAMHSA-HRSA Center for Integrated Health Solutions. https://www.pcpcc.org/sites/default/files/resources/SAMHSA-HRSA%202013%20Framework%20for%20Levels%20of%20Integrated%20Healthcare.pdf
Hooper, L. (2014). Mental health services in primary care: Implications for clinical mental health counselors and other mental health providers. Journal of Mental Health Counseling, 36(2), 95–98. https://doi.org/10.17744/mehc.36.2.u756l3l075354625
Hunter, C. L., Funderburk, J. S., Polaha, J., Bauman, D., Goodie, J. L., & Hunter, C. M. (2018). Primary Care Behavioral Health (PCBH) model research: Current state of the science and a call to action. Journal of Clinical Psychology in Medical Settings, 25(2), 127–156. https://doi.org/10.1007/s10880-017-9512-0
Hunter, C. L., & Goodie, J. L. (2010). Operational and clinical components for integrated-collaborative behavioral healthcare in the patient-centered medical home. Families, Systems, & Health, 28(4), 308–321. https://doi.org/10.1037/a0021761
Kwan, B. M., Valeras, A. B., Levey, S. B., Nease, D. E., & Talen, M. E. (2015). An evidence roadmap for implementation of integrated behavioral health under the Affordable Care Act. AIMS Public Health, 2(4), 691–717. https://doi.org/10.3934/publichealth.2015.4.691
Landis, S. E., Barrett, M., & Galvin, S. L. (2013). Effects of different models of integrated collaborative care in a family medicine residency program. Families, Systems, & Health, 31(3), 264–273. https://doi.org/10.1037/a0033410
McGough, P. M., Bauer, A. M., Collins, L., & Dugdale, D. C. (2016). Integrating behavioral health into primary care. Population Health Management, 19(2), 81–87. https://doi.org/10.1089/pop.2015.0039
Meadows, T., Valleley, R., Haack, M. K., Thorson, R., & Evans, J. (2011). Physician “costs” in providing behavioral health in primary care. Clinical Pediatrics, 50(5), 447–455. https://doi.org/10.1177/0009922810390676
Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Handbook of practical program evaluation (4th ed.). Jossey-Bass.
Ogbeide, S. A., Landoll, R. R., Nielsen, M. K., & Kanzler, K. E. (2018). To go or not go: Patient preference in seeking specialty mental health versus behavioral consultation within the Primary Care Behavioral Health Consultation Model. Families, Systems, & Health, 36(4), 513–517. https://doi.org/10.1037/fsh0000374
Olfson, M., Kroenke, K., Wang, S., & Blanco, C. (2014). Trends in office-based mental health care provided by psychiatrists and primary care physicians. The Journal of Clinical Psychiatry, 75(3), 247–253. https://doi.org/10.4088/JCP.13m08834
Peek, C. J., & the National Integration Academy Council. (2013). Lexicon for behavioral health and primary care integration: Concepts and definitions developed by expert consensus. AHRQ Publication No.13-IP001-EF. Agency for Healthcare Research and Quality. https://integrationacademy.ahrq.gov/sites/default/files/Lexicon.pdf
Ray-Sannerud, B. N., Dolan, D. C., Morrow, C. E., Corso, K. A., Kanzler, K. E., Corso, M. L., & Bryan, C. J. (2012). Longitudinal outcomes after brief behavioral health intervention in an integrated primary care clinic. Families, Systems, & Health, 30(1), 60–71. https://doi.org/10.1037/a0027029
Reiter, J. T., Dobmeyer, A. C., & Hunter, C. L. (2018). The Primary Care Behavioral Health (PCBH) model: An overview and operational definition. Journal of Clinical Psychology in Medical Settings, 25(2), 109–126. https://doi.org/10.1007/s10880-017-9531-x
Robinson, P. (2015). PCBH Brief Intervention Competency Assessment Tool (BI-CAT). Mountainview Consulting. http://www.coping.us/images/Robinson_2013_PCBH_BI-CAT.pdf
Robinson, P., Oyemaja, J., Beachy, B., Goodie, J., Sprague, L., Bell, J., Maples, M., & Ward, C. (2018). Creating a primary care workforce: Strategies for leaders, clinicians, and nurses. Journal of Clinical Psychology in Medical Settings, 25(2), 169–186. https://doi.org/10.1007/s10880-017-9530-y
Robinson, P. J., & Reiter, J. T. (2016). Behavioral consultation and primary care: A guide to integrating services (2nd ed.). Springer.
Sampson, M. (2017). Teaching note—Meeting the demand for behavioral health clinicians: Innovative training through the GLOBE project. Journal of Social Work Education, 53(4), 744–750. https://doi.org/10.1080/10437797.2017.1287024
Serrano, N., Cordes, C., Cubic, B., & Daub, S. (2018). The state and future of the Primary Care Behavioral Health model of service delivery workforce. Journal of Clinical Psychology in Medical Settings, 25(2), 157–168. https://doi.org/10.1007/s10880-017-9491-1
Sheperis, D. S., & Sheperis, C. J. (2015). Clinical mental health counseling: Fundamentals of applied practice (1st ed.). Pearson.
Speroff, T., & O’Connor, G. T. (2004). Study designs for PDSA quality improvement research. Quality Management in Healthcare, 13(1), 17–32. http://innovationlabs.com/r3p_public/rtr3/pre/pre-read/PDSA%20QI%20Research.Speroff.2004.pdf
Substance Abuse and Mental Health Services Administration. (2015). Behavioral health barometer. https://www.samhsa.gov/data/sites/default/files/2015_National_Barometer.pdf
Substance Abuse and Mental Health Services Administration. (2016). 2016 national survey of drug use and health. http://www.samhsa.gov/data/release/2016-national-survey-drug-use-and-health-nsduh-releases
Tang, M., Addison, K. D., LaSure-Bryant, D., Norman, R., O’Connell, W., & Stewart-Sicking, J. A. (2004). Factors that influence self-efficacy of counseling students: An exploratory study. Counselor Education and Supervision, 44(1), 70–80. https://doi.org/10.1002/j.1556-6978.2004.tb01861.x
U.S. Department of Health and Human Services. (2016). National projections of supply and demand for selected behavioral health practitioners: 2013-2025. https://bhw.hrsa.gov/sites/default/files/bhw/health-workforce-analysis/research/projections/behavioral-health2013-2025.pdf
Viron, M. J., & Stern, T. A. (2010). The impact of serious mental illness on health and healthcare. Psychosomatics, 51(6), 458–465. https://doi.org/10.1016/S0033-3182(10)70737-4
Vogel, M., Malcore, S., Illes, R., & Kirkpatrick, H. (2014). Integrated primary care: Why you should care and how to get started. Journal of Mental Health Counseling, 36(2), 130–144. https://doi.org/10.17744/mehc.36.2.5312041n10767k51
Wagner, E. H., Glasgow, R. E., Davis, C., Bonomi, A. E., Provost, L., McCulloch, D., Carver, P., & Sixta, C. (2001). Quality improvement in chronic illness care: A collaborative approach. The Joint Commission Journal on Quality Improvement, 27(2), 63–80. https://doi.org/10.1016/S1070-3241(01)27007-2
Wammes, J. J. G., van der Wees, P. J., Tanke, M. A. C., Westert, G. P., & Jeurissen, P. P. T. (2018). Systematic review of high-cost patients’ characteristics and healthcare utilisation. BMJ Open, 8(9), 1–17. https://doi.org/10.1136/bmjopen-2018-023113
Waxmonsky, J., Auxier, A., Wise-Romero, P., & Heath, B. (2013). Integrated Practice Assessment Tool (IPAT). http://ipat.valueoptions.com/IPAT/
Jessica Lloyd-Hazlett, PhD, NCC, LPC, is an associate professor at the University of Texas at San Antonio. Cory Knight, MS, is a master’s student at the University of Texas at San Antonio. Stacy Ogbeide, PsyD, ABPP, is a behavioral health consultant, licensed psychologist, and associate professor at the University of Texas Health Sciences Center San Antonio. Heather Trepal, PhD, LPC-S, is a professor and coordinator of the Clinical Mental Health Counseling Program at the University of Texas at San Antonio. Noel Blessing, MS, is a doctoral student at the University of Texas at San Antonio. Correspondence may be addressed to Jessica Lloyd-Hazlett, 501 W. Cesar E. Chavez Blvd., DB 4.132, San Antonio, TX 78207, Jessica.lloyd-hazlett@utsa.edu.
May 20, 2016 | Article, Volume 6 - Issue 2
Melissa J. Fickling
Advocacy with and on behalf of clients is a major way in which counselors fulfill their core professional value of promoting social justice. Career counselors have a unique vantage point regarding social justice due to the economic and social nature of work and can offer useful insights. Q methodology is a mixed methodology that was used to capture the perspectives of 19 career counselors regarding the relative importance of advocacy interventions. A two-factor solution was reached that accounted for 60% of the variance in perspectives on advocacy behaviors. One factor, labeled focus on clients, emphasized the importance of empowering individual clients and teaching self-advocacy. Another factor, labeled focus on multiple roles, highlighted the variety of skills and interventions career counselors use in their work. Interview data revealed that participants desired additional conversations and counselor training concerning advocacy.
Keywords: social justice, advocacy, career counselors, Q methodology, counselor training
The terms advocacy and social justice often are used without clear distinction. Advocacy is the active component of a social justice paradigm. It is a direct intervention or action and is the primary expression of social justice work (Fickling & Gonzalez, 2016; Ratts, Lewis, & Toporek, 2010; Toporek, Lewis, & Crethar, 2009). Despite the fact that counselors have more tools than ever to help them develop advocacy and social justice competence, such as the ACA Advocacy Competencies (Lewis, Arnold, House, & Toporek, 2002) and the Multicultural and Social Justice Counseling Competencies (Ratts, Singh, Nassar-McMillan, Butler, & McCullough, 2015), little is known about practitioners’ perspectives on the use of advocacy interventions.
One life domain in which social inequity can be vividly observed is that of work. The economic recession that began in 2007 has had a lasting impact on the labor market in the United States. Long-term unemployment is still worse than before the recession (Bureau of Labor Statistics, U.S. Department of Labor, 2016a). Further, in the United States, racial bias appears to impact workers and job seekers, as evidenced in part by the fact that the unemployment rate for Black workers is consistently about double that of White workers (e.g., 4.1% White unemployment and 8.2% Black unemployment as of May 2016; Bureau of Labor Statistics, U.S. Department of Labor, 2016b). Recent meta-analyses indicate that unemployment has a direct and causal negative impact on mental health, leading to greater rates of depression and suicide (Milner, Page, & LaMontagne, 2013; Paul & Moser, 2009). Clearly, the worker role is one that carries significant meaning and consequences for people who work or want to work (Blustein, 2006).
The rate at which the work world continues to change has led some to argue that worker adaptability is a key 21st century skill (Niles, Amundson, & Neault, 2010; Savickas, 1997), but encouraging clients to adapt to unjust conditions without also acknowledging the role of unequal social structures is inconsistent with a social justice paradigm (Stead & Perry, 2012). Career counselors, particularly those who work with the long-term unemployed and underemployed, witness the economic and psychological impact of unfair social arrangements on individuals, families and communities. In turn, they have a unique vantage point when it comes to social justice and a significant platform from which to advocate (Chope, 2010; Herr & Niles, 1998; Pope, Briddick, & Wilson, 2013; Pope & Pangelinan, 2010; Prilleltensky & Stead, 2012).
It appears that although career counselors value social justice and are aware of the effects of injustice on clients’ lives, they are acting primarily at the individual rather than the systemic level (Cook, Heppner, & O’Brien, 2005; McMahon, Arthur, & Collins, 2008b; Prilleltensky & Stead, 2012; Sampson, Dozier, & Colvin, 2011). Some research has emerged that focuses on practitioners’ use of advocacy in counseling practice (Arthur, Collins, Marshall, & McMahon, 2013; Arthur, Collins, McMahon, & Marshall, 2009; McMahon et al., 2008b; Singh, Urbano, Haston, & McMahan, 2010). Overall, this research indicates that advocacy is challenging and multifaceted and is viewed as a central component of good counseling work; however, more research is needed if we are to fully understand how valuing social justice translates to use of advocacy interventions in career counseling practice. This study aims to fill this theory–practice gap by illuminating the perceptions of advocacy behaviors from career counselors as they reflect upon their own counseling work.
Methodology
Through the use of Q methodology, insight into the decisions, motivations and thought processes of participants can be obtained by capturing their subjective points of view. When considering whether to undertake a Q study, Watts and Stenner (2012) encouraged researchers to consider whether revealing what a population thinks about an issue really matters and can make a real difference. Given the ongoing inequality in the labor market, increased attention and energy around matters of social justice in the counseling profession, the lack of knowledge regarding practitioners’ points of view on advocacy, and career counselors’ proximity to social and economic concerns of clients, the answer for the present study is most certainly yes.
Q methodology is fundamentally different from other quantitative research methodologies in the social sciences. It uses both quantitative and qualitative data to construct narratives of distinct perspectives. The term Q was coined to distinguish this methodology from R; Q measures correlations between persons, whereas R measures trait correlations (Brown, 1980). Rather than subjecting a sample of research participants to a collection of measures as in R methodology, Q methodology subjects a sample of items (i.e., the Q sample) to measurement by a collection of individuals through a ranking procedure known as the Q sort (see Figure 1; Watts & Stenner, 2012). Individuals are the variables in Q methodology, and factor analysis is used to reduce the number of points of view into a smaller number of shared perspectives. Then interviews are conducted to allow participants to provide additional data regarding their rankings of the Q sample items. In this study, career counselors were asked to sort a set of advocacy behaviors according to how important they were to their everyday practice of career counseling. Importance to practice was used as the measure of psychological significance because the study was interested in career counselors’ perspectives on advocacy interventions rather than, for example, their self-reported frequency or competence.
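For readers unfamiliar with by-person factor analysis, the sketch below illustrates the core computation on invented rankings. The data, seed, and use of unrotated principal components are assumptions for illustration only; a dedicated Q analysis would also enforce the forced distribution and rotate the extracted factors.

```python
import numpy as np

# Hypothetical Q-sort data: 19 participants each rank 25 statements
# on a -4 to +4 scale (rows = participants, columns = statements).
rng = np.random.default_rng(seed=0)
sorts = rng.integers(-4, 5, size=(19, 25))

# Persons are the variables in Q: correlate each participant's full
# ranking profile with every other participant's (a 19 x 19 matrix).
person_corr = np.corrcoef(sorts)

# Factor-analyze the by-person correlations; eigendecomposition (PCA)
# stands in here for the centroid extraction plus rotation a dedicated
# Q package would perform.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings: how strongly each participant's viewpoint aligns with each
# extracted shared perspective (here, a two-factor solution).
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
print("Variance explained by two factors:", eigvals[:2].sum() / eigvals.sum())
```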
Q Sample
The Q sample can be considered the instrumentation in Q methodology. The Q sample is a subset of statements drawn from the concourse of communication, which is defined as the entire population of statements about any given topic (McKeown & Thomas, 2013). The goal when creating the Q sample is to provide a comprehensive but manageable representation of the concourse from which it is taken. For this study, the concourse was that of counselor advocacy behaviors.
The Q sampling approach used for this study was indirect, naturalistic and structured-inductive. Researchers should draw their Q sample from a population of 100 to 300 statements (Webler, Danielson, & Tuler, 2009). For this study, I compiled a list of 180 counselor social justice and advocacy behaviors from a variety of sources including the ACA Advocacy Competencies (Lewis et al., 2002), the Social Justice Advocacy Scale (SJAS; Dean, 2009), the National Career Development Association (NCDA) Minimum Competencies (2009), the Council for Accreditation of Counseling and Related Educational Programs (CACREP) Standards (2009), and key articles in counseling scholarly and trade publications.
Consistent with a structured-inductive sampling strategy, these 180 statements were analyzed to identify categories representing different kinds of advocacy behaviors. By removing duplicates and those items that were more aligned with awareness, knowledge or skill rather than behavior, I was able to narrow the list from 180 to 43 statements. These statements were sorted into five domains that were aligned with the four scales of the SJAS (Dean, 2009) and a fifth added domain. The final domains were: Client Empowerment, Collaborative Action, Community Advocacy, Social/Political Advocacy, and Advocacy with Other Professionals. Aligning the Q sample with existing domains was appropriate since advocacy had been previously operationalized in the counseling literature.
Expert reviewers were used to check for researcher bias in the construction of the Q sample, including the addition of the fifth advocacy domain. Three expert reviewers who were faculty members and published on the topic of social justice in career counseling were asked to review the potential Q sample for breadth, coverage, omissions, clarity of phrasing and the appropriateness of the five domains of advocacy. Two agreed to participate and offered their feedback via a Qualtrics survey, leading to a refined Q sample of 25 counselor advocacy behaviors (see Table 1). Five statements were retained in each of the five domains. Finally, the Q sample and Q sorting procedure were piloted with two career counselors, leading to changes in instructions but not in the Q sample itself. Pilot data were not used in the final analysis.
Participants
In Q methodology, participant sampling should be theoretical and include the intentional selection of participants who are likely to have an opinion about the topic of interest (McKeown & Thomas, 2013; Watts & Stenner, 2012). It also is important to invite participants who represent a range of viewpoints and who are demographically diverse. For the current study, the following criteria were required for participant inclusion: (a) holds a master’s degree or higher in counseling and (b) has worked as a career counselor full-time for at least one year in the past two years. For this study, career counselor was defined as having career- or work-related issues as the primary focus of counseling in at least half of the counselor’s case load. Regarding the number of participants in a Q study, emphasis is placed on having enough participants to establish the existence of particular viewpoints rather than on having a large sample, since generalizability is not a goal of Q methodology (Brown, 1980). It also is important to have fewer participants than Q sample items (Watts & Stenner, 2012; Webler et al., 2009).
Participants were recruited through theoretical sampling of my professional network of practitioners, and one participant was recruited through snowball sampling. Nineteen career counselors from six states in the Southeast, West and Midwest regions of the United States participated in the present study. The sample was 68% female (n = 13) and 32% male (n = 6); 84% of participants were White, two were Black, and one was multi-racial. One participant was an immigrant to the United States and a non-native English speaker. The sample was 95% heterosexual, with one participant identifying as gay. Sixty-three percent of participants worked in four-year institutions of higher education and one worked in a community college; 32% (n = 6) provided career counseling in non-profit agencies. The average age was 43 (SD = 12), and the average number of years of post-master’s counseling experience was eight (SD = 7); ages ranged from 28 to 66, and years of post-master’s experience ranged from one and a half to 31.
Q Sorting Procedure
The Q sort is a method of data collection in which participants rank the Q sample statements according to a condition of instruction along a forced quasi-normal distribution (see Figure 1). There is no time limit to the sorting task and participants are able to move the statements around the distribution until they are satisfied with their final configuration. The function of the forced distribution is to encourage active decision making and comparison of the Q sample items to one another (Brown, 1980).
Figure 1
Sample Q Sort Distribution
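Although the figure itself is not reproduced here, the shape of the forced distribution can be recovered from the Q sort values reported in Table 1: nine columns from -4 to +4 with column frequencies of 1, 2, 3, 4, 5, 4, 3, 2 and 1. The following minimal Python sketch, written purely for illustration with hypothetical function names, encodes that shape and verifies that a completed sort fills every slot exactly once:

```python
from collections import Counter

# Column frequencies for a 25-item, -4..+4 forced quasi-normal distribution,
# inferred from the Q sort values reported in Table 1.
DISTRIBUTION = {-4: 1, -3: 2, -2: 3, -1: 4, 0: 5, 1: 4, 2: 3, 3: 2, 4: 1}

def is_valid_sort(qsv_by_item: dict) -> bool:
    """Check that a completed Q sort fills the forced distribution exactly."""
    return Counter(qsv_by_item.values()) == Counter(DISTRIBUTION)

# Example: factor one's array from Table 1 (item number -> Q sort value).
factor_one_array = {1: 1, 2: -2, 3: -1, 4: -2, 5: 0, 6: 0, 7: -2, 8: 1, 9: 3,
                    10: 3, 11: 2, 12: -3, 13: -1, 14: 4, 15: 1, 16: -1, 17: -1,
                    18: 0, 19: 2, 20: 0, 21: 1, 22: 2, 23: -4, 24: 0, 25: -3}
assert is_valid_sort(factor_one_array)
```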
The condition of instruction for this study was, “Sort the following counselor advocacy behaviors according to how important or unimportant they are to your career counseling work.” The two poles of the distribution were most important and most unimportant. Both poles are anchored with most so that the two ends of the distribution represent the items holding the greatest degree of psychological significance for the participant, whereas the middle of the distribution represents items that hold relatively little meaning or are more neutral in importance (Watts & Stenner, 2012).
The Q sorts for this study were conducted both in person and via phone or video chat (e.g., Google Hangouts, Skype). Once informed consent was obtained, I facilitated the Q sorting procedure by reading the condition of instruction, observing the sorting process, and conducting the post-sort interview. Once each participant felt satisfied with his or her sort, the distribution of statements was recorded onto a response sheet for later data entry.
Post-Sort Interview
Immediately following the Q sort, I conducted a semistructured interview with each participant in order to gain a greater understanding of the meaning of the items and their placement, as well as his or her broader understanding of the topic at hand (Watts & Stenner, 2012). The information gathered during the interview is used when interpreting the final emergent factors. Items in the middle of the distribution are not neglected; they are specifically asked about during the post-sort interview so that the researcher can gain an understanding of the entire Q sort for each participant. Although the interview data are crucial to a complete and rigorous factor interpretation, and a post-sort interview should be conducted with every participant in every Q study, the data analysis process is guided by the quantitative criteria for factor analysis and factor extraction. The qualitative interview data, as well as the demographic data, are meant to help the researcher better understand the results of the quantitative analysis.
Data Analysis
Data were entered into the PQMethod program (Schmolck, 2014), and Pearson product-moment correlations were calculated between each pair of Q sorts. Inspection of the correlation matrix revealed that all sorts (i.e., all participants) were positively correlated with one another, some of them significantly so. This indicated a high degree of consensus among the participants regarding the role of advocacy in career counseling, which was further explored through factor analysis.
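As an illustration of this first analytic step, the sketch below computes the person-by-person Pearson correlation matrix from a hypothetical items-by-participants array; the data and names are invented, and PQMethod performs the equivalent computation internally.

```python
import numpy as np

# Hypothetical data: rows are the 25 Q sample items, columns are the 19 sorts
# (participants are the variables in Q methodology).
rng = np.random.default_rng(0)
sorts = rng.integers(-4, 5, size=(25, 19)).astype(float)

# Pearson correlations between every pair of Q sorts (columns as variables).
corr = np.corrcoef(sorts, rowvar=False)   # shape: (19, 19)
print(np.round(corr, 2))
```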
I used centroid factor analysis, following Watts and Stenner’s (2012) recommendation to begin by extracting one factor for every six Q sorts. Centroid factor analysis is the method of choice among Q methodologists because it allows for a fuller exploration of the data than a principal components analysis (McKeown & Thomas, 2013; Watts & Stenner, 2012). Next, I calculated the significance level for factor loadings at p < .01, which was .516 for this 25-item Q sample (i.e., 2.58 × 1/√25).
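The .516 threshold follows from the conventional standard error of a zero-order loading, 1 divided by the square root of the number of Q sample items, multiplied by 2.58 for the .01 level. A one-line check of that arithmetic, assuming only this convention:

```python
import math

n_items = 25
threshold = 2.58 * (1 / math.sqrt(n_items))  # 2.58 / 5
print(round(threshold, 3))  # 0.516
```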
The unrotated factor matrix revealed two factors with eigenvalues near or above the commonly accepted cutoff of 1 according to the Kaiser-Guttman rule (Kaiser, 1970). Brown (1978) argued that although eigenvalues often indicate factor strength or importance, they should not solely guide factor extraction in Q methodology since “the significance of Q factors is not defined objectively (i.e., statistically), but theoretically in terms of the social-psychological situation to which the emergent factors are functionally related” (p. 118). Because there currently is little empirical evidence of differing perspectives on advocacy among career counselors, two factors were retained for rotation.
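In Q methodology, a factor’s eigenvalue is the sum of its squared loadings across all sorts, and its explained variance is that eigenvalue divided by the number of sorts. A quick sketch with a hypothetical loading matrix shows the screen, which, as Brown (1978) notes, should inform rather than dictate retention:

```python
import numpy as np

# Hypothetical unrotated loadings: 19 sorts x 3 extracted factors.
rng = np.random.default_rng(2)
loadings = rng.uniform(-0.3, 0.8, size=(19, 3))

eigenvalues = (loadings ** 2).sum(axis=0)        # sum of squared loadings
variance_pct = 100 * eigenvalues / loadings.shape[0]
# Kaiser-Guttman screen only; retention in Q also weighs theoretical grounds.
keep = eigenvalues >= 1.0
print(eigenvalues.round(2), variance_pct.round(1), keep)
```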
To clarify the factor structure, I rotated the two extracted factors using the varimax procedure. I then flagged those sorts that loaded significantly (i.e., at or above 0.516) onto only one factor after rotation. Four participants (2, 8, 9 and 17) loaded significantly onto both rotated factors and were therefore excluded from further analysis (Brown, 1980; Watts & Stenner, 2012). The two rotated factors together accounted for 60% of the variance in perspectives on advocacy behaviors, and 15 of the original 19 participants were retained in this factor solution.
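For readers who want to see the mechanics, the sketch below applies a textbook varimax rotation to a toy loading matrix and flags sorts that load at or above .516 on exactly one factor. The varimax routine is a standard public-domain implementation, not PQMethod’s code, and the loadings are hypothetical.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (sorts x factors) loading matrix."""
    n, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / n
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        new_variance = s.sum()
        if new_variance < variance * (1 + tol):
            break
        variance = new_variance
    return loadings @ rotation

THRESHOLD = 0.516  # significant loading at p < .01 for a 25-item Q sample

unrotated = np.array([[0.70, 0.35], [0.62, 0.41], [0.30, 0.68]])  # toy data
rotated = varimax(unrotated)
significant = np.abs(rotated) >= THRESHOLD
# Keep sorts that load significantly on exactly one factor; confounded
# sorts (significant on both factors) are excluded from factor estimation.
flags = significant.sum(axis=1) == 1
print(rotated.round(3), flags)
```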
Q methodology uses only orthogonal rotation techniques, so the rotated factor axes themselves are uncorrelated. Even so, the factor arrays that estimate each viewpoint can be significantly correlated and still justify retention as separate factors (Watts & Stenner, 2012). The two factor arrays in this study were correlated at 0.71. This correlation indicates that the perspectives expressed by the two factor arrays share a common point of view but are still distinguishable and worthy of exploration, as long as the general degree of consensus is kept in mind (Watts & Stenner, 2012).
Constructing Factor Arrays
After the two rotated factors were identified, factor arrays were constructed in PQMethod. A factor array is a composite Q sort and the best possible estimate of the factor’s viewpoint using the 25 Q sample items. First, a factor weight was calculated for each of the 15 Q sorts that loaded onto a factor. Next, normalized factor scores (z scores) were calculated for each statement on each factor, which were then converted into factor arrays (see Table 1). In Q methodology, unlike traditional factor analysis, attention is focused more on factor scores than factor loadings. Since factor scores are based on weighted averages, Q sorts with higher factor loadings contribute proportionally more to the final factor score for each item in a factor than those with relatively low factor loadings. Finally, factors were named by examining the distinguishing statements and interview data of participants who loaded onto the respective factors. Factor one was labeled focus on clients and factor two was labeled focus on multiple roles.
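A minimal sketch of this weighting arithmetic follows, assuming Brown’s (1980) weight formula w = f / (1 − f²) and hypothetical loadings and sorts; PQMethod carries out these steps, including the final conversion of ranked z scores back into the forced distribution:

```python
import numpy as np

# Hypothetical inputs: 25 items x 3 sorts that defined one factor,
# plus those sorts' rotated loadings on that factor.
rng = np.random.default_rng(1)
sorts = rng.integers(-4, 5, size=(25, 3)).astype(float)
loadings = np.array([0.82, 0.70, 0.55])

# Brown's (1980) factor weights: higher-loading sorts count for more.
weights = loadings / (1 - loadings ** 2)

# Weighted composite per item, standardized into z scores.
composite = sorts @ weights / weights.sum()
z_scores = (composite - composite.mean()) / composite.std()

# Rank the z scores back into the forced -4..+4 distribution
# (frequencies 1, 2, 3, 4, 5, 4, 3, 2, 1) to form the factor array.
qsv_slots = np.repeat(np.arange(-4, 5), [1, 2, 3, 4, 5, 4, 3, 2, 1])
factor_array = np.empty(25, dtype=int)
factor_array[np.argsort(z_scores)] = qsv_slots
print(factor_array)
```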
Factor Characteristics
Factor one was labeled focus on clients and accounted for 32% of the variance in perspectives on advocacy behaviors. It included nine participants: six females and three males; eight identified as White and one as multi-racial. The average age on this factor was about 51 (SD = 10.33), ranging from 37 to 66. Persons on this factor had on average 11 years of post-master’s counseling experience (SD = 8.6), ranging from one and a half to 31 years. Fifty-six percent of participants on this factor worked in four-year colleges or universities, 33% in non-profit agencies, and one person worked at a community college.
Factor two was labeled focus on multiple roles and accounted for 28% of the variance in career counselors’ perspectives on advocacy behaviors. It included six participants. Five participants on this factor identified as female and one identified as male. Five persons were White; one was Black. The average age of participants on this factor was almost 35 (SD = 6.79), ranging from 29 to 48, and they had an average of just over seven years of post-master’s experience (SD = 3.76), ranging from three and a half to 14 years. Four worked in higher education, and two worked in non-profit settings.
Factor Interpretation
In the factor interpretation phase of data analysis, the researcher constructs a narrative for each factor by incorporating post-sort interview data with the factor arrays to communicate the rich point of view of each factor (Watts & Stenner, 2012). Each participant’s interview was considered only in conjunction with the other participants on the factor on which they loaded. I read post-sort interview transcripts, looking for shared perspectives and meaning, in order to understand each factor array and enrich each factor beyond the statements of the Q sample. Thus, the results are reported below in narrative form, incorporating direct quotes and paraphrased summaries from interview data, but structured around the corresponding factor arrays.
Table 1
Q Sample Statements, Factor Scores and Q Sort Values
No | Statement | Factor 1 Score | Factor 1 QSV | Factor 2 Score | Factor 2 QSV
1 | Question intervention practices that appear inappropriate. | 0.09 | 1 | 0.54 | 1
2 | Seek feedback regarding others’ perceptions of my advocacy efforts. | -0.85 | -2 | -0.75 | -1
3 | Serve as a mediator between clients and institutions. | -0.47 | -1 | -1.05 | -2
4 | Express views on proposed bills that will impact clients. | -0.97 | -2 | -1.96 | -4
5 | Maintain open dialogue to ensure that advocacy efforts are consistent with group goals. | -0.19 | 0 | -0.05 | 0
6 | Encourage clients to research the laws and policies that apply to them. | -0.31 | 0 | 0.15 | 0
7 | Collect data to show the need for change in institutions. | -0.67 | -2 | -0.75 | -2
8 | Educate other professionals about the unique needs of my clients. | 0.87 | 1 | 0.86 | 2
9 | Help clients develop needed skills. | 1.67 | 3 | 0.42 | 1
10 | Assist clients in carrying out action plans. | 1.31 | 3 | 1.06 | 2
11 | Help clients overcome internalized negative stereotypes. | 1.02 | 2 | 0.89 | 2
12 | Conduct assessments that are inclusive of community members’ perspectives. | -1.31 | -3 | 0.50 | 1
13 | With allies, prepare convincing rationales for social change. | -0.35 | -1 | -1.36 | -3
14 | Identify strengths and resources of clients. | 2.17 | 4 | 1.62 | 3
15 | Get out of the office to educate people about how and where to get help. | 0.58 | 1 | -0.47 | -1
16 | Teach colleagues to recognize sources of bias within institutions and agencies. | -0.37 | -1 | -0.37 | -1
17 | Deal with resistance to change at the community/system level. | -0.43 | -1 | -0.21 | 0
18 | Collaborate with other professionals who are involved in disseminating public information. | -0.33 | 0 | -0.40 | -1
19 | Help clients identify the external barriers that affect their development. | 1.08 | 2 | 1.46 | 3
20 | Use multiple sources of intervention, such as individual counseling, social advocacy and case management. | -0.32 | 0 | 1.73 | 4
21 | Train other counselors to develop multicultural knowledge and skills. | 0.15 | 1 | 0.19 | 0
22 | Work to ensure that clients have access to the resources necessary to meet their needs. | 1.03 | 2 | 0.85 | 1
23 | Work to change legislation and policy that negatively affects clients. | -1.78 | -4 | -1.39 | -3
24 | Ask other counselors to think about what social change is. | -0.25 | 0 | -0.22 | 0
25 | Communicate with my legislators regarding social issues that impact my clients. | -1.45 | -3 | -1.28 | -2
Note. Q sort values range from -4 to 4 to correspond with the Q distribution (Figure 1), where 4 is most important and -4 is most unimportant; QSV = Q sort value.
Results
Factor 1: Focus on Clients
For participants on the focus on clients factor, the most important advocacy behavior was to “identify client strengths and resources” (see Table 1). When speaking about this item, participants often discussed teaching clients self-advocacy skills, stating that this is a key way in which career counselors promote social justice. Identifying client strengths and resources was referred to as “the starting point,” “the bottom line” and even the very “definition of career counseling.” One participant said that counseling is about “empowering our clients or jobseekers, whatever we call them, to do advocacy on their own behalf and to tell their story.” In general, persons on this factor were most concerned with empowering individual clients; for example, “I would say, even when we’re doing group counseling and family counseling, ultimately it’s about helping the person in the one-to-one.” Similarly, one participant said, “Instead of fighting for the group in legislation or out in the community, I’m working with each individual to help them better advocate for themselves.” Interview data indicated that social justice was a strongly held value for persons on this factor, but they typically emphasized the need for balancing their views on social injustice with their clients’ objectives; they wanted to take care not to prioritize their own agendas over those of their clients.
Several participants on this factor perceived items related to legislation or policy change as among the least client-centered behaviors and therefore as the more unimportant advocacy behaviors in their career counseling work. Persons on this factor stated that advocacy at the systems level was neither a strength of theirs nor a preference. A few reported that there are other people in their offices or campuses whose job is to focus on policy or legislative change. There also was a level of skepticism about counselors’ power to influence social change. In regard to influencing legislative change in support of clients, one participant said, “I don’t think in my lifetime that is going to happen. Maybe someday it will. I’m just thinking about market change right now instead of legislative change.”
Interview data revealed that career counselors on this factor thought about advocacy in terms of leadership, both positively and negatively. One person felt that a lack of leadership was a barrier to career counselors doing more advocacy work. Another person indicated that leaders were the ones who publicly called for social change and that this was neither his personality nor his approach to making change; he preferred instead to act at the micro level. Finally, persons on this factor expressed that conversations about social change or social justice were seen as potentially divisive in their work settings. One White participant said the following:
There is a reluctance to do social justice work because—and it’s mostly White people—people really don’t understand what it means, or feel like they don’t have a right to do that, or feel like they might be overstepping. Talking about race or anything else, people are really nervous and they don’t want to offend or say something that might be wrong, so as a result they just don’t engage on that level or on that topic.
Factor 2: Focus on Multiple Roles
One distinguishing feature of the focus on multiple roles factor was the relatively high importance placed on using multiple sources of intervention (see Table 1). Participants described this as being all-encompassing of what a career counselor does and reflective of the multiple roles a career counselor may hold. One participant said, “You never know what the client is going to come in with,” arguing that career counselors have to be open to multiple sources of intervention by necessity. Another participant indicated that she wished she could rely more on multiple sources of intervention but that the specialized nature of her office constricted her ability to do so.
Participants on this factor cited a lack of awareness or skills as a barrier to their implementing more advocacy behaviors. They were quick to identify social justice as a natural concern of career counselors and one that career counselors are well qualified to address due to their ability to remain aware of personal, mental health and career-related concerns simultaneously. One participant said:
I don’t know if the profession of career counseling is really seen as being as great as it is in that most of us have counseling backgrounds and can really tackle the issues of career on a number of different levels.
In talking about the nature of career counseling, another participant said, “Social justice impacts work in so many ways. It would make sense for those external barriers to come into our conversations.”
Regarding collaborating with other professionals to prepare convincing rationales for social change, one participant stated that there are already enough rationales for social change; therefore, this advocacy behavior was seen as less important to her. Persons on this factor placed relatively higher importance on valuing feedback on advocacy efforts than did participants on factor one. One participant said she would like to seek feedback more often but had not thought of doing so in a while: “I did this more when I was in graduate school because you are thinking about your thinking all the time. As a practitioner, as long as social justice and advocacy are on my radar, that’s good.”
Discussion
Neither setting nor gender appeared to differentiate the factors, but age and years of post-master’s experience may have been distinguishing variables. Younger individuals and those with fewer years of post-master’s experience tended to load onto factor two. Factor one had an average age of 51 compared to 35 for factor two, and the average age for all study participants was 43. It is interesting to note that the four participants who loaded onto both factors and were therefore dropped from analysis had an average of just over two years of post-master’s counseling experience versus 11 for factor one and seven for factor two. It is possible that their more recent training regarding advocacy may account for some differences in perspective from those of more experienced counselors.
Participants on factor one (focus on clients) who emphasized the importance of individual clients tended to perceive it as more difficult to have conversations about social justice with their peers or supervisors. In contrast, participants on factor two (focus on multiple roles) were more likely to cite a lack of knowledge or skills as their reason for not engaging in more advocacy behaviors beyond the client level. Factor arrays indicated that factor one participants viewed engaging at the community level as more important, whereas participants on factor two viewed conversations with colleagues and clients about social justice as more important to their work.
The broader view of persons on factor two regarding the career counselor’s role and their openness to acknowledging their own lack of awareness or skills may reflect a different kind of socialization around advocacy compared to persons on factor one. Career counselors who graduated from counseling programs prior to the emphasis on multicultural competence in the early 1990s or before the inclusion of social justice in the literature and CACREP standards in the first decade of the 21st century may have had limited exposure to thinking about contextual or social factors that impact client wellness. Persons on both factors, however, expressed interest in social justice and felt that the vast majority of advocacy behaviors were important.
In post-sort interviews, participants from both factors described a gradual shift in emphasis from a focus on the individual on the right-hand (most important) side of the Q sort distribution to an emphasis on legislation on the left-hand (most unimportant) side. For example, the statement identify strengths and resources of clients was one of the most important behaviors for nearly every participant. Likewise, the statement work to change legislation and policy that negatively affects clients was ranked among the most unimportant advocacy behaviors for both factors. Notably, the statement encourage clients to research the laws and policies that apply to them was a consensus statement with a Q sort value of 0, the very middle of the distribution. Since this advocacy behavior is both client focused and presumably would provide clients with important self-advocacy skills, it is interesting that it was ranked lower than other items related to client self-advocacy. Some participants indicated that they considered this item a “passive” counselor behavior in that they might encourage clients to research laws but could not or would not follow up with clients on this task. One participant said she would like to encourage clients to research laws that apply to them but shared that she would first need to learn more about the laws that impact her clients in order to feel effective in using this intervention.
Participants were asked directly about potential barriers to advocacy and potential strengths of career counselors in promoting social justice. Because these interview questions did not reference specific Q sample items, responses from participants on both factors are reported together below.
Barriers to Promoting Social Justice
In the post-sort interviews, nearly every participant mentioned lack of time as a barrier to implementing more advocacy in career counseling, and this barrier often took the form of limited institutional support. For example, participants indicated that while their supervisors would not stop them from doing advocacy work, they would not provide material support (e.g., time off, reduced case load) for it either. This finding is consistent with other literature suggesting that career counselors report a lack of institutional support for engaging in advocacy (Arthur et al., 2009).
Another major barrier to advocacy was a lack of skill or confidence in one’s ability as an advocate. Advocacy at the social/political level requires a unique set of skills (M. A. Lee, Smith, & Henry, 2013), which practitioners in the present study may or may not have learned during their counseling training. Pieterse, Evans, Risner-Butner, Collins, and Mason (2009) reviewed 54 syllabi from required multicultural courses in American Psychological Association (APA)- and CACREP-accredited programs and found that awareness and knowledge tended to be emphasized more than skill building or application of social justice advocacy. This seems to have been reflected in the responses from many participants in the present study.
Participants on both factors indicated that they held some negative associations with advocacy work, calling it “flag waving” or “yelling and screaming” about inequality or social issues. They expressed some concern about how they might be perceived by their peers if they were to engage in advocacy; however, involvement in this study seemed to provide participants with a new understanding of advocacy as something that happens at the individual as well as the social level. Participants appeared to finish the data collection sessions with a more positive understanding of what advocacy is and could be.
Strengths of Career Counselors in Promoting Social Justice
In addition to discussing barriers to advocacy, participants were asked directly about strengths of career counselors in promoting social justice and were able to identify many. First and foremost, participants saw the ability to develop one-on-one relationships with clients as a strength. One participant nicely captured the essence of all responses in this area by stating, “The key thing is our work one-on-one with an individual to say that even though you’re in a bad place, you have strengths, you have resources, and you have value.” Participants indicated that social change happens through a process of empowering clients, instilling hope and seeing diversity as a strength of a client’s career identity. The ability to develop strong counseling relationships was attributed partially to participants’ counseling training and identity, as well as to their exposure to a broad range of client concerns due to the inseparable nature of work from all other aspects of clients’ lives (Herr & Niles, 1998; Tang, 2003).
Career counselors in this study served diverse populations and highly valued doing so. These participants described multicultural counseling skills and experience as central to competent career counseling and to advocacy. They felt that they possessed and valued multicultural competence, which bodes well for their potential to engage in competent and ethical advocacy work with additional training, experience and supervision (Crook, Stenger, & Gesselman, 2015; Vespia, Fitzpatrick, Fouad, Kantamneni, & Chen, 2010).
Finally, participants felt that career counseling is seen as more accessible than mental health counseling to some clients, giving career counselors unique insight into clients’ social and personal worlds. Participants reported having a broad perspective on their clients’ lives and therefore unique opportunities to advocate for social justice. Relatedly, participants noted that the more concrete and tangible nature of career counseling and its outcomes (e.g., employment) may lead policymakers to be interested in hearing career counselors’ perspectives on social issues related to work. One participant noted that “there’s a huge conversation to be had around work and social justice” and that career counselors’ key strength “is empowering clients and the broader community to understand the role of work.”
Implications for Career Counselors, Counselor Educators, and Supervisors
Nearly all participants described the sorting process as thought provoking and indicated that social justice and advocacy were topics they appreciated the opportunity to think more about. There was a strong desire among some practitioners in this study to talk more openly with colleagues about social justice and its connection to career counseling, but a lingering hesitation as well. Therefore, one implication of the present study is that practitioners should begin to engage in discussions about this topic with colleagues and leaders in the profession. If there is a shared value for advocacy beyond the individual level, but time and skills are perceived as barriers, perhaps a larger conversation about the role of career counselors is timely. Career counselors may benefit from finding like-minded colleagues with whom to talk about social justice and advocacy. Support from peers may help practitioners strategize ways to question or challenge coworkers who may be practicing career counseling in ways that hinder social justice.
To move toward greater self-awareness and ethical advocacy, practitioners and career counseling leaders must ask themselves critical and self-reflexive questions about their roles and contributions in promoting social justice (McIlveen & Patton, 2006; Prilleltensky & Stead, 2012). Some authors have indicated there is an inherent tension in considering a social justice perspective and that starting such conversations can even lead to more questions than answers (Prilleltensky & Stead, 2012; Stead & Perry, 2012). Counselors should turn their communication skills and tolerance for ambiguity inward and toward one another in order to invite open and honest conversations about their role in promoting social justice for clients and communities. The participants in this study seem eager to do so, though leadership may be required to get the process started in a constructive and meaningful way.
Counselor educators and supervisors can provide counselors-in-training with increased experience in systemic-level advocacy by integrating the ACA Advocacy Competencies and the Multicultural and Social Justice Counseling Competencies into all core coursework. Even though broaching issues of social justice has been reported as challenging and potentially risky, counselor educators should integrate such frameworks and competencies in active and experiential ways (Kiselica & Robinson, 2001; M. A. Lee et al., 2013; Lopez-Baez & Paylo, 2009; Manis, 2012). Singh and colleagues (2010) found that even self-identified social justice advocates struggled at times with initiating difficult conversations with colleagues; they argued that programs should do more to help counselors-in-training develop skills “to anticipate and address the inevitable interpersonal challenges inherent in advocacy work” (p. 141). Skills in leadership, teamwork and providing constructive feedback might be beneficial to prepare future counselors for addressing injustice. Furthermore, Crook and colleagues (2015) found that advocacy training via coursework or workshops is associated with higher levels of perceived advocacy competence among school counselors, lending more support in favor of multi-level training opportunities.
Limitations
The current study is one initial step in a much-needed body of research regarding advocacy practice in career counseling. It did not measure actual counselor engagement in advocacy, which is important to fully understanding the current state of advocacy practice; rather, it measured the perceived relative importance of advocacy behaviors. Researcher subjectivity may be considered a limitation of this study, as researcher decisions influenced the construction of the Q sample, the factor analysis and the interpretation of the emergent factors. By integrating feedback from two expert reviewers during construction of the Q sample, I reduced the potential for bias at the design stage. Factor interpretation is open to the researcher’s unique lens and also may be considered a limitation, but when done well, interpretation in Q methodology is constrained by the factor array and interview data. Although generalizability is not a goal of Q methodology, the sample size in this study is small and therefore limits the scope of the findings.
Suggestions for Future Research and Conclusion
Advocacy is central to career counseling’s relevance in the 21st century (Arthur et al., 2009; Blustein, McWhirter, & Perry, 2005; McMahon, Arthur, & Collins, 2008a), yet due to the complexity and personal nature of this work, more research is required if we are to engage in advocacy competently, ethically and effectively. There appears to be interest among career counselors in gaining additional skills and knowledge regarding advocacy, so future research could include analyzing the effects of a training curriculum on perceptions of and engagement with advocacy. Outcome research could also be beneficial to understand whether career counselors who engage in high levels of advocacy report different client outcomes than those who do not. Finally, research with directors of career counseling departments could be helpful to understand what, if any, changes to career counselors’ roles are possible if career counselors are interested in doing more advocacy work. Understanding the perspectives of these leaders could help further the conversation regarding the ideals of social justice and the reality of expectations and demands faced by career counseling offices and agencies.
This research study is among the first to capture U.S. career counselors’ perspectives on a range of advocacy behaviors rather than attitudes about social justice in general. It adds empirical support to the notion that additional conversations and training around advocacy are wanted and needed among practicing career counselors. Stead (2013) wrote that knowledge becomes accepted through discourse; it is hoped that the knowledge this study produces will add to the social justice discourse in career counseling and move the profession toward a more integrated understanding of how career counselors view the advocate role and how they can work toward making social justice a reality.
Conflict of Interest and Funding Disclosure
The author conducted this research with the assistance of grants awarded by the National Career Development Association, the North Carolina Career Development Association, and the Southern Association for Counselor Education and Supervision.
References
American Counseling Association. (2014). 2014 ACA Code of Ethics. Alexandria, VA: Author. Retrieved from http://www.counseling.org/Resources/aca-code-of-ethics.pdf
Arthur, N., Collins, S., Marshall, C., & McMahon, M. (2013). Social justice competencies and career development practices. Canadian Journal of Counselling and Psychotherapy, 47, 136–154.
Arthur, N., Collins, S., McMahon, M., & Marshall, C. (2009). Career practitioners’ views of social justice and barriers for practice. Canadian Journal of Career Development, 8, 22–31.
Blustein, D. L. (2006). The psychology of working. Mahwah, NJ: Erlbaum.
Blustein, D. L., McWhirter, E. H., & Perry, J. C. (2005). An emancipatory communitarian approach to vocational development theory, research, and practice. The Counseling Psychologist, 33, 141–179. doi:10.1177/0011000004272268
Brown, S. R. (1978). The importance of factors in Q methodology: Statistical and theoretical considerations. Operant Subjectivity, 1, 117–124.
Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science. New Haven, CT: Yale University Press.
Bureau of Labor Statistics, U.S. Department of Labor. (2016a). Charting the labor market: Data from the Current Population Survey (CPS). Retrieved from http://www.bls.gov/web/empsit/cps_charts.pdf
Bureau of Labor Statistics, U.S. Department of Labor. (2016b). The employment situation – May 2016. Retrieved from http://www.bls.gov/news.release/pdf/empsit.pdf
Chope, R. C. (2010). Applying the ACA advocacy competencies in employment counseling. In M. J. Ratts, R. L. Toporek, & J. A. Lewis (Eds.), ACA advocacy competencies: A social justice framework for counselors (pp. 225–236). Alexandria, VA: American Counseling Association.
Cook, E. P., Heppner, M. J., & O’Brien, K. M. (2005). Multicultural and gender influences in women’s career development: An ecological perspective. Journal of Multicultural Counseling and Development, 33, 165–179. doi:10.1002/j.2161-1912.2005.tb00014.x
Council for Accreditation of Counseling and Related Educational Programs. (2009). 2009 standards. Retrieved from http://www.cacrep.org/wp-content/uploads/2013/12/2009-Standards.pdf
Crook, T. M., Stenger, S., & Gesselman, A. (2015). Exploring perceptions of social justice advocacy competence among school counselors. Journal of Counselor Leadership and Advocacy, 2, 65–79.
Dean, J. K. (2009). Quantifying social justice advocacy competency: Development of the social justice advocacy scale (Doctoral dissertation). Retrieved from http://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1039&context=cps_diss
Fickling, M. J., & Gonzalez, L. M. (2016). Linking multicultural counseling and social justice through advocacy. Journal of Counselor Leadership and Advocacy. doi:10.1080/2326716X.2015.1124814
Herr, E. L., & Niles, S. G. (1998). Career: Social action on behalf of purpose, productivity, and hope. In C. C. Lee & G. R. Walz (Eds.), Social action: A mandate for counselors (pp. 117–136). Alexandria, VA: American Counseling Association.
Kaiser, H. F. (1970). A second generation little jiffy. Psychometrika, 35, 401–415. doi:10.1007/BF02291817
Kiselica, M. S., & Robinson, M. (2001). Bringing advocacy counseling to life: The history, issues, and human dramas of social justice work in counseling. Journal of Counseling & Development, 79, 387–397. doi:10.1002/j.1556-6676.2001.tb01985.x
Lee, C. C., & Hipolito-Delgado, C. P. (2007). Introduction: Counselors as agents of social justice. In C. C. Lee (Ed.), Counseling for social justice (2nd ed., pp. xiii–xxviii). Alexandria, VA: American Counseling Association.
Lee, M. A., Smith, T. J., & Henry, R. G. (2013). Power politics: Advocacy to activism in social justice counseling. Journal for Social Action in Counseling and Psychology, 5, 70–94. Retrieved from http://www.psysr.org/jsacp/Lee-V5n3-13_70-94.pdf
Lewis, J. A., Arnold, M. S., House, R., & Toporek, R. L. (2002). ACA advocacy competencies. American Counseling Association. Retrieved from https://www.counseling.org/docs/default-source/competencies/advocacy_competencies.pdf?sfvrsn=9
Lopez-Baez, S. I., & Paylo, M. J. (2009). Social justice advocacy: Community collaboration and systems advocacy. Journal of Counseling & Development, 87, 276–283. doi:10.1002/j.1556-6678.2009.tb00107.x
Manis, A. A. (2012). A review of the literature on promoting cultural competence and social justice agency among students and counselor trainees: Piecing the evidence together to advance pedagogy and research. The Professional Counselor, 2, 48–57. doi:10.15241/aam.2.1.48
McIlveen, P., & Patton, W. (2006). A critical reflection on career development. International Journal for Educational and Vocational Guidance, 6, 15–27. doi:10.1007/s10775-006-0005-1
McKeown, B., & Thomas, D. B. (2013). Q methodology (2nd ed.). Thousand Oaks, CA: Sage.
McMahon, M., Arthur, N., & Collins, S. (2008a). Social justice and career development: Looking back, looking forward. Australian Journal of Career Development, 17, 21–29. doi:10.1177/103841620801700205
McMahon, M., Arthur, N., & Collins, S. (2008b). Social justice and career development: Views and experiences of Australian career development practitioners. Australian Journal of Career Development, 17(3), 15–25. doi:10.1177/103841620801700305
Milner, A., Page, A., & LaMontagne, A. D. (2013). Long-term unemployment and suicide: a systematic review and meta-analysis. PLoS ONE, 8, e51333. doi:10.1371/journal.pone.0051333
National Career Development Association. (2009). Minimum competencies for multicultural career counseling and development. Retrieved from http://ncda.org/aws/NCDA/pt/sp/guidelines
Niles, S. G., Amundson, N. E., & Neault, R. A. (2010). Career flow: A hope-centered approach to career development. Boston, MA: Pearson Education.
Paul, K. I., & Moser, K. (2009). Unemployment impairs mental health: Meta-analyses. Journal of Vocational Behavior, 74, 264–282. doi:10.1016/j.jvb.2009.01.001
Pieterse, A. L., Evans, S. A., Risner-Butner, A., Collins, N. M., & Mason, L. B. (2009). Multicultural competence and social justice training in counseling psychology and counselor education: A review and analysis of a sample of multicultural course syllabi. The Counseling Psychologist, 37, 93–115. doi:10.1177/0011000008319986
Pope, M., Briddick, W. C., & Wilson, F. (2013). The historical importance of social justice in the founding of the National Career Development Association. The Career Development Quarterly, 61, 368–373. doi:10.1002/j.2161-0045.2013.00063.x
Pope, M., & Pangelinan, J. S. (2010). Using the ACA advocacy competencies in career counseling. In M. J. Ratts, R. L. Toporek, & J. A. Lewis (Eds.), ACA advocacy competencies: A social justice framework for counselors (pp. 209–223). Alexandria, VA: American Counseling Association.
Prilleltensky, I., & Stead, G. B. (2012). Critical psychology and career development: Unpacking the adjust–challenge dilemma. Journal of Career Development, 39, 321–340. doi:10.1177/0894845310384403
Ratts, M. J., Lewis, J. A., & Toporek, R. L. (2010). Advocacy and social justice: A helping paradigm for the 21st century. In M. J. Ratts, R. L. Toporek, & J. A. Lewis (Eds.), ACA advocacy competencies: A social justice framework for counselors (pp. 3–10). Alexandria, VA: American Counseling Association.
Ratts, M. J., Singh, A. A., Nassar-McMillan, S., Butler, S. K., & McCullough, J. R. (2015). Multicultural and social justice counseling competencies. Retrieved from http://www.counseling.org/knowledge-center/competencies
Sampson, J. P., Jr., Dozier, V. C., & Colvin, G. P. (2011). Translating career theory to practice: The risk of unintentional social injustice. Journal of Counseling & Development, 89, 326–337. doi:10.1002/j.1556-6678.2011.tb00097.x
Savickas, M. L. (1997). Career adaptability: An integrative construct for life-span, life-space theory. The Career Development Quarterly, 45, 247–259. doi:10.1002/j.2161-0045.1997.tb00469.x
Schmolck, P. (2014). PQMethod 2.35 with PQROT 2.0 [Computer software and manual]. Retrieved from http://schmolck.userweb.mwn.de/qmethod
Singh, A. A., Urbano, A., Haston, M., & McMahan, E. (2010). School counselors’ strategies for social justice change: A grounded theory of what works in the real world. Professional School Counseling, 13, 135–145. doi:10.5330/PSC.n.2010-13.135
Stead, G. B. (2013). Social constructionist thought and working. In D. L. Blustein (Ed.), The Oxford handbook of the psychology of working (pp. 37–48). New York, NY: Oxford University Press.
Stead, G. B., & Perry, J. C. (2012). Toward critical psychology perspectives of work-based transitions. Journal of Career Development, 39, 315–320. doi:10.1177/0894845311405661
Tang, M. (2003). Career counseling in the future: Constructing, collaborating, advocating. The Career Development Quarterly, 52, 61–69. doi:10.1002/j.2161-0045.2003.tb00628.x
Toporek, R. L., Lewis, J. A., & Crethar, H. C. (2009). Promoting systemic change through the ACA advocacy competencies. Journal of Counseling & Development, 87, 260–268. doi:10.1002/j.1556-6678.2009.tb00105.x
Vespia, K. M., Fitzpatrick, M. E., Fouad, N. A., Kantamneni, N., & Chen, Y.-L. (2010). Multicultural career counseling: A national survey of competencies and practices. The Career Development Quarterly, 59, 54–71. doi:10.1002/j.2161-0045.2010.tb00130.x
Watts, S., & Stenner, P. (2012). Doing Q methodological research: Theory, method and interpretation. Thousand Oaks, CA: Sage.
Webler, T., Danielson, S., & Tuler, S. (2009). Using Q method to reveal social perspectives in environmental research. Greenfield, MA: Social and Environmental Research Institute. Retrieved from http://seri-us.org/sites/default/files/Qprimer.pdf
Melissa J. Fickling, NCC, is an Assistant Professor at the University of Memphis. Correspondence can be addressed to Melissa J. Fickling, University of Memphis, Ball Hall 100, Memphis, TN 38152, mfckling@memphis.edu.
Mar 24, 2016 | Article, Volume 6 - Issue 1
Chad M. Yates, Courtney M. Holmes, Jane C. Coe Smith, Tiffany Nielson
Implementing continuous feedback loops between clients and counselors has been found to have a significant impact on the effectiveness of counseling (Shimokawa, Lambert, & Smart, 2010). Feedback-informed treatment (FIT) systems are beneficial to counselors and clients because they provide clinicians with a wide array of client information, such as which clients are plateauing in treatment, deteriorating, or at risk for dropping out (Lambert, 2010; Lambert, Hansen, & Finch, 2001). Access to this type of information is imperative because counselors have been shown to have poor predictive validity in determining whether clients are deteriorating during the counseling process (Hannan et al., 2005). Furthermore, recent efforts by researchers show that FIT systems based inside university counseling centers have beneficial training features that positively impact the professional development of counseling students (Reese, Norsworthy, & Rowlands, 2009; Yates, 2012). To date, however, few resources exist on how to infuse FIT systems into counselor education curricula and training programs.
This article addresses the current lack of information regarding the implementation of a FIT system within counselor education curricula by discussing: (1) an overview and implementation of a FIT system; (2) a comprehensive review of the psychometric properties of three main FIT systems; (3) benefits that the use of FIT systems hold for counselors-in-training; and (4) how the infusion of FIT systems within a counseling curriculum can help assess student learning outcomes.
Overview and Implementation of a FIT System
FIT systems are continual assessment procedures that include weekly feedback about a client’s current symptomology and perceptions of the therapeutic process in relation to previous counseling session scores. These systems also can include other information, such as self-reported suicidal ideation, reported substance use, or other specific responses (e.g., current rating of depressive symptomology). FIT systems compare clients’ current session scores to previous session scores and provide a recovery trajectory, often graphed, that can help counselors track the progress made through the course of treatment (Lambert, 2010). Examples of FIT systems include the Outcome Questionnaire (OQ-45.2; Lambert et al., 1996), Session Rating Scale (SRS; Miller, Duncan, & Johnson, 2000), Outcome Rating Scale (ORS; Miller & Duncan, 2000), and the Counseling Center Assessment of Psychological Symptoms (CCAPS; Locke et al., 2011), all of which are described in this article.
Variety exists regarding how FIT systems are used within the counseling field. These variations include the selected measure or test, frequency of measurement, type of feedback given to counselors and whether or not feedback is shared with clients on a routine basis. Although some deviations exist, all feedback systems contain consistent procedures that are commonly employed when utilizing a system during practice (Lambert, Hansen, & Harmon, 2010). The first procedure in a FIT system includes the routine measurement of a client’s symptomology or distress during each session. This frequency of once per session is important as it allows counselors to receive direct, continuous feedback on how the client is progressing or regressing throughout treatment. Research has demonstrated that counselors who receive regular client feedback have clients who stay in treatment longer (Shimokawa et al., 2010); thus, the feedback loop provided by a FIT system is crucial in supporting clients through the therapeutic process.
The second procedure of a FIT system includes showcasing the results of the client’s symptomology or distress level in a concise and usable way. Counselors who treat several clients benefit from accessible and comprehensive feedback forms. This ease of access is important because counselors may be more likely to buy in to the use of feedback systems if they can use them in a time-effective manner.
The last procedure of FIT systems includes the adjustment of counseling approaches based upon the results of the feedback. Although research in this area is limited, some studies have observed that feedback systems do alter the progression of treatment. Lambert (2010) suggested that receiving feedback on what is working is apt to positively influence a counselor to continue these behaviors. Yates (2012) found that continuous feedback sets benchmarks of performance for both the client and the counselor, which slowly alters treatment approaches. If the goal of counseling is to decrease symptomology or increase functioning, frequently observing objective progress toward these goals using a FIT system can help increase the potential for clients to achieve these goals through targeted intervention.
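To make the three procedures concrete, here is a minimal sketch of a per-session feedback loop, assuming a generic outcome score in which lower is better. The class name, the change-from-intake rule, and the numeric threshold are hypothetical stand-ins for what a commercial system such as OQ Analyst does with its own published algorithms.

```python
class FeedbackTracker:
    """Toy per-client tracker: record each session's score and flag change."""

    def __init__(self, reliable_change: float):
        self.reliable_change = reliable_change  # RCI-style threshold
        self.scores = []

    def record(self, score: float) -> str:
        self.scores.append(score)
        if len(self.scores) < 2:
            return "baseline"
        delta = score - self.scores[0]  # change from intake; lower is better
        if delta >= self.reliable_change:
            return "deteriorating: review treatment approach"
        if delta <= -self.reliable_change:
            return "reliable improvement"
        return "no reliable change"

tracker = FeedbackTracker(reliable_change=14)  # hypothetical threshold
for session_score in [92, 88, 75]:
    print(tracker.record(session_score))
```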
Description of Three FIT Systems
Several well-validated, reliable, repeated feedback instruments exist. These instruments vary by length and scope of assessment, but all are engineered to deliver routine feedback to counselors regarding client progress. Below is a review of three of the most common FIT systems utilized in clinical practice.
The OQ Measures System
The OQ Measures System uses the Outcome Questionnaire 45.2 (OQ-45.2; Lambert et al., 1996), a popular symptomology measure that gauges a client’s current distress level over three domains: symptomatic distress, interpersonal relations and social roles. Hatfield and Ogles (2004) listed the OQ-45.2 as the third most frequently used self-report outcome measure for adults in the United States. The OQ-45.2 has 45 items rated on a 5-point Likert scale. Total scores range between 0 and 180; higher scores suggest higher rates of disturbance. The OQ-45.2 takes approximately 5–6 minutes to complete, and the results are analyzed using the OQ Analyst software provided by the test developers. The OQ-45.2 can be administered by paper and pencil or by computer via laptop, kiosk, or personal digital assistant (PDA). Electronic administration of the OQ-45.2 allows for seamless administration, scoring and feedback to both counselor and client.
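The score range follows directly from the item format: 45 items each scored 0–4 yield totals from 0 to 180. A toy illustration of that arithmetic, assuming simple summation and ignoring the published measure’s item keying, which the scoring software handles:

```python
# 45 items rated on a 5-point scale scored 0-4 give a 0-180 total;
# higher totals suggest greater disturbance. Actual scoring, including
# any item keying, is handled by the OQ Analyst software.
responses = [2] * 45          # hypothetical client answering "2" throughout
total = sum(responses)        # 90, the midpoint of the 0-180 range
assert 0 <= total <= 180
print(total)
```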
Internal consistency for the OQ-45.2 is α = 0.93 and test-retest reliability is r = 0.84. The OQ-45.2 demonstrated convergent validity with the General Severity Index (GSI) of the Symptom Checklist-90-Revised (SCL-90-R; Derogatis, 1983; r = .78, n = 115). The Outcome Questionnaire System has five additional outcome measures: (1) the Outcome Questionnaire 30 (OQ-30); (2) the Severe Outcome Questionnaire (SOQ), which captures outcome data for more severe presenting concerns, such as bipolar disorder and schizophrenia; (3) the Youth Outcome Questionnaire (YOQ), which assesses outcomes in children between 13 and 18 years of age; (4) the Youth Outcome Questionnaire 30, which is a brief version of the full YOQ; and (5) the Outcome Questionnaire 10 (OQ-10), which is used as a brief screening instrument for psychological symptoms (Lambert et al., 2010).
The Partners for Change Outcome Management System (PCOMS)
The Partners for Change Outcome Management System (PCOMS) uses two instruments: the Outcome Rating Scale (ORS; Miller & Duncan, 2000), which measures the client’s session outcome, and the Session Rating Scale (SRS; Miller et al., 2000), which measures the client’s perception of the therapeutic alliance. The ORS and SRS were designed to be brief in response to the heavy time demands placed upon counselors. To administer the ORS, the counselor hands the client a copy of the ORS on a letter-sized sheet of paper; the client then draws a hash mark on four distinct 10-centimeter lines to indicate how he or she felt over the last week on the following scales: individually (personal well-being), interpersonally (family and close relationships), socially (work, school and friendships), and overall (general sense of well-being).
The administration of the SRS includes four similar 10-centimeter lines that evaluate the relationship between the client and counselor. The four lines represent relationship, goals and topics, approach or methods, and overall (the sense that the session went all right for me today; Miller et al., 2000). Scoring of both instruments includes measuring the location of the client’s hash mark and assigning a numerical value based on its location along the 10-centimeter line. Measurement flows from left to right, indicating higher-level responses the further right the hash mark is placed. A total score is computed by adding each subscale together. Total scores are graphed along a line plot. Miller and Duncan (2000) used the reliable change index formula (RCI) to establish a clinical cut-off score of 25 and a reliable change index score of 5 points for the ORS. The SRS has a cut-off score of 36, which suggests that total scores below 36 indicate ruptures in the working alliance.
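Because each of the four lines is 10 centimeters long, an ORS total falls between 0 and 40. The sketch below, with hypothetical hash-mark positions, shows the scoring arithmetic alongside the two decision rules reported above (clinical cutoff of 25; reliable change of 5 points):

```python
CLINICAL_CUTOFF = 25   # ORS totals below this suggest clinical-range distress
RELIABLE_CHANGE = 5    # an ORS change of 5+ points is considered reliable

# Hypothetical hash-mark positions, in centimeters from the left end of
# each 10-cm line: individual, interpersonal, social, overall.
intake_marks = [4.2, 5.0, 6.1, 4.7]
session5_marks = [6.8, 7.0, 7.5, 6.9]

intake_total = sum(intake_marks)        # 20.0, below the clinical cutoff
session5_total = sum(session5_marks)    # 28.2

print(intake_total < CLINICAL_CUTOFF)                    # True
print(session5_total - intake_total >= RELIABLE_CHANGE)  # True: reliable gain
```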
The ORS demonstrated strong internal reliability estimates (α = 0.87–0.96), a test-retest score of r = 0.60, and moderate convergent validity with measures like the OQ-45.2 (r = 0.59), which it was created to resemble (Miller & Duncan, 2000; Miller, Duncan, Brown, Sparks, & Claud, 2003). The SRS had an internal reliability estimate of α = 0.88, test-retest reliability of r = 0.74, and showed convergent validity when correlated with similar measures of the working alliance such as the Helping Alliance Questionnaire–II (HAQ–II; Duncan et al., 2003; Luborsky et al., 1996). The developers of the ORS and SRS also have created Web-based administration features that allow clients to complete both instruments online using a pointer instead of a pencil or pen. The Web-based administration also calculates and graphs the totals for the instruments.
The Counseling Center Assessment of Psychological Symptoms (CCAPS)
The CCAPS was designed as a semi-brief continuous measure that assesses symptomology unique to college-aged adults (Locke et al., 2011). When developed, the CCAPS was designed to be effective in assessing college students’ concerns across a diverse range of college campuses. The CCAPS has two separate versions, the CCAPS-62 and a shorter version, the CCAPS-34. The CCAPS-62 has 62 test items across eight subscales that measure: depression, generalized anxiety, social anxiety, academic distress, eating concerns, family distress, hostility and substance abuse. The CCAPS-34 has 34 test items across seven of the scales found on the CCAPS-62, excluding family distress. Additionally, the substance use scale on the CCAPS-62 is renamed the Alcohol Use Scale on the CCAPS-34 (Locke et al., 2011). Clients respond on a 5-point Likert scale with responses that range from not at all like me to extremely like me. On both measures clients are instructed to answer each question based upon their functioning over the last 2 weeks. The CCAPS measures include a total score scale titled the Distress Index that measures the amount of general distress experienced over the previous 2 weeks (Center for Collegiate Mental Health, 2012). The measures were designed so that repeated administration would allow counselors to compare each session’s scores to previous scores, and to a large norm group (N = 59,606) of clients completing the CCAPS at university counseling centers across the United States (Center for Collegiate Mental Health, 2012).
The CCAPS norming works by comparing clients’ scores to a percentile score of other clients who have taken the measure. For instance, a client’s score of 80 on the depressive symptoms scale indicates that he or she falls within the 80th percentile of the norm population’s depressive symptoms score range. Because the CCAPS measures utilize such a large norm base, the developers have integrated the instruments into the Titanium Schedule™, an Electronic Medical Records (EMR) system. The developers also offer the instruments for use in an Excel scoring format, along with other counseling scheduling software programs. The developers of the CCAPS use RCI formulas to provide upward and downward arrows next to the reported score on each scale. A downward arrow indicates that the client’s current score is reliably lower than previous sessions’ scores, suggesting progress during counseling; an upward arrow suggests a worsening of symptomology. Cut-off scores vary across scales and can be referenced in the CCAPS 2012 Technical Manual (Center for Collegiate Mental Health, 2012).
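A sketch of the two reporting conventions described above, percentile ranks against a norm group and RCI-style arrows, using made-up norm data and a made-up RCI value; the actual cut-offs vary by scale and are given in the CCAPS 2012 Technical Manual:

```python
from bisect import bisect_left

# Made-up norm-group scores for one CCAPS subscale, sorted ascending.
norm_scores = sorted([0.4, 0.6, 0.9, 1.1, 1.3, 1.5, 1.8, 2.0, 2.4, 2.9])

def percentile_rank(score: float) -> float:
    """Percent of the norm group scoring below the client's score."""
    return 100 * bisect_left(norm_scores, score) / len(norm_scores)

def change_arrow(current: float, previous: float, rci: float) -> str:
    """Down arrow = reliable decrease (progress); up arrow = worsening."""
    if previous - current >= rci:
        return "down"
    if current - previous >= rci:
        return "up"
    return "none"

print(percentile_rank(1.9))             # 70.0: the 70th percentile
print(change_arrow(1.0, 1.9, rci=0.7))  # "down": reliable improvement
```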
Test-retest estimates at 2 weeks for the CCAPS-62 and CCAPS-34 scales range between r = 0.75–0.91 (Center for Collegiate Mental Health, 2012). The CCAPS-34 also demonstrated a good internal consistency that ranged between α = 0.76–0.89 (Locke et al., 2012). The measures also demonstrated adequate convergent validity compared to similar measures. A full illustration of the measures’ convergent validity can be found in the CCAPS 2012 Technical Manual (Center for Collegiate Mental Health, 2012).
Benefits for Counselors-in-Training
The benefits of FIT systems are multifaceted and can positively impact the growth and development of student counselors (Reese, Norsworthy, et al., 2009; Schmidt, 2014; Yates, 2012). Within counselor training laboratories, feedback systems have shown promise in facilitating the growth and development of beginning counselors (Reese, Usher, et al., 2009), and the incorporation of FIT systems into supervision and training experiences has been widely supported (Schmidt, 2014; Worthen & Lambert, 2007; Yates, 2012).
One such benefit is that counseling students’ self-efficacy improved when they saw evidence of their clients’ improvement (Reese, Usher, et al., 2009). A FIT system allows for the documentation of a client’s progress, and when counseling students observed their clients making such progress, their self-efficacy improved regarding their skill and ability as counselors. Additionally, the FIT system allowed counselor trainees to observe their effectiveness during session and, more importantly, helped them alter their interventions when clients deteriorated or plateaued during treatment. Counselor education practicum students who implemented a FIT system throughout client treatment reported that having weekly observations of their clients’ progress helped them to isolate effective and ineffective techniques they had used during session (Yates, 2012). Additionally, practicum counseling students indicated that several components of FIT feedback forms were useful, including the visual orientation (e.g., graphs) to clients’ shifts in symptomology. This visual attention to client change allowed counselors-in-training to be more alert to how clients were actually faring between sessions and how they could tailor their approach, particularly regarding crisis situations (Yates, 2012).
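As one concrete illustration of the visual feedback students found useful, the sketch below plots hypothetical per-session outcome totals against a fixed cutoff line; the scores and the cutoff of 25 (a value often cited for the ORS) are assumptions for demonstration only, not data from any study discussed here.

```python
import matplotlib.pyplot as plt

sessions = [1, 2, 3, 4, 5, 6]
ors_totals = [17, 19, 18, 23, 26, 28]   # hypothetical per-session ORS totals (0-40)
CLINICAL_CUTOFF = 25                     # assumed cutoff separating the clinical range

plt.plot(sessions, ors_totals, marker="o", label="ORS total")
plt.axhline(CLINICAL_CUTOFF, linestyle="--", color="gray", label="clinical cutoff")
plt.xlabel("Session")
plt.ylabel("Outcome score")
plt.title("Hypothetical client progress across sessions")
plt.legend()
plt.show()
```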
Another benefit discovered in the same study was that counseling students felt that consistent use of a FIT system lowered their anxiety and relieved some uncertainty regarding their work with clients (Yates, 2012). It is developmentally appropriate for beginning counselors to struggle with low tolerance for ambiguity and to need a highly structured learning environment when they begin their experiential practicums and internships (Bernard & Goodyear, 2013). A FIT system provides a structured format within the counseling session that helps ease new counselors’ anxiety and discomfort with ambiguity.
Additionally, by bringing the weekly feedback into counseling sessions, practicum students were able to clarify instances when the feedback was discrepant from how the client presented during session (Yates, 2012). This discrepancy between what the client reported on the measure and how they presented in session was often fertile ground for discussion. Counseling students believed bringing these discrepancies to a client’s attention deepened the therapeutic alliance because the counselor was taking time to fully understand the client (Yates, 2012).
FIT systems also add several benefits to the clinical supervision of counseling students. One such benefit is that clinical supervisors found weekly objective reports helpful in providing evidence of a client’s progress that was not based solely upon their supervisees’ self-report. This is crucial because relying on self-report as the sole method of supervision can be an insufficient way to gain information about the complexities of the therapeutic process (Bernard & Goodyear, 2013). Supervisors and practicum students both reported that the FIT system frequently brought to their attention potential concerns with clients that they had missed (Yates, 2012). A final benefit is that supervisees who utilized a FIT system during supervision reported significantly higher satisfaction with supervision and stronger supervisory alliances than students who did not utilize a FIT system (Grossl, Reese, Norsworthy, & Hopkins, 2014; Reese, Usher, et al., 2009).
Benefits for Clients
Several benefits exist for counseling clients when FIT systems are utilized in the therapeutic process. Clients generally perceive the sharing of objective progress information as helpful and positive (Martin, Hess, Ain, Nelson, & Locke, 2012). Surveying clients using a FIT system, Martin et al. (2012) found that 74.5% of clients found it “convenient” to complete the instrument during each session. Approximately 46% of the clients endorsed that they had a “somewhat positive” experience using the feedback system, while 20% of clients reported a “very positive” experience. Hawkins, Lambert, Vermeersch, Slade, and Tuttle (2004) found that providing feedback to both clients and counselors significantly increased clients’ therapeutic improvement when compared to providing feedback to counselors alone. A meta-analysis of several research studies, including Hawkins et al. (2004), found that effect sizes of clinical efficacy related to providing per-session feedback ranged from 0.34 to 0.92 (Shimokawa et al., 2010). These investigations found more substantial improvement in clients whose counselors received consistent client feedback when compared with counselors who received no client feedback regarding the therapeutic process and symptomology. These data also showed that consistent feedback provision to clients reduced premature treatment termination (Lambert, 2010).
Utilization of FIT Systems for Counseling Curriculum and Student Learning Outcome Assessment
The formal assessment of graduate counseling student learning has increased over the past decade. The most recent update of the national standards from the Council for Accreditation of Counseling and Related Educational Programs (CACREP) included the requirement for all accredited programs to systematically track students at multiple points with multiple measures of student learning (CACREP, 2015, Section 4, A, B, C, D, E). Specifically, “counselor education programs conduct formative and summative evaluations of the student’s counseling performance and ability to integrate and apply knowledge throughout the practicum and internship” (CACREP, 2015, Section 4.E). The use of continuous client feedback within counselor education is one way to address such assessment requirements (Schmidt, 2014).
Counseling master’s programs impact students on both personal and professional levels (Warden & Benshoff, 2012), and part of this impact stems from ongoing and meaningful evaluation of student development. The development of counselors-in-training during experiential courses entails assessment of a myriad of counseling competencies (e.g., counseling microskills, case conceptualization, understanding of theory, ethical decision-making and ability to form a therapeutic relationship with clients; Haberstroh, Duffey, Marble, & Ivers, 2014). As per CACREP standards, counseling students will receive feedback during and after their practicum and internship experiences. This feedback typically comes from both the supervising counselor on site, as well as the academic department supervisor.
Additionally, “supervisors need to help their supervisees develop the ability to make effective decisions regarding the most appropriate clinical treatment” (Owen, Tao, & Rodolfa, 2005, p. 68). One suggested avenue for developing such skills is client feedback using FIT systems. The benefit of direct client feedback on the counseling process has been well documented (Minami et al., 2009), and this process can also be useful to student practice and training. Counseling students can greatly benefit from the use of client feedback throughout their training programs (Reese, Usher, et al., 2009). In this way, counselors-in-training learn to acknowledge client feedback as an important part of the counseling process, allowing them to adjust their practice to help each client on an individual basis. Allowing for a multi-layered feedback model wherein the counselor-in-training can receive feedback from the client, site supervisor and academic department supervisor has the potential to maximize student learning and growth.
Providing students feedback for growth through formal supervision is one of the hallmarks of counseling programs (Bernard & Goodyear, 2013). However, a more recent focus throughout higher education is the necessity of assessment of student learning outcomes (CACREP, 2015). This assessment can include “systematic evaluation of students’ academic, clinical, and interpersonal progress as guideposts for program improvement” (Haberstroh et al., 2014, p. 28). As such, evaluating student work within the experiential courses (e.g., practicum and internship) is becoming increasingly important.
FIT systems provide specific and detailed feedback regarding clients’ experiences within therapy. Having access to documented client outcomes and progress throughout the counseling relationship can provide an additional layer of information regarding student growth and skill development. For instance, if a student consistently has clients who drop out or show no improvement over time, those outcomes could represent a problem or unaddressed issue for the counselor-in-training. Conversely, if a student has clients who report positive outcomes over time, those data could reflect clinical understanding and positive skill development.
Student learning outcomes can be assessed in a myriad of ways (e.g., FIT systems, supervisor evaluations, student self-assessment and exams; Haberstroh et al., 2014). Incorporating multiple layers of feedback for counseling students allows for maximization of learning through practicum and internships and offers a concrete way to document and measure student outcomes.
An Example: Case Study
Students grow and develop through a wide variety of methods, including feedback from professors, supervisors and clients (Bernard & Goodyear, 2013). Implementing a FIT system into experiential classes in counseling programs allows for the incorporation of structured, consistent and reliable feedback. We use a case example here to illustrate the benefits of such implementation. Within the case study, each CACREP Student Learning Outcome that is met through the implementation of the FIT system is documented.
A counselor educator is the instructor of an internship class in which students have a variety of internship placements. The instructor decides to have students implement a FIT system that will allow them to track client progress and the strength of the working alliance. The OQ 45.2 and the SRS are chosen because they allow students to track client outcomes and the counseling relationship and are easy to administer, score and interpret. At the beginning of the semester, the instructor provides a syllabus listing the following expectations: (1) students will have their clients fill out the OQ 45.2 and the SRS during every session with each client; (2) students will learn to discuss and process the results from the OQ 45.2 and SRS in each session with the client; and (3) students will bring all compiled information from the measures to weekly supervision. By incorporating two FIT systems and the subsequent requirements, the course is meeting over 10 CACREP (2015) learning outcome assessment components within Sections 2 and 3, Professional Counseling Identity (Counseling and Helping Relationships, Assessment and Testing), and Professional Practice.
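To make expectation (3) concrete, the sketch below shows one way a student might compile weekly OQ 45.2 and SRS totals and flag sessions worth raising in supervision. The cutoff values used are commonly cited for these instruments but should be treated as assumptions here, checked against the instruments’ manuals, rather than as the authors’ prescriptions.

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    week: int
    oq45_total: int   # OQ 45.2 total (0-180); higher = more distress
    srs_total: float  # SRS total (0-40); higher = stronger alliance

# Assumed thresholds (verify against the published manuals before clinical use):
OQ_CLINICAL_CUTOFF = 63   # often-cited OQ 45.2 clinical cutoff
SRS_ALLIANCE_CUTOFF = 36  # often-cited SRS threshold for alliance concern

def supervision_flags(records):
    """Return human-readable flags to bring to weekly supervision."""
    flags = []
    for r in records:
        if r.oq45_total >= OQ_CLINICAL_CUTOFF:
            flags.append(f"week {r.week}: distress in clinical range")
        if r.srs_total < SRS_ALLIANCE_CUTOFF:
            flags.append(f"week {r.week}: possible alliance concern, discuss with client")
    return flags

log = [SessionRecord(1, 88, 37.5), SessionRecord(2, 81, 33.0), SessionRecord(3, 74, 38.0)]
print("\n".join(supervision_flags(log)))
```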
A student, Sara, begins seeing a client at an outpatient mental health clinic who has been diagnosed with major depressive disorder; the client’s symptoms include suicidal ideation, anhedonia and extreme hopelessness. Sara’s initial response includes anxiety because she has never worked with someone who has active suicidal ideation or such an extreme presentation of depressed affect. Sara’s supervisor spends time discussing how she will use the FIT systems in her work with the client and reminds her about the necessity of safety assessment.
In her initial sessions with her client, Sara incorporates the OQ 45.2 and the SRS into her sessions as discussed with her supervisor (CACREP Section 2.8.E; 2.8.K). However, after a few sessions, she does not yet feel confident in her work with this client. Sara feels constantly overwhelmed by the depth of her client’s depression and is worried about addressing the suicidal ideation. Her instructor is able to use the weekly OQ 45.2 and SRS forms as a consistent baseline and guide for Sara’s work with this client and to help Sara develop a treatment plan that is specifically tailored to the client’s symptomology (CACREP Section 2.5.H, 2.8.L). Using the visual outputs and compiled graphs of weekly data, Sara is able to see small changes that may or may not be taking place for the client regarding his depressive symptoms and overall feelings and experiences in his life. Sara’s instructor guides her to discuss these changes with the client and explore in more detail the client’s experiences within these symptoms (CACREP Section 2.5.G). By using these data with the client, Sara will be better able to help the client develop appropriate and measurable goals and outcomes for the therapeutic process (CACREP Section 2.5.I). Additionally, as a new counselor, such an assessment tool provides Sara with structure and guidance as to the important topics to explore with clients throughout sessions. For example, by using some of the specific content on the OQ 45.2 (e.g., I have thoughts of ending my life, I feel no interest in things, I feel annoyed by people who criticize my drinking, and I feel worthless), she can train herself to assess for suicidal ideation and overall diagnostic criteria (CACREP Section 2.7.C).
Additionally, Sara is receiving feedback from the client by using the SRS measure within session. In using this additional FIT measure, Sara can begin to gauge her personal approach to counseling with this client and receive critical feedback that will help her grow as a counselor (CACREP, Section 2.5.F). This avenue provides an active dialogue between client and counselor about the work they are doing together and whether they are working on the pieces that are important to the client. Her instructor is able to provide both formative and summative feedback on her overall process with the client, using the client’s outcomes as a guide to her effectiveness as a clinician (CACREP, Section 3.C). Implementing a FIT system gives the process of feedback provision concrete markers and structure, ultimately allowing a student counselor to grow in the ability to be self-reflective about his or her own practice.
Implications for Counselor Education
The main implications of the integration of FIT systems into counselor education are threefold: (1) developmentally appropriate interventions to support supervisee/trainee clinical growth; (2) intentional measurement of CACREP Student Learning Outcomes; and (3) specific attention to client care and therapeutic outcomes. There are a variety of FIT systems being utilized, and while they vary in scope, length, and targets of assessment, each has a brief administration time and can be repeated frequently for current client status and treatment outcome measurement. With intentionality and dedication, counselor education programs can work to implement the utilization of these types of assessment throughout counselor trainee coursework (Schmidt, 2014).
FIT systems lend themselves to positive benefits for training competent emerging counselors. Evaluating a beginning counselor’s clinical understanding and skills is a key component of assessing overall learning outcomes. When counselors-in-training receive frequent feedback on their clients’ current functioning or session outcomes, they are given the opportunity to bring concrete information to supervision, decide on treatment modifications as indicated, and openly discuss the report with clients as part of treatment. Gathering data on a client’s experience in treatment brings valuable information to the training process. Indications of challenges or strengths with regard to facilitating a therapeutic relationship can be addressed and positive change supported through supervision and skill development. Additionally, by learning the process of ongoing assessment and therapeutic process management, counselor trainees are meeting many of the CACREP Student Learning Outcomes. The integration of FIT systems into client care supports a wide variety of clinical skill sets, such as understanding of clinical assessment, managing a therapeutic relationship and treatment planning/altering based on client needs.
Finally, therapy clients also benefit through the use of FIT. Clinicians who receive weekly feedback on per-session client progress consistently show improved effectiveness and have clients who prematurely terminate counseling less often (Lambert, 2010; Shimokawa et al., 2010). In addition to client and counselor benefit, supervisors also have been shown to utilize FIT systems to their advantage. One of the most important responsibilities of a clinical supervisor is to manage and maintain a high level of client care (Bernard & Goodyear, 2013). Incorporation of a structured, validated assessment, such as a FIT system, allows for intentional oversight of the client–counselor relationship and clinical process that is taking place between supervisees and their clients. Overall, the integration of FIT systems into counselor education would provide programs with a myriad of benefits including the ability to meet student, client and educator needs simultaneously.
Conclusion
FIT systems provide initial and ongoing data related to a client’s psychological and behavioral functioning across a variety of concerns. They have been developed and used as a continual assessment procedure to provide frequent and continuous self-reports by clients. FIT systems have been used effectively to provide vital mental health information within a counseling session. The unique features of FIT systems include the potential for recurrent, routine measurement of a client’s symptomatology; easily accessible and usable data for counselor and client; and assistance in setting benchmarks and altering treatment strategies to improve a client’s functioning. With intentionality, counselor education programs can use FIT systems to meet multiple needs across their curricula, including more advanced supervision practices, CACREP Student Learning Outcome measurement and better overall client care.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Bernard, J. M., & Goodyear, R. K. (2013). Fundamentals of clinical supervision (5th ed.). Boston, MA: Merrill.
Center for Collegiate Mental Health. (2012). CCAPS 2012 technical manual. University Park, PA: Pennsylvania State University.
Council for Accreditation of Counseling and Related Educational Programs (CACREP). (2015). 2016 accreditation standards. Retrieved from http://www.cacrep.org/for-programs/2016-cacrep-standards
Derogatis, L. R. (1983). The SCL-90: Administration, scoring, and procedures for the SCL-90. Baltimore, MD: Clinical Psychometric Research.
Duncan, B. L., Miller, S. D., Sparks, J. A., Claud, D. A., Reynolds, L. R., Brown, J., & Johnson, L. D. (2003). The Session Rating Scale: Preliminary psychometric properties of a “working” alliance measure. Journal of Brief Therapy, 3, 3–12.
Grossl, A. B., Reese, R. J., Norsworthy, L. A., & Hopkins, N. B. (2014). Client feedback data in supervision: Effects on supervision and outcome. Training and Education in Professional Psychology, 8, 182–188.
Haberstroh, S., Duffey, T., Marble, E., & Ivers, N. N. (2014). Assessing student-learning outcomes within a counselor education program: Philosophy, policy, and praxis. Counseling Outcome Research and Evaluation, 5, 28–38. doi:10.1177/2150137814527756
Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart, D. W., Shimokawa, K., & Sutton, S. W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology, 61, 155–163.
Hatfield, D., & Ogles, B. M. (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research & Practice, 35, 485–491. doi:10.1037/0735-7028.35.5.485
Hawkins, E. J., Lambert, M. J., Vermeersch, D. A., Slade, K. L., & Tuttle, K. C. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14, 308–327. doi:10.1093/ptr/kph027
Lambert, M. J. (2010). Prevention of treatment failure: The use of measuring, monitoring, & feedback in clinical practice. Washington, DC: American Psychological Association.
Lambert, M. J., Hansen, N. B., & Finch, A. E. (2001). Patient-focused research: Using patient outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology, 69, 159–172.
Lambert, M. J., Hansen, N. B., & Harmon, S. C. (2010). Outcome Questionnaire system (The OQ system): Development and practical applications in healthcare settings. In M. Barkham, G. Hardy, & J. Mellor-Clark (Eds.), Developing and delivering practice-based evidence: A guide for the psychological therapies (pp. 141–154). New York, NY: Wiley-Blackwell.
Lambert, M. J., Hansen, N. B., Umphress, V., Lunnen, K., Okiishi, J., Burlingame, G. M., & Reisinger, C. (1996). Administration and scoring manual for the OQ 45.2. Stevenson, MD: American Professional Credentialing Services.
Locke, B. D., Buzolitz, J. S., Lei, P. W., Boswell, J. F., McAleavey, A. A., Sevig, T. D., Dowis, J. D., & Hayes, J. A. (2011). Development of the Counseling Center Assessment of Psychological Symptoms-62 (CCAPS-62). Journal of Counseling Psychology, 58, 97–109.
Locke, B. D., McAleavey, A. A., Zhao, Y., Lei, P., Hayes, J. A., Castonguay, L. G., Li, H., Tate, R., & Lin, Y. (2012). Development and initial validation of the Counseling Center Assessment of Psychological Symptoms-34 (CCAPS-34). Measurement and Evaluation in Counseling and Development, 45, 151–169. doi:10.1177/0748175611432642
Luborsky, L., Barber, J. P., Siqueland, L., Johnson, S., Najavits, L. M., Frank, A., & Daley, D. (1996). The Helping Alliance Questionnaire (HAQ–II): Psychometric properties. The Journal of Psychotherapy Practice and Research, 5, 260–271.
Martin, J. L., Hess, T. R., Ain, S. C., Nelson, D. L., & Locke, B. D. (2012). Collecting multidimensional client data using repeated measures: Experiences of clients and counselors using the CCAPS-34. Journal of College Counseling, 15, 247–261. doi:10.1002/j.2161-1882.2012.00019.x
Miller, S., & Duncan, B. (2000). The outcome rating scale. Chicago, IL: International Center for Clinical Excellence.
Miller, S., Duncan, B., & Johnson, L. (2000). The session rating scale. Chicago, IL: International Center for Clinical Excellence.
Miller, S. D., Duncan, B. L., Brown, J., Sparks, J. A., & Claud, D. A. (2003). The Outcome Rating Scale: A preliminary study of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy, 2, 91–100.
Minami, T., Davies, D. R., Tierney, S. C., Bettmann, J. E., McAward, S. M., Averill, L. A., & Wampold, B. E. (2009). Preliminary evidence on the effectiveness of psychological treatments delivered at a university counseling center. Journal of Counseling Psychology, 56, 309–320.
Owen, J., Tao, K. W., & Rodolfa, E. R. (2005). Supervising counseling center trainees in the era of evidence-based practice. Journal of College Student Psychotherapy, 20, 66–77.
Reese, R. J., Norsworthy, L. A., & Rowlands, S. R. (2009). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training, 46, 418–431. doi:10.1037/a0017901
Reese, R. J., Usher, E. L., Bowman, D. C., Norsworthy, L. A., Halstead, J. L., Rowlands, S. R., & Chisolm, R. R. (2009). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3, 157–168. doi:10.1037/a0015673
Schmidt, C. D. (2014). Integrating continuous client feedback into counselor education. The Journal of Counselor Preparation and Supervision, 6, 60–71. doi:10.7729/62.1094
Shimokawa, K., Lambert, M. J., & Smart, D. W. (2010). Enhancing treatment outcome of patients at risk of treatment failure: Meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology, 78, 298–311. doi:10.1037/a0019247
Warden, S. P., & Benshoff, J. M. (2012). Testing the engagement theory of program quality in CACREP-accredited counselor education programs. Counselor Education and Supervision, 51, 127–140. doi:10.1002/j.1556-6978.2012.00009.x
Worthen, V. E., & Lambert, M. J. (2007). Outcome oriented supervision: Advantages of adding systematic client tracking to supportive consultations. Counselling & Psychotherapy Research, 7, 48–53. doi:10.1080/14733140601140873
Yates, C. M. (2012). The use of per session clinical assessment with clients in a mental health delivery system: An investigation into how clinical mental health counseling practicum students and practicum instructors use routine client progress feedback (Unpublished doctoral dissertation). Kent State University, Kent, Ohio.
Chad M. Yates is an Assistant Professor at Idaho State University. Courtney M. Holmes, NCC, is an Assistant Professor at Virginia Commonwealth University. Jane C. Coe Smith is an Assistant Professor at Idaho State University. Tiffany Nielson is an Assistant Professor at the University of Illinois at Springfield. Correspondence can be addressed to Chad M. Yates, 921 South 8th Ave, Stop 8120, Pocatello, Idaho, 83201, yatechad@isu.edu.
Feb 6, 2015 | Article, Volume 5 - Issue 1
Patrick R. Mullen, Olivia Uwamahoro, Ashley J. Blount, Glenn W. Lambie
Counselor preparation is multifaceted and involves developing trainees’ clinical knowledge, skills and competence. Furthermore, counselor self-efficacy is a relevant developmental consideration in the counseling field. Therefore, the purpose of this longitudinal investigation was to examine the effects of a counselor preparation program on students’ development of counseling self-efficacy. The Counselor Self-Efficacy Scale was administered to 179 master’s-level counselors-in-training at three points in their counselor training and coursework, including new student orientation, clinical practicum orientation and final internship group supervision meeting. Findings indicated that students’ experience in their preparation program resulted in higher levels of self-efficacy.
Keywords: counselor preparation, counselor training, self-efficacy, development, internship
The practice of counselor training is a complex, intentional process of reflective educational and experiential activities to promote the development of knowledge and skills (Bernard & Goodyear, 2013; Council for Accreditation of Counseling and Related Educational Programs [CACREP], 2009; McAuliffe & Eriksen, 2011). As such, the primary goal of counselor preparation programs is to educate and train students to become competent counselors by equipping them with necessary skills, knowledge and experiences (American Counseling Association, 2014; Bernard & Goodyear, 2013; CACREP, 2009). Furthermore, students training to be counselors increase their self-awareness and reflective practice throughout their educational experience (Granello & Young, 2012; Lambie & Sias, 2009; Rønnestad & Skovholt, 2003). Increased understanding regarding counseling trainee development may aid educators’ ability to develop and deliver educational and supervision interventions.
Self-efficacy represents an individual’s beliefs or judgments about his or her ability to accomplish a given goal or task (Bandura, 1995). Furthermore, self-efficacy is a recognized measure of development in the counseling field (Larson & Daniels, 1998), has a positive influence on work-related performance (Bandura, 1982; Stajkovic & Luthans, 1998), and consequently works as an outcome and developmental consideration for counselor training. In addition, there is an assortment of published research examining counseling trainees’ self-efficacy (e.g., Barbee, Scherer, & Combs, 2003; Cashwell & Dooley, 2001; Kozina, Grabovari, Stefano, & Drapeau, 2010; Melchert, Hays, Wiljanen, & Kolocek, 1996; Tang et al., 2004); however, limited research examines counseling trainees’ development of self-efficacy in a longitudinal fashion based upon their experiences from start (e.g., educational courses) to finish (e.g., initial clinical experiences) in counselor preparation programs. Therefore, the purpose of this longitudinal investigation was to examine counselor trainees’ self-efficacy as they progressed through the educational and experiential components of a counselor preparation program.
Counseling Students’ Self-Efficacy
Bandura (1995) described perceived self-efficacy as “beliefs in one’s capabilities to organize and execute the courses of action required to manage prospective situations” (p. 2). Self-efficacy is considered an appropriate scientific lens for examining individuals’ beliefs regarding their ability to accomplish professional goals (Bandura, 1997) and is a common research topic in counseling literature (e.g., Larson & Daniels, 1998). Specifically, Bandura (1997) suggested that individuals’ ability to accomplish a task or goal not only necessitates skill and ability, but also the belief in oneself that provides the confidence and motivation to complete a task. Larson and Daniels (1998) stated that counseling self-efficacy is “one’s beliefs or judgments about her or his capabilities to effectively counsel a client in the near future” (p. 180). Self-efficacy is appropriate for the selection and training of counselors because of the construct’s stability and reliability (Beutler, Machado, & Neufeldt, 1994).
Self-efficacy is important in relation to counselor competence (Barnes, 2004; Larson & Daniels, 1998). Larson (1998) suggested that self-efficacy is a critical influence on one’s self-determining mechanisms and, as a result, is a critical variable in supervision. The importance of self-efficacy in the counseling field is documented by the development of measures of self-efficacy for various research constructs (e.g., Bodenhorn & Skaggs, 2005; Mullen, Lambie, & Conley, 2014; Sutton & Fall, 1995). Melchert and colleagues (1996) developed the Counselor Self-Efficacy Scale (CSES) to examine counselors’ and counselor trainees’ level of confidence in knowledge and skills regarding counseling competencies. They found that counseling students’ (N = 138) scores on the CSES varied based on their experience in their preparation program, with second-year students reporting more confidence than students in their first year of training, and that counselors with more years of clinical experience also reported greater levels of self-efficacy.
Counselors’ training, initial clinical experiences and supervision relate to their self-efficacy beliefs. Hill et al. (2008) found that skills training impacted undergraduate students’ confidence regarding the use of helping skills. However, Hill and colleagues (2008) noted that as students faced more difficult skills, their confidence decreased, but eventually increased upon gaining experience using the skill. Barbee and associates (2003) found that trainees’ (N = 113) participation in service learning had a positive relationship with counselor self-efficacy. However, these researchers also found that total credits of coursework (i.e., time in the preparation program) and prior counseling-related work were stronger predictors of self-efficacy as compared to service learning.
Supporting the findings from Barbee and colleagues (2003), Tang and colleagues (2004) found that students with more coursework, internship experience and related work experience reported higher levels of competence regarding counseling skills. Regarding self-efficacy during clinical experiences, Kozina and colleagues (2010) found that the counseling self-efficacy of first-year master’s-level counseling students increased during their initial work with clients. Additionally, Cashwell and Dooley (2001) found that practicing counselors receiving supervision, compared to those not receiving supervision, reported higher levels of self-efficacy, indicating that supervision supports increased beliefs of counseling efficacy. However, no published studies were identified examining counseling students’ longitudinal change in self-efficacy as a result of their participation in a counselor preparation program from the start of the program through their clinical experiences.
Purpose of the Study
The development of trainees is a vital topic for counselor education. Counselor educators and supervisors need a comprehensive understanding of student development with the aim of assessing student learning outcomes and facilitating pedagogical and supervisory interventions that support development. Enhancing counseling students’ self-efficacy regarding clinical skills is an important developmental goal within preparation programs, with higher self-efficacy suggesting increased likelihood of efficient and effective counseling services (Bandura, 1982; Bandura, 1997; Larson & Daniels, 1998; Stajkovic & Luthans, 1998). Research on counselor self-efficacy is common; however, no studies have investigated change in master’s-level counseling students’ self-efficacy over the course of their preparation program (i.e., longitudinal investigation). Therefore, we investigated the following research questions: (1) What is the relationship between counseling students’ demographic factors and self-efficacy at three key times during their preparation program? (2) Does counseling students’ self-efficacy change at three points during their graduate preparation program?
Method
Participants and Procedures
Participants included 179 master’s-level graduate students from a single CACREP-accredited entry-level counselor education program at a university in the Southeastern United States. Specifically, participants included several cohorts of entry-level counselor trainees who started the counselor training program during the spring 2008 through fall 2011 semesters and completed the program by the summer 2013 semester. Institutional Review Board approval from the university was obtained prior to data collection and analysis. To protect the rights and confidentiality of the participants, all identifying information was removed and the data were aggregated.
The study was introduced to the participants during the counselor preparation program’s new student orientation (NSO; a mandatory information session prior to the start of trainees’ coursework). At this point, students were invited to be part of the study by completing a paper-and-pencil packet of instrumentation. Participants were invited to complete the second data collection point during a mandatory clinical practicum orientation (CPO) occurring prior to their initial clinical and supervision experience (approximately the midpoint of the students’ program of study). The final data collection point was at the participants’ final internship group supervision meeting (FIGSM; end of students’ program of study). The total accessible sample consisted of 224 students who fit the selection criteria for participation in this study: (a) started the program between the spring 2008 and fall 2011 semesters and (b) completed the program by the end of the summer 2013 semester. However, due to incomplete instrument packets, missing items (listwise deletion) or student attrition, 179 participants completed the instruments across all three data collection points, yielding a 79.91% response rate.
The participants included 151 females (84.4%) and 28 males (15.6%). Regarding age, 162 participants (90.5%) fell between the ages of 20 and 29, 13 participants (7.3%) were between the ages of 30 and 39, two participants (1.1%) fell between the ages of 40 and 49, and two participants (1.1%) were over 50 years of age. Participants’ ethnicities were as follows: 133 (74.3%) Caucasian, 36 (20.1%) African American, seven (3.9%) Hispanic American, one (0.6%) Asian American, and two (1.1%) other ethnicities. Participants’ program tracks included mental health counseling (MHC; n = 78, 43.6%); marriage, couples and family counseling (MCFC; n = 46, 25.7%); and school counseling (SC; n = 55, 30.7%).
Counselor Preparation Program Experience
Students participating in this study were entry-level counseling trainees attending an academic unit with three CACREP-accredited master’s-level programs. The students were enrolled in one of the following three programs of study: (a) MHC; (b) MCFC; or (c) SC. Students’ early coursework in the counselor preparation program included core curriculum courses that focused on content knowledge and initial skill development required for advanced clinical courses. The course prerequisites for initial clinical practicum experience for all students included: (a) Introduction to the Counseling Profession, (b) Theories of Counseling and Personality, (c) Techniques of Counseling, (d) Group Procedures and Theories in Counseling, and (e) Ethical and Legal Issues. Additionally, students in the MHC and MCFC tracks were required to complete a Diagnosis and Treatment in Counseling course. Students in the MHC and MCFC tracks were required to complete 63 credit hours, while students in the SC track were required to complete 60 credit hours (if they did not have a teaching certificate) or 51 credit hours (if they had a valid teaching certificate). Courses were delivered by a diverse set of counselor educators who determined course content and style based on their individual pedagogical approaches.
Students participated in their clinical practicum course after their course prerequisites were met. SC students completed their internship after a single semester of clinical practicum (100 total clinical hours in practicum). Students in the MHC and MCFC tracks completed their internship experience after two consecutive semesters of clinical practicum (200 total clinical hours in practicum). During their internship experience, SC students completed 600 clinical hours over one or two semesters, and MHC and MCFC students completed 900 clinical hours over two semesters. Overall, students progressed through their course and clinical experiences over 2.5–3.5 years, depending on their course load and time commitment preferences. Importantly, not all coursework was required to be completed prior to initial clinical experiences. Students completed non-prerequisite coursework at the times most accommodating to their schedules, but were required to complete all coursework by the time of graduation, with the FIGSM being one of the last class-based tasks in the program.
Measures
We utilized the CSES (Melchert et al., 1996) in this investigation to gather data on counseling trainees’ level of self-efficacy. In addition, a demographic questionnaire was used to collect data regarding participants’ biological gender, age, ethnicity and program track (i.e., MHC, MCFC or SC). The following section introduces and reviews the CSES.
Counselor Self-Efficacy Scale. The CSES is a 20-item self-report instrument that assesses counseling trainees’ competency regarding key counseling tasks for group and individual counseling (Melchert et al., 1996). The CSES was developed based upon a review of the literature with the goal of identifying key types of counseling competencies for counselors. The CSES uses 5-point Likert scale responses that indicate an individual’s level of confidence in his or her counseling ability, including “Never,” “Rarely,” “Sometimes,” “Frequently” or “Almost Always” answer options. Half of the items are worded in a negative fashion to avoid acquiescent response bias, requiring reverse coding. The total score of the CSES ranges from 20–100 and is calculated by adding the responses to all 20 items with consideration given to the reverse coded items. Some sample items from the CSES include the following: (a) I am not able to accurately identify client affect, (b) I can effectively facilitate appropriate goal development with clients, and (c) I can function effectively as a group leader/facilitator.
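To illustrate the scoring procedure just described, here is a minimal sketch that reverse-codes negatively worded items and sums a 20–100 total. Which items are reverse-coded here is a hypothetical key invented for the example, not the published CSES scoring key.

```python
def score_cses(responses, reverse_items):
    """Sum 20 Likert responses (1-5), reverse-coding negatively worded items.

    responses: list of 20 ints in 1..5, in item order.
    reverse_items: zero-based indices of negatively worded items.
    """
    assert len(responses) == 20 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses):
        total += (6 - r) if i in reverse_items else r   # 5<->1, 4<->2, 3 stays 3
    return total  # ranges from 20 (lowest efficacy) to 100 (highest)

# Hypothetical key: assume the even-numbered items are negatively worded.
reverse_key = set(range(0, 20, 2))
example = [4, 2, 5, 1, 4, 2, 3, 3, 5, 2, 4, 1, 4, 2, 5, 2, 4, 1, 3, 2]
print(score_cses(example, reverse_key))
```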
Melchert and colleagues (1996) reported a Cronbach’s alpha of .91 and test-retest reliability of r = .85 (p-value not reported) in their initial psychometric testing of the CSES with counseling psychology students and licensed professional psychologists. In addition, Melchert and colleagues (1996) tested for convergent validity and reported an acceptable correlation (r = .83; p-value not reported) between the CSES and the Self-Efficacy Inventory (Friedlander & Snyder, 1983). Constantine (2001) found that the CSES had acceptable internal consistency, with a Cronbach’s alpha of .77 with counseling supervisees. Additionally, Pasquariello (2013) found that Cronbach’s alpha ranged from .85–.93 with doctoral psychology students. For the current study, the internal consistency reliability for the CSES was acceptable, with a Cronbach’s alpha of .96 (Sink & Stroh, 2006; Streiner, 2003).
Data Analysis
A longitudinal study design was employed for this investigation. After completion of the data collection process, participants’ responses were analyzed using descriptive data analysis, one-way analysis of variance (ANOVA), repeated measures ANOVA, paired-samples t-tests and mixed between/within-subjects ANOVA. Prior to analysis, the data were screened for outliers using the outlier labeling method (Hoaglin & Iglewicz, 1987; Hoaglin, Iglewicz, & Tukey, 1986), which resulted in identifying 11 cases with outliers. Therefore, Winsorized means were calculated based on adjacent data points to replace the outliers (Barnett & Lewis, 1994; Osborne & Overbay, 2004). The resulting data were checked for statistical assumptions and no violations were found. A sample size of 179 graduate counseling students was deemed appropriate for identifying a medium effect size (power = .80) at the .01 level for the employed data analysis procedures (Cohen, 1992).
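A minimal sketch of the outlier screening step is shown below, under the assumption that the outlier labeling rule used the conventional g = 2.2 multiplier on the interquartile range (Hoaglin & Iglewicz, 1987) and that flagged values were replaced with the nearest fence, one simple Winsorizing variant; the scores are fabricated and the authors’ exact replacement rule may have differed.

```python
import numpy as np

def label_and_winsorize(scores, g=2.2):
    """Flag values outside [Q1 - g*IQR, Q3 + g*IQR] and clip them to the fences."""
    x = np.asarray(scores, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - g * iqr, q3 + g * iqr
    outliers = (x < lower) | (x > upper)
    return np.clip(x, lower, upper), outliers

scores = [57, 61, 55, 59, 60, 12, 58, 62, 98, 56]   # fabricated CSES-like totals
cleaned, flagged = label_and_winsorize(scores)
print("flagged indices:", np.where(flagged)[0], "cleaned:", cleaned)
```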
Results
Counseling Trainees’ Self-Efficacy
Several one-way between-groups ANOVAs were conducted to examine the impact of each trainee’s age, gender, ethnicity and program track (i.e., SC, MHC or MCFC) on his or her level of self-efficacy at each of the three data collection points. There was no statistically significant relationship between self-efficacy and trainees’ age at the NSO data collection point, F(3, 178) = 1.35, p = .26; the CPO data collection point, F(3, 178) = .39, p = .76; or the FIGSM data collection point, F(3, 178) = .71, p = .55. Similarly, there was no statistically significant relationship between self-efficacy and trainees’ gender at the NSO data collection point, F(1, 178) = .48, p = .49; the CPO data collection point, F(1, 178) = .02, p = .88; or the FIGSM data collection point, F(1, 178) = .001, p = .97. There was no statistically significant relationship between self-efficacy and trainees’ ethnicity at the NSO data collection point, F(4, 178) = 1.03, p = .39; the CPO data collection point, F(4, 178) = .82, p = .51; or the FIGSM data collection point, F(4, 178) = .03, p = .97. Finally, there was no statistically significant relationship between self-efficacy and trainees’ program track at the NSO data collection point, F(2, 178) = .03, p = .97; the CPO data collection point, F(2, 178) = .40, p = .67; or the FIGSM data collection point, F(2, 178) = .04, p = .96.
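For readers unfamiliar with the procedure, here is a minimal sketch of one such between-groups comparison using SciPy; the group scores are fabricated and do not reproduce the statistics reported above.

```python
from scipy import stats

# Fabricated CSES totals at one data collection point, split by program track
mhc = [58, 61, 55, 63, 57, 60]
mcfc = [59, 56, 62, 58, 60]
sc = [57, 60, 59, 61, 55]

f_stat, p_value = stats.f_oneway(mhc, mcfc, sc)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")   # p > .05 -> no evidence of a track effect
```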
Counseling Trainees’ Self-Efficacy Over the Course of the Program
A one-way within-subjects repeated measures ANOVA was conducted to examine participants’ (N = 179) CSES scores at the three data points (i.e., NSO, CPO, FIGSM). Table 1 presents the descriptive statistics. Mauchly’s test indicated that the assumption of sphericity was violated, χ²(2) = .53, p < .001; therefore, the within-subjects effects were analyzed using the Greenhouse-Geisser correction (Greenhouse & Geisser, 1959). There was a statistically significant effect of time on participants’ CSES scores, F(1.3, 242.79) = 404.52, p < .001, partial η² = .69. Sixty-nine percent of the variance in CSES scores can be accounted for by the time participants spent in the program (a large effect size; Sink & Stroh, 2006; Streiner, 2003). As shown in Table 1, trainees scored higher on the CSES at each successive interval during their counselor preparation program.
Table 1
Descriptive Statistics for Self-Efficacy Across Data Collection Points
Data Collection Point | M | SD | Mdn | Mode | Range
New student orientation | 57.09 | 14.42 | 59 | 58 | 23–84 (61)
Clinical practicum orientation | 77.43 | 8.53 | 78 | 79 | 53–99 (46)
Final internship group supervision meeting | 83.04 | 6.80 | 84 | 76 | 66–95 (33)
Note. N = 179.
Several paired-samples t-tests were employed to evaluate the impact of time in the program on trainees’ self-efficacy. There was a statistically significant increase in trainees’ CSES scores from NSO to CPO, t(178) = 18.41, p < .001, η² = .65; the mean increase was 20.33, with a 95% confidence interval of 18.15 to 22.51. There was a statistically significant increase in trainees’ CSES scores from NSO to FIGSM, t(178) = 23.19, p < .001, η² = .75; the mean increase was 25.94, with a 95% confidence interval of 23.74 to 28.15. There was a statistically significant increase in trainees’ CSES scores from CPO to FIGSM, t(178) = 10.37, p < .001, η² = .38; the mean increase was 5.61, with a 95% confidence interval of 4.54 to 6.68. Overall, these results provide additional support indicating that trainees’ CSES scores increased significantly from the start of the program (NSO) to the end of the program (FIGSM). In addition, the span from the start of the program (NSO) to the initial clinical experience (CPO; i.e., completion of the core curriculum required for clinical work) showed the largest increase in scores among consecutive time ranges (i.e., NSO to CPO and CPO to FIGSM).
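Below is a minimal sketch of one of these paired comparisons, including the standard eta-squared conversion η² = t²/(t² + df); the pre/post scores are fabricated, but the conversion is consistent with the effect sizes reported above (e.g., t = 23.19 with df = 178 yields η² ≈ .75).

```python
from scipy import stats

# Fabricated pre/post CSES totals for the same trainees (paired design)
nso = [55, 60, 48, 62, 57, 53, 59, 61]
cpo = [74, 80, 70, 82, 77, 75, 79, 81]

t_stat, p_value = stats.ttest_rel(cpo, nso)
df = len(nso) - 1
eta_squared = t_stat**2 / (t_stat**2 + df)   # e.g., t = 23.19, df = 178 -> ~.75
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.4f}, eta^2 = {eta_squared:.2f}")
```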
A mixed between/within-subjects (split-plot) ANOVA was conducted to assess the interaction effect of trainees’ degree track (i.e., SC, MHC and MCFC) on their CSES scores across the three data points (i.e., NSO, CPO, FIGSM). Mauchly’s test indicated that the assumption of sphericity was violated, χ²(2) = .53, p < .001; therefore, the effects were analyzed using the Greenhouse-Geisser correction (Greenhouse & Geisser, 1959). There was no significant interaction between trainees’ degree track and the data collection points, F(2.72, 239.58) = .12, p = .94, indicating that trainees’ track did not have an effect on their CSES scores across the data collection points, despite the differences in their program requirements.
Discussion
We examined the relationship between entry-level counseling trainees’ demographic characteristics and their reported self-efficacy at three key points during their graduate preparation program. The findings from this investigation indicated no relationship between participants’ age, gender, ethnicity or program track and their reported self-efficacy at any point in the program. These results are similar to Tang and colleagues’ (2004) findings, which identified no relationship between counseling trainees’ self-efficacy and their age. However, Tang and colleagues (2004) did find that total coursework and internship hours completed had a statistically significant impact on trainees’ counseling self-efficacy.
The current investigation is unique in that it longitudinally studied master’s-level counseling trainees’ self-efficacy at developmental points from the beginning to the end of their preparation program, while other studies have examined the construct of counseling self-efficacy through a cross-sectional framework or focused on clinical experiences (e.g., Barbee et al., 2003; Cashwell & Dooley, 2001; Kozina et al., 2010; Melchert et al., 1996; Tang et al., 2004). The results of this investigation identified differences in trainees’ self-efficacy at the three collection points (large effect size), indicating that trainees had an increase in self-efficacy as a result of their participation in the program. Additionally, the results identified mean differences in trainees’ self-efficacy as a result of time in the program from NSO to CPO and CPO to FIGSM. These findings are logical given the theoretical framework of self-efficacy (Bandura, 1986); however, they are important and relevant because they provide new empirical evidence for Bandura’s (1986) theory of self-efficacy.
Trainees’ self-efficacy increased the most between NSO and CPO, indicating that completing initial prerequisite content coursework had a larger impact on trainees’ development of efficacy than their initial clinical experience. This finding is important, considering that prior research has shown that initial clinical work increases self-efficacy (Kozina et al., 2010), whereas the findings in this investigation indicate that the majority of efficacy growth occurred prior to initial clinical experiences. The present results are consistent with those of Tang and colleagues (2004), who found that trainees with more completed coursework and more completed internship hours reported higher levels of self-efficacy. The findings of the current study build upon Tang and colleagues’ (2004) findings, identifying the specific time within a counseling preparation program (i.e., initial coursework versus clinical experience) when the most growth in efficacy beliefs occurs.
The findings from the present investigation support models of education and supervision that utilize a social cognitive framework (e.g., Larson, 1998). Counselor self-efficacy represents a practitioner’s judgment about his or her ability to effectively counsel a client (Larson et al., 1992). Therefore, knowledge regarding counseling trainees’ development of self-efficacy during their preparation program, prior to their clinical experiences, affords supervisors, practitioners and researchers insight into student development. Much of the existing literature focuses on trainees’ initial clinical experiences, neglecting the large impact that early coursework has on the development of self-efficacy.
Implications for Counselor Education and Supervision
We offer several implications for counselor educators and clinical supervisors based on the results from this investigation. First, our findings demonstrate that master’s-level counseling trainees’ self-efficacy increases as a result of their experiences in their preparation program, providing further evidence for Bandura’s (1986) theory of self-efficacy. Counselor educators are expected to monitor trainees’ progress and development throughout their training (Bernard & Goodyear, 2013), and self-efficacy is an established measure of development (Larson & Daniels, 1998); therefore, it serves as an appropriate outcome consideration for counselor preparation programs. Counselor educators can make use of available self-efficacy measures that focus on competency (e.g., CSES; Melchert et al., 1996) and evaluate trainees at milestones in their program as a measure of student learning outcomes. It is logical that trainees entering counselor preparation programs need high levels of instruction, modeling and guidance due to their inexperience in the discipline. Opportunities for modeling counseling skills across topic areas, along with occasions for practicing skills, provide chances for trainees to build mastery experiences early in their program. As noted by Kozina and colleagues (2010), giving feedback on the discrepancy between trainees’ skill competency and perceived efficacy may promote reflection and development at key times throughout their training program (Daniels & Larson, 2001; Hoffman, Hill, Holmes, & Freitas, 2005).
In addition, our findings identified the importance of trainees’ counselor preparation coursework. Specifically, increased student course requirements to meet accreditation standards (e.g., Bobby, 2013; CACREP, 2009; Hagedorn, Culbreth, & Cashwell, 2012) are likely to improve trainees’ self-efficacy (Tang et al., 2004). Prior research indicates that increased coursework as a result of higher accreditation standards has an effect on counselor knowledge (Adams, 2006). Our findings build on existing literature by indicating that coursework has an impact on trainees’ self-efficacy prior to their initial clinical experiences. Counselor educators should be strategic and identify prerequisite courses to enhance students’ self-efficacy on vital topics (e.g., counseling skills, group counseling, diagnosis and treatment courses) prior to students’ initial work with clients.
An additional implication relates to trainees’ level of self-efficacy as they enter initial clinical experiences. Participants in this study entered practicum with high levels of self-efficacy regarding clinical competence; furthermore, they had low to moderate increases in self-efficacy between practicum and the end of their internship. As such, our findings challenge the notion that growth in self-efficacy occurs primarily during the clinical work phase of preparation (e.g., Kozina et al., 2010), because the majority of growth in self-efficacy for this study’s participants occurred prior to initial clinical experiences. On the other hand, participants’ reports of self-efficacy due to coursework may have been inflated, given that they had yet to complete their clinical work. Therefore, counselor educators should examine supervisees during their initial clinical work to assess their perceived efficacy and actual competence.
Limitations
As with all research, the present study has limitations. First, this study took place at a single counseling preparation program whose individual systemic factors may have influenced the participants’ experiences. Therefore, future studies should replicate the current investigation to confirm these findings. Second, this study utilized a single instrument that we identified based upon the research objectives for the study; however, more recently developed or validated instruments, or a collection of instruments measuring the same construct, may produce different findings or implications. Additional limitations include the following: (a) potential unknown/unseen extraneous variables, (b) practice effects of participants retaking the same instruments three times, (c) participant attrition (i.e., 79.91% response rate), (d) cross-generational differences and (e) test fatigue (Gall, Gall, & Borg, 2007). Nevertheless, longitudinal research is considered a complex and comprehensive method of examining individual participants’ change over time (Gall et al., 2007), offering a contribution to the counselor education and supervision literature.
Recommendations for Future Research
Future research might expand this study to examine changes in postgraduate practitioners’ self-efficacy over an extended period of time (i.e., a longitudinal study). Additionally, future researchers may examine: (a) the impact of self-efficacy on clinical outcomes, (b) the impact of clinical supervision on trainees’ self-efficacy and (c) the impact of initial clinical experiences (e.g., practicum) on trainees’ self-efficacy. Furthermore, researchers may examine other factors associated with counselor development (e.g., emotional intelligence, application of knowledge and theory, cognitive complexity), as well as the impact of specific pedagogical interventions on counseling trainees’ self-efficacy. Lastly, the findings from this study should be replicated at other institutions that train counseling professionals.
Counselor educators and supervisors promote counseling trainees’ professional competencies, enhancing trainees’ ability to provide effective counseling services to diverse clients. Research on counseling trainees’ development is imperative for understanding and attending to counseling students’ educational and supervisory needs. The findings from this study indicate that counseling trainees experience an increase in their self-efficacy during their preparation programs.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Adams, S. A. (2006). Does CACREP accreditation make a difference? A look at NCE results and answers. Journal of Professional Counseling: Practice, Theory, & Research, 34, 60–76.
American Counseling Association. (2014). 2014 ACA Code of Ethics. Alexandria, VA: Author.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122–147. doi:10.1037/0003-066X.37.2.122
Bandura, A. (1986). The explanatory and predictive scope of self-efficacy theory. Journal of Social and Clinical Psychology, 4, 359–373.
Bandura, A. (Ed.). (1995). Self-efficacy in changing societies. Cambridge, England: Cambridge University Press.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
Barbee, P. W., Scherer, D., & Combs, D. C. (2003). Prepracticum service-learning: Examining the relationship with counselor self-efficacy and anxiety. Counselor Education and Supervision, 43, 108–119. doi:10.1002/j.1556-6978.2003.tb01835.x
Barnes, K. L. (2004). Applying self-efficacy theory to counselor training and supervision: A comparison of two approaches. Counselor Education and Supervision, 44, 56–69. doi:10.1002/j.1556-6978.2004.tb01860.x
Barnett, V., & Lewis, T. (1994). Outliers in statistical data (3rd ed.). Chichester, England: Wiley & Sons.
Bernard, J. M., & Goodyear, R. K. (2013). Fundamentals of clinical supervision (5th ed.). Upper Saddle River, NJ: Pearson.
Beutler, L. E., Machado, P. P. P., & Neufeldt, S. A. (1994). Therapist variables. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 229–269). New York, NY: Wiley.
Bobby, C. L. (2013). The evolution of specialties in the CACREP standards: CACREP’s role in unifying the profession. Journal of Counseling & Development, 91, 35–43. doi:10.1002/j.1556-6676.2013.00068.x
Bodenhorn, N., & Skaggs, G. (2005). Development of the school counselor self-efficacy scale. Measurement and Evaluation in Counseling and Development, 38, 14–28.
Cashwell, T. H., & Dooley, K. (2001). The impact of supervision on counselor self-efficacy. The Clinical Supervisor, 20, 39–47. doi:10.1300/J001v20n01_03
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. doi:10.1037/0033-2909.112.1.155
Constantine, M. G. (2001). The relationship between general counseling self-efficacy and self-perceived multicultural counseling competence in supervisees. The Clinical Supervisor, 20, 81–90. doi:10.1300/J001v20n02_07
Council for Accreditation of Counseling and Related Educational Programs. (2009). 2009 standards. Retrieved from http://www.cacrep.org/2009standards.html
Daniels, J. A., & Larson, L. M. (2001). The impact of performance feedback on counseling self-efficacy and counselor anxiety. Counselor Education and Supervision, 41, 120–130. doi:10.1002/j.1556-6978.2001.tb01276.x
Friedlander, M. L., & Snyder, J. (1983). Trainees’ expectations for the supervisory process: Testing a developmental model. Counselor Education and Supervision, 22, 342–348. doi:10.1002/j.1556-6978.1983.tb01771.x
Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston, MA: Pearson/Allyn & Bacon.
Granello, D. H., & Young, M. E. (2012). Counseling today: Foundations of professional identity. Upper Saddle River, NJ: Pearson.
Greenhouse, S. W., & Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24, 95–112.
Hagedorn, W. B., Culbreth, J. R., & Cashwell, C. S. (2012). Addiction counseling accreditation: CACREP’s role in solidifying the counseling profession. The Professional Counselor, 2, 124–133. doi:10.15241/wbh.2.2.124
Hill, C. E., Roffman, M., Stahl, J., Friedman, S., Hummel, A., & Wallace, C. (2008). Helping skills training for undergraduates: Outcomes and prediction of outcomes. Journal of Counseling Psychology, 55, 359–370. doi:10.1037/0022-0167.55.3.359
Hoaglin, D. C., & Iglewicz, B. (1987). Fine-tuning some resistant rules for outlier labeling. Journal of the American Statistical Association, 82, 1147–1149. doi:10.1080/01621459.1987.10478551
Hoaglin, D. C., Iglewicz, B., & Tukey, J. W. (1986). Performance of some resistant rules for outlier labeling. Journal of the American Statistical Association, 81, 991–999. doi:10.1080/01621459.1986.10478363
Hoffman, M. A., Hill, C. E., Holmes, S. E., & Freitas, G. F. (2005). Supervisor perspective on the process and outcome of giving easy, difficult, or no feedback to supervisees. Journal of Counseling Psychology, 52, 3–13. doi:10.1037/0022-0167.52.1.3
Kozina, K., Grabovari, N., De Stefano, J., & Drapeau, M. (2010). Measuring changes in counselor self-efficacy: Further validation and implications for training and supervision. The Clinical Supervisor, 29, 117–127. doi:10.1080/07325223.2010.517483
Lambie, G. W., & Sias, S. M. (2009). An integrative psychological developmental model of supervision for professional school counselors-in-training. Journal of Counseling & Development, 87, 349–356. doi:10.1002/j.1556-6678.2009.tb00116.x
Larson, L. M. (1998). The social cognitive model of counselor training. The Counseling Psychologist, 26, 219–273.
Larson, L. M., & Daniels, J. A. (1998). Review of the counseling self-efficacy literature. The Counseling Psychologist, 26, 179–218. doi:10.1177/0011000098262001
Larson, L. M., Suzuki, L. A., Gillespie, K. N., Potenza, M. T., Bechtel, M. A., & Toulouse, A. L. (1992). Development and validation of the counseling self-estimate inventory. Journal of Counseling Psychology, 39, 105–120. doi:10.1037/0022-0167.39.1.105
McAuliffe, G., & Eriksen, K. (Eds.). (2011). Handbook of counselor preparation: Constructivist, developmental, and experiential approaches. Thousand Oaks, CA: Sage.
Melchert, T. P., Hays, V. L., Wiljanen, L. M., & Kolocek, A. K. (1996). Testing models of counselor development with a measure of counseling self-efficacy. Journal of Counseling & Development, 74, 640–644. doi:10.1002/j.1556-6676.1996.tb02304.x
Mullen, P. R., Lambie, G. W., & Conley, A. H. (2014). Development of the ethical and legal issues in counseling self-efficacy scale. Measurement and Evaluation in Counseling and Development, 47, 62–78. doi:10.1177/0748175613513807
Osborne, J. W., & Overbay, A. (2004). The power of outliers (and why researchers should always check for them). Practical Assessment, Research & Evaluation, 9(6). Retrieved from http://pareonline.net/getvn.asp?v=9&n=6
Pasquariello, C. D. (2013). Enhancing self-efficacy in the utilization of physical activity counseling: An online constructivist approach with psychologists-in-training. (Unpublished doctoral dissertation). Virginia Commonwealth University, Richmond, VA.
Rønnestad, M. H., & Skovholt, T. M. (2003). The journey of the counselor and therapist: Research findings and perspectives on professional development. Journal of Career Development, 30, 5–44. doi:10.1177/089484530303000102
Sink, C. A., & Stroh, H. R. (2006). Practical significance: The use of effect sizes in school counseling research. Professional School Counseling, 9, 401–411.
Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124, 240–261. doi:10.1037/0033-2909.124.2.240
Streiner, D. L. (2003). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80, 99–103. doi:10.1207/S15327752JPA8001_18
Sutton, J. M., Jr., & Fall, M. (1995). The relationship of school climate factors to counselor self-efficacy. Journal of Counseling & Development, 73, 331–336. doi:10.1002/j.1556-6676.1995.tb01759.x
Tang, M., Addison, K. D., LaSure-Bryant, D., Norman, R., O’Connell, W., & Stewart-Sicking, J. A. (2004). Factors that influence self-efficacy of counseling students: An exploratory study. Counselor Education and Supervision, 44, 70–80. doi:10.1002/j.1556-6978.2004.tb01861.x
Patrick R. Mullen, NCC, is an Assistant Professor at East Carolina University. Olivia Uwamahoro, NCC, is a doctoral candidate at the University of Central Florida. Ashley J. Blount, NCC, is a doctoral candidate at the University of Central Florida. Glenn W. Lambie, NCC, is a Professor at the University of Central Florida. Correspondence can be addressed to Patrick R. Mullen, 225A Ragsdale Bldg., Mail Stop 121, Greenville, NC 27858, mullenp14@ecu.edu.
Dec 2, 2014 | Article, Volume 4 - Issue 5
Bryn E. Schiele, Mark D. Weist, Eric A. Youngstrom, Sharon H. Stephan, Nancy A. Lever
Counseling self-efficacy (CSE), defined as one’s beliefs about his or her ability to effectively counsel a client, is an important precursor of effective clinical practice. While research has explored the association of CSE with variables such as counselor training, aptitude and level of experience, little attention has been paid to CSE among school mental health (SMH) practitioners. This study examined the influence of quality training (involving quality assessment and improvement, modular evidence-based practices, and family engagement/empowerment) versus peer support and supervision on CSE in SMH practitioners, and the relationship between CSE and practice-related variables. ANCOVA indicated similar mean CSE changes for counselors receiving the quality training versus peer support. Regression analyses indicated that regardless of condition, postintervention CSE scores significantly predicted quality of practice, knowledge of evidence-based practices (EBP) and use of EBP specific to treating depression. Results emphasize the importance of CSE in effective practice and the need to consider mechanisms to enhance CSE among SMH clinicians.
Keywords: self-efficacy, school mental health, evidence-based practices, counselor training, depression
There are major gaps between the mental health needs of children and adolescents and the availability of effective services to meet such needs (Burns et al., 1995; Kataoka, Zhang, & Wells, 2002). This recognition is fueling efforts to improve mental health services for youth in schools (Mellin, 2009; Stephan, Weist, Kataoka, Adelsheim, & Mills, 2007). At least 20% of all youth have significant mental health needs, with roughly 5% experiencing substantial functional impairment (Leaf, Schultz, Kiser, & Pruitt, 2003). Further, less than one third of children with such mental health needs receive any services at all.
The President’s New Freedom Commission on Mental Health (2003) documented the position of schools as a point of contact and universal natural setting for youth and families, recognizing schools as a key factor in the transformation of child and adolescent mental health services (Stephan et al., 2007). In the past 2 decades, there has been a significant push for full-service schools that expand beyond a sole focus on education, and employ community mental health practitioners to respond to the emotional and behavioral needs of students (Conwill, 2003; Dryfoos, 1993; Kronick, 2000). The education sector is the most common provider of mental health services for children and adolescents (Farmer, Burns, Phillips, Angold, & Costello, 2003), with 70%–80% of youth who receive any mental health services obtaining them at school (Burns et al., 1995; Rones & Hoagwood, 2000). Therefore, attention must be paid to the quantity, quality and effectiveness of school mental health (SMH) services.
School Mental Health
In recent years, SMH programs, supported by both school staff (e.g., school psychologists, social workers, counselors) and school-based community mental health clinicians, have emerged as a promising approach to the provision of mental health services for students and families (Weist, Evans, & Lever, 2003). The growth of these programs has facilitated investigation of what constitutes high-quality SMH service provision (Nabors, Reynolds, & Weist, 2000; Weist et al., 2005). This work has been supported and furthered by the Center for School Mental Health, a federally funded technical assistance and training program to advance SMH programs within the United States. In collaboration with other SMH centers (e.g., UCLA Center for Mental Health in Schools) and interdisciplinary networks focused on school health, consensus was reached to develop a guiding framework defining best practices in SMH (Weist et al., 2005). These principles call for appropriate service provision for children and families, implementation of interventions to meet school and student needs, and coordination of mental health programs in the school with related community resources, among other things. For further explication of the framework and its development, see Weist et al. (2005).
Simultaneously, research developments through the Center for School Mental Health facilitated implementation of modular evidence-based practices (EBP; see Chorpita, Becker & Daleiden, 2007; Chorpita & Daleiden, 2009). A modular approach for intervention involves training clinicians in core, effective strategies for disorders frequently encountered in children (e.g., attention-deficit/hyperactivity disorder [ADHD], anxiety, depression, disruptive behavior disorders [DBD]). This approach enables individualized, flexible implementation of evidence-based strategies without the constraints of a manualized approach (Curry & Reinecke, 2003). The third guiding component to enhance quality in SMH practices is development of strategies to effectively engage and empower families (see Hoagwood, 2005).
Despite the development of such a framework, SMH clinicians often struggle to implement high-quality, evidence-based services (Evans et al., 2003; Evans & Weist, 2004). These clinicians are constrained by a lack of sufficient time, training in EBP, appropriate supervision, and internal and external resources (Shernoff, Kratochwill, & Stoiber, 2003). For instance, a survey by Walrath et al. (2004) of Baltimore SMH clinicians suggested that the ratio of clinicians to students was 1:250, and in order to meet the mental health needs of students, clinicians would have to increase clinical hours by 79 per week to remediate student difficulties. Additionally, the school environment is often characterized as chaotic, hectic and crisis-driven (Langley, Nadeem, Kataoka, Stein, & Jaycox, 2010), with SMH clinicians citing difficulties implementing EBP given the schedules of students. As a result of the challenges limiting use of EBP in daily SMH practice, researchers are now evaluating the influences on successful delivery of EBP in schools, including the personal qualities of SMH professionals (e.g., attitudes, beliefs, skills, training; Berger, 2013), as well as environmental factors (e.g., school administrative support, access to community resources, sufficient space for practice; Powers, Edwards, Blackman, & Wegmann, 2013) that may predict high-quality services (see Weist et al., 2014).
Previous work examining factors related to the provision of evidence-based SMH services by SMH clinicians suggested that the highest-rated facilitators of effective SMH practice were personal characteristics (e.g., desire to deliver mental health services), attitudes and openness toward use of EBP, and adequate training (Beidas et al., 2012; Langley et al., 2010). Alternatively, SMH clinicians reported a number of administrative, school site and personal barriers as significant obstacles to appropriate service delivery; such barriers include lack of sufficient training, overwhelming caseload, job burnout and personal mental health difficulties (Langley et al., 2010; Suldo, Friedrich, & Michalowski, 2010).
While researchers have evaluated the influence of SMH provider personal characteristics in relation to the delivery of high-quality SMH services, little attention has been paid to the importance of counseling self-efficacy (CSE). CSE is widely accepted as an important precursor to competent clinical practice (Kozina, Grabovari, De Stefano, & Drapeau, 2010). Further, building CSE is considered an important strategy in active learning when providing training in evidence-based therapies (Beidas & Kendall, 2010), and CSE in EBP is believed to be essential to implementation (Aarons, 2005). However, researchers have yet to systematically include measures of CSE in studies of EBP utilization by SMH providers.
Self-Efficacy
Social-cognitive theory and its central construct, self-efficacy, have received much attention in the psychological literature, with more than 10,000 studies including these as central variables in the past 25 years (Judge, Jackson, Shaw, Scott, & Rich, 2007). Self-efficacy is defined as an individual’s beliefs about his or her ability to achieve desired levels of performance (Bandura, 1994), and it plays a key role in the initiation and maintenance of human behavior (Iannelli, 2000). Given the influence of self-efficacy expectancies on performance, researchers have evaluated how self-efficacy impacts a variety of action-related domains, including career selection (e.g., Branch & Lichtenberg, 1987; Zeldin, Britner, & Pajares, 2008), health-behavior change (e.g., Ramo, Prochaska, & Myers, 2010; Sharpe et al., 2008) and work-related performance (e.g., Judge et al., 2007; Stajkovic & Luthans, 1998). Specific to the mental health field, previous investigations have focused on how self-efficacy is related to counseling performance.
Counseling Self-Efficacy
The construct of CSE is defined as an individual’s beliefs about his or her ability to effectively counsel a client in the near future (Larson & Daniels, 1998). Studies of the structure and influence of CSE among a variety of mental health professionals, including counseling trainees, master’s-level counselors, psychologists, school counselors and students from related professions (e.g., clergy, medicine) have yielded mixed findings. Social desirability, counselor personality, aptitude, achievement (Larson et al., 1992) and counselor age (Watson, 2012) have shown small to moderate associations with CSE. CSE also is related to external factors, including the perceived and objective work environment, supervisor characteristics, and level or quality of supervision (Larson & Daniels, 1998).
However, the relationship of CSE with level of training is unclear. For the most part, CSE is stronger for individuals with at least some counseling experience than for those with none (Melchert, Hays, Wiljanen, & Kolocek, 1996; Tang et al., 2004). While the amount of training and education obtained have been reported as statistically significant predictors of degree of CSE (Larson & Daniels, 1998; Melchert et al., 1996), more recent work has not supported the existence of such predictive relationships (Tang et al., 2004). It also has been suggested that once a counselor has obtained advanced graduate training beyond the master’s level, the influence of experience on CSE becomes rather minimal (Larson, Cardwell, & Majors, 1996; Melchert et al., 1996; Sutton & Fall, 1995).
Some work has been done to evaluate interventions aimed at enhancing CSE by utilizing the four primary sources of self-efficacy as defined by Bandura (1977; i.e., mastery, modeling, social persuasion, affective arousal). In two studies involving undergraduate recreation students, Munson, Zoerink, and Stadulis (1986) found that modeling with role-play and visual imagery enhanced CSE more than a wait-list control condition. Larson et al. (1999) attempted to extend these findings with a sample of practicum counseling trainees and found that trainees’ self-evaluations of session success moderated postintervention CSE, with perception of success significantly impacting the potency of the role-play scenarios. The same effect was not found for individuals in the videotape condition.
In addition to impacting clinician performance, CSE has been reported to indirectly impact positive client outcome (Urbani et al., 2002); for example, CSE has been associated with more positive outcomes for clients, more positive self-evaluations and fewer anxieties regarding counseling performance (Larson & Daniels, 1998). Thus, increasing CSE, which decreases clinicians’ anxiety, is important for client outcomes, as anxiety is reported to decrease level of clinical judgment and performance (Urbani et al., 2002). While there is some evidence that CSE is influential for client outcomes, minimal work has been done to evaluate this relationship.
CSE has been evaluated in a variety of samples; however, little work has been done to evaluate CSE of SMH practitioners and the factors that play into its development. Additionally, although some investigation has been conducted on factors that impact SMH practitioners’ abilities and performance, CSE is an element that seldom has been studied.
The current study aimed to examine the influence of a quality assessment and improvement (QAI) intervention on CSE in SMH practitioners, as well as the importance of CSE in regard to practice-related domains. The primary question of interest was, Does an intervention focused on QAI (target) result in higher levels of CSE than a comparison condition involving a focus on professional wellness (W) and supervision (control)? We investigated the influence of differential quality training and supervision on one’s level of CSE by comparing postintervention CSE scores between each condition after evaluating preintervention equivalency of CSE levels. Thus, we hypothesized that long-term exposure to the QAI intervention, family engagement/empowerment and modular EBP would result in significantly higher reports of CSE from those exposed to the QAI intervention than those exposed to the comparison intervention. Based on previous research, it is possible that specific counselor characteristics (e.g., age, experience) would predict CSE, such that individuals who are older and have more experience counseling children and adolescents would have higher CSE (Melchert et al., 1996; Tang et al., 2004; Watson, 2012). Thus, when evaluating training effects, these variables were included as covariates in the analysis of the relation between CSE and training.
Secondarily, this study aimed to evaluate the relation of professional experiences to CSE following exposure to the intervention. For this aim, the research question was, Does postintervention level of CSE predict quality of self-reported SMH practice, as well as knowledge and use of EBP? We hypothesized that level of CSE would predict quality of SMH practice, as well as attitude toward, knowledge and use of EBP regardless of intervention condition.
Method
This article stems from a larger evaluation of a framework to enhance the quality of SMH (Weist et al., 2009), funded by the National Institute of Mental Health (#1R01MH71015; 2003–2007; M. Weist, PI). As part of a 12-year research program on quality and EBP in SMH, researchers conducted a 2-year, multisite randomized controlled trial, drawing on community agencies in Delaware, Maryland, and Texas, of a framework for high-quality and effective practice in SMH (EBP, family engagement/empowerment, and systematic QAI) as compared to an enhanced treatment-as-usual condition focused on personal and school staff wellness. Only the methods pertaining to the aims of the current study are included here (see Stephan et al., 2012, and Weist et al., 2009 for more comprehensive descriptions).
Participants
A sample of 72 SMH clinicians (i.e., clinicians employed by community mental health centers to provide clinical services within the school system) from the three SMH sites participated for the duration of the study (2004–2006), and provided complete data for all study measures via self-report. All clinicians were employed by community-based agencies with an established history of providing SMH prevention and intervention services to elementary, middle and high school students in both general and special education programs.
A total of 91 clinicians participated over the course of the study, with 64 in Year 1 and 66 in Year 2; 27 clinicians were involved only in Year 2. Of the Year 1 sample (35 QAI and 29 W), 24 participants did not continue into Year 2 (13 QAI and 11 W). Dropout did not differ between conditions (37% QAI versus 38% comparison). Investigations in this study focused on individuals who had completed at least one year of the study and had submitted pre- and postintervention measures. The 72 participants were predominantly female (61 women, 11 men) and were 36 years old on average (SD = 11.03). In terms of race and ethnicity, participants identified as Caucasian (55%), African American (26%), Hispanic (18%), and Other (1%). Participants reported the following educational levels: graduate degree (83%), some graduate coursework (13%), bachelor’s degree (3%), and some college (1%). In terms of experience, clinicians had roughly 6 years of prior counseling experience and had worked for their current agency for 3 years on average. The obtained sample is reflective of SMH practitioners throughout the United States (Lewis, Truscott, & Volker, 2008).
Measures
Counseling self-efficacy. Participants’ CSE was measured using the Counselor Self-Efficacy Scale (Sutton & Fall, 1995). The measure was designed for use with school counselors and was developed with a sample of public school counselors in Maine. Sutton and Fall modified a teacher efficacy scale (Gibson & Dembo, 1984), resulting in a 33-item measure reflecting CSE and outcome expectancies. Results of a principal-component factor analysis demonstrated initial construct validity, indicating a three-factor structure with adequate internal consistency for the three factors (.67–.75). However, the structure of the measure has received criticism, with some researchers arguing that the third factor does not measure outcome expectancies as defined by social-cognitive theory (Larson & Daniels, 1998). Thus, we used the entire 33-item scale as a measure of overall CSE. Respondents rated each item on a 6-point Likert scale (1 = strongly disagree, 6 = strongly agree). We made slight language modifications to make the scale more applicable to this sample’s work (Weist et al., 2009); for instance, guidance program became counseling program. CSE was measured in both conditions at the beginning and end of Years 1 and 2 of the intervention program.
Quality of school mental health services. The School Mental Health Quality Assessment Questionnaire (SMHQAQ) is a 40-item research-based measure developed by the investigators of the larger study to assess 10 principles for best practice in SMH (Weist et al., 2005; Weist et al., 2006), including the following: “Programs are implemented to address needs and strengthen assets for students, families, schools, and communities” and “Students, families, teachers and other important groups are actively involved in the program’s development, oversight, evaluation, and continuous improvement.”
At the end of Year 2, clinicians rated the degree to which each principle was present in their own practice on a 6-point Likert scale ranging from not at all in place to fully in place. Given that results from a principal components analysis indicated that all 10 principles loaded heavily on a single strong component, analyses focused primarily on SMHQAQ total scores. Aside from factor analytic results, validity estimates are unavailable. Internal consistency as measured by coefficient alpha was very strong (.95).
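For readers who want to see the computation behind such reliability figures, the following is a minimal sketch of coefficient alpha in Python; the score matrix is hypothetical, and the formula is Cronbach’s standard definition rather than anything specific to the larger study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical ratings: 5 respondents x 4 items on a 6-point scale.
scores = np.array([
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [6, 6, 5, 6],
    [2, 3, 2, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # ~0.98 for this toy matrix
```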
Knowledge and use of evidence-based practices. The Practice Elements Checklist (PEC) is based on the Hawaii Department of Health’s comprehensive summary of top modular EBP elements (Chorpita & Daleiden, 2007). Principal investigators of the larger study created the PEC in consultation with Bruce Chorpita of the University of California, Los Angeles, an expert in mental health technologies for children and adolescents. The PEC asks clinicians to rate the eight skills found most commonly across effective treatments for four disorder areas (ADHD, DBD, depression, and anxiety). Respondents used a 6-point Likert scale to rate their current knowledge of each practice element (1 = none, 6 = significant), the frequency with which they use the element in their own practice, and the frequency with which they treat children whose primary presenting issue falls within one of the four disorder areas (1 = never, 6 = frequently).
In addition to total knowledge and total frequency subscales (scores ranging from 4–24), research staff calculated four knowledge and four frequency subscale scores (one per disorder area) by averaging responses across the practice elements for each disorder area (scores ranging from 1–6). A total PEC score was also calculated by summing all subscale scores, yielding a total ranging from 16–92. Although this approach counted each item twice, it allowed examination of how total knowledge and skill usage, as well as skills in specific disorder areas, related to CSE. While internal consistencies were excellent for each of the subscales (.84–.92), the validity of the measure has yet to be evaluated. Clinicians completed the PEC at the end of Year 2.
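As a concrete illustration of the PEC scoring just described, the sketch below computes the per-area knowledge subscales and the total knowledge score with pandas. The column names and example responses are hypothetical; only the averaging and summing logic mirrors the text.

```python
import pandas as pd

areas = ["adhd", "dbd", "depression", "anxiety"]

# Hypothetical knowledge ratings: 8 practice elements per disorder
# area, each rated 1-6, for two example clinicians.
data = {f"{area}_k{i}": [4, 5] for area in areas for i in range(1, 9)}
df = pd.DataFrame(data)

# One knowledge subscale per area: the mean across that area's eight
# practice elements (possible range 1-6).
for area in areas:
    items = [f"{area}_k{i}" for i in range(1, 9)]
    df[f"{area}_knowledge"] = df[items].mean(axis=1)

# Total knowledge: the sum of the four area subscales (range 4-24);
# the frequency-of-use subscales would be computed the same way.
df["total_knowledge"] = df[[f"{a}_knowledge" for a in areas]].sum(axis=1)
print(df[["total_knowledge"]])  # 16.0 and 20.0 for these two rows
```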
Study Design
SMH clinicians were recruited from their community agencies approximately 1 month prior to the initial staff training. After providing informed consent, clinicians completed a set of questionnaires, which included demographic information, level of current training and CSE, and were randomly assigned to the QAI intervention or the W intervention. Four training events were provided for participants in both conditions (at the beginning and end of both Years 1 and 2). During the four training events, individuals in the QAI condition received training in the three elements reviewed previously. For individuals involved in the W (i.e., comparison) condition, training events focused on general staff wellness, including stress management, coping strategies, relaxation techniques, exercise, nutrition and burnout prevention.
At each site, senior clinicians (i.e., licensed mental health professionals with a minimum of a master’s degree and 3 years’ experience in SMH) were chosen to serve as project supervisors for the condition to which they were assigned. These clinicians were not considered participants and maintained their positions for the duration of the study. Over the course of the project, each research supervisor dedicated one day per week to the study and was assigned a group of roughly 10 clinicians to supervise. Within the QAI condition, supervisors held weekly meetings with small groups of five clinicians to review QAI processes and activities in their schools, as well as strategies for using the evidence base; in contrast, there was no study-related school support for staff in the W condition.
Results
Preliminary Analyses and Scaling
Analyses were conducted using SPSS, version 20; tests of statistical significance used a Bonferroni correction (Cohen, Cohen, West, & Aiken, 2003), resulting in a two-tailed alpha of .0045. To facilitate comparisons between variables, staff transformed raw scores to Percentage of Maximum Possible (POMP) scores (Cohen, Cohen, Aiken, & West, 1999), so that each score ranges from 0 to 100%. Unlike z scores, which assume a normal distribution, POMP scoring makes no assumptions about the shape of the distributions. POMP scores are an easily understood and interpreted metric and cumulatively provide a basis for agreement on the size of material effects in the domain of interest (i.e., interventions to enhance quality of services and use of EBP; Cohen et al., 1999).
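To make the scaling concrete, here is a minimal POMP transformation in Python, along with the Bonferroni arithmetic. The raw scores are hypothetical, the 33–198 range follows from the 33-item, 6-point CSE scale described above, and reading .0045 as .05 divided by the 11 regressions in Table 2 is our assumption rather than something stated in the text.

```python
import numpy as np

def pomp(raw, scale_min, scale_max):
    """Percentage of Maximum Possible: rescale raw scores to 0-100%."""
    return 100 * (raw - scale_min) / (scale_max - scale_min)

# 33 CSE items rated 1-6, so totals range from 33 to 198
# (hypothetical raw totals for three clinicians).
raw_totals = np.array([140, 152, 166])
print(pomp(raw_totals, 33, 198))  # [64.85 72.12 80.61]

# Bonferroni correction: familywise alpha divided by number of tests;
# .05 / 11 ~= .0045 (assuming the 11 regressions in Table 2).
print(round(0.05 / 11, 4))  # 0.0045
```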
Primary Aim
Initial analyses confirmed pretreatment equivalence for the two conditions, t(72) = –.383, p = .703. For individuals in the QAI condition, preintervention CSE scores averaged 71.9% of maximum possible (SD = .09), while those in the comparison condition averaged 71.3% of maximum possible (SD = .08). These scores were comparable to levels of CSE observed in counseling psychologists with similar amounts of prior experience (Melchert et al., 1996).
Correlation analyses suggested that pretreatment CSE was significantly associated with age (r = .312, p = .008), race (r = –.245, p = .029), years of counseling experience (r = .313, p = .007), and years with the agency (r = .232, p = .048). Thus, these variables were included as covariates in an analysis of covariance (ANCOVA) evaluating changes in CSE between the QAI and comparison conditions. Results indicated a nonsignificant difference in change in CSE from pre- to postintervention between conditions, F(72) = .013, p = .910. For individuals in the QAI condition, postintervention CSE scores averaged 73.1% of maximum possible (SD = .07), and for individuals in the comparison condition, CSE scores averaged 72.8% of maximum possible (SD = .08). Additionally, when looking across conditions, results indicated a nonsignificant change in level of CSE from pre- to postintervention, F(72) = .001, p = .971. Across conditions, clinicians reported roughly similar levels of CSE at pre- and postintervention time points (72% vs. 73% of maximum possible); see Table 1.
Table 1
Analysis of Covariance (ANCOVA) Summary of Change in CSE

| Source | df | F | p | Partial η² |
|---|---|---|---|---|
| CSE | 1 | .001 | .971 | .000 |
| CSE*Condition | 1 | .013 | .910 | .000 |
| CSE*Age | 1 | .281 | .598 | .004 |
| CSE*Race | 1 | 1.190 | .279 | .018 |
| CSE*Years of Experience | 1 | .032 | .859 | .000 |
| CSE*Years with Agency | 1 | .003 | .955 | .000 |
| Error | 66 | | | |

Note. N = 72.
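The ANCOVA summarized in Table 1 can be expressed as an ordinary least squares model with the condition factor and the four covariates. The sketch below, using simulated stand-in data and hypothetical column names, shows one plausible way to set it up with statsmodels; the original analyses were run in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 72  # matches the analyzed sample size

# Simulated stand-ins for the study variables (hypothetical values).
df = pd.DataFrame({
    "condition": rng.choice(["QAI", "W"], size=n),
    "age": rng.integers(25, 60, size=n),
    "race": rng.choice(["A", "B", "C"], size=n),
    "years_experience": rng.integers(0, 15, size=n),
    "years_with_agency": rng.integers(0, 10, size=n),
    "cse_pre": rng.normal(71.5, 8.0, size=n),
})
df["cse_post"] = df["cse_pre"] + rng.normal(1.0, 5.0, size=n)
df["cse_change"] = df["cse_post"] - df["cse_pre"]

# ANCOVA as OLS: effect of condition on pre-to-post change in CSE,
# adjusting for age, race, years of experience, and years with agency.
model = smf.ols(
    "cse_change ~ C(condition) + age + C(race)"
    " + years_experience + years_with_agency",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))
```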
Secondary Aim
To investigate the influence of level of CSE on quality and practice elements in counseling, a series of individual regressions were conducted with level of postintervention CSE as the predictor variable, and indicators of attitudes toward EBP, knowledge and use of EBP, and use of quality mental health services as the outcome variables in separate analyses.
Table 2 shows that level of postintervention CSE significantly predicted the following postintervention variables: SMHQAQ quality of services (R2 = .328, F(60) = 29.34, p < .001); knowledge of EBP for ADHD (R2 = .205, F(46) = 11.54, p = .001), depression (R2 = .288, F(46) = 18.17, p < .001), DBD (R2 = .236, F(46) = 13.92, p = .001), and anxiety (R2 = .201, F(46) = 10.81, p = .002); usage of EBP specific to treating depression (R2 = .301, F(46) = 19.34, p < .001); and total knowledge of EBP (R2 = .297, F(44) = 18.20, p < .001). Results further indicated that postintervention CSE did not significantly predict usage of EBP for ADHD (R2 = .010, F(45) = .457, p = .502), DBD (R2 = .024, F(45) = 1.100, p = .300), or anxiety (R2 = .075, F(43) = 3.487, p = .069), nor total usage of EBP (R2 = .090, F(43) = 4.244, p = .045).
Table 2
Results of Linear Regressions Between Level of Postintervention CSE and Outcome Variables

| Variables | Beta | R² | Adjusted R² | F | p |
|---|---|---|---|---|---|
| SMH Quality | 0.573 | 0.328 | 0.317 | 29.337 | 0.000 |
| EBP ADHD – Knowledge | 0.452 | 0.205 | 0.187 | 11.583 | 0.001 |
| EBP ADHD – Usage | 0.100 | 0.010 | –0.012 | 0.457 | 0.502 |
| EBP Depression – Knowledge | 0.536 | 0.288 | 0.272 | 18.168 | 0.000 |
| EBP Depression – Usage | 0.548 | 0.301 | 0.285 | 19.337 | 0.000 |
| EBP DBD – Knowledge | 0.486 | 0.236 | 0.219 | 13.922 | 0.001 |
| EBP DBD – Usage | 0.154 | 0.024 | 0.002 | 1.100 | 0.300 |
| EBP Anxiety – Knowledge | 0.448 | 0.201 | 0.182 | 10.811 | 0.002 |
| EBP Anxiety – Usage | 0.274 | 0.075 | 0.053 | 3.487 | 0.069 |
| EBP Total Knowledge | 0.545 | 0.297 | 0.281 | 18.197 | 0.000 |
| EBP Total Usage | 0.300 | 0.090 | 0.069 | 4.244 | 0.045 |

Note. To control for experiment-wise error, a Bonferroni correction was used and significance was evaluated at the 0.0045 level.
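Each row of Table 2 corresponds to one simple linear regression with postintervention CSE as the sole predictor. The following sketch, with simulated data and hypothetical column names, shows how the R², F, and p values in the table would be obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60  # roughly the per-analysis sample sizes reported above

# Simulated stand-ins: postintervention CSE plus two outcome measures.
df = pd.DataFrame({"cse_post": rng.normal(73.0, 7.0, size=n)})
df["smh_quality"] = 0.5 * df["cse_post"] + rng.normal(0, 5, size=n)
df["ebp_depression_usage"] = 0.5 * df["cse_post"] + rng.normal(0, 6, size=n)

# One regression per outcome, mirroring the analyses behind Table 2.
for outcome in ["smh_quality", "ebp_depression_usage"]:
    fit = smf.ols(f"{outcome} ~ cse_post", data=df).fit()
    print(f"{outcome}: R2 = {fit.rsquared:.3f}, "
          f"F = {fit.fvalue:.2f}, p = {fit.f_pvalue:.4f}")
```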
Discussion
While there has been some previous examination of the association between training and CSE, results have been mixed (see Larson & Daniels, 1998), and no such evaluations have been conducted within the context of SMH services. The current study stemmed from a larger evaluation of a framework to enhance the quality of SMH, targeting quality service provision, EBP, and enhancement of family engagement and empowerment (see Weist et al., 2009).
The present study had two primary aims. The first goal was to evaluate differences in level of CSE from pre- to postintervention between two groups of SMH clinicians. We expected that those who received information, training and supervision on QAI and best practice in SMH would report higher levels of CSE postintervention than those in the W condition. The secondary aim was to evaluate whether clinician reports of postintervention CSE would serve as predictors of quality of SMH practice, as well as knowledge and use of EBP. Given the influence that clinician CSE has been found to have on practice-related variables in previous studies (see Larson & Daniels, 1998), we hypothesized that higher level of CSE would significantly predict higher quality of SMH practice, and knowledge and usage of EBP.
Controlling for age, race, years of experience and years with the agency, findings did not confirm the primary hypothesis. No statistically significant differences in clinician reports of CSE from pre- to postintervention were observed between the QAI and W conditions. Regarding the secondary aim, however, clinician postintervention level of CSE was found to serve as a significant predictor of quality of practice; total knowledge of EBP specific to treating ADHD, DBD, anxiety and depression; and usage of EBP specific to treating depression. Findings are consistent with previous literature suggesting that CSE levels influence performance in a number of practice-related domains (Larson & Daniels, 1998).
Results did not support a significant predictive relation between CSE level and usage of EBP specific to treating ADHD, DBD, and anxiety. The failure to find an association may stem from evaluating EBP usage across conditions, as limited power precluded running the analyses by condition. Results from the original study suggested that individuals in the QAI condition were more likely to use established EBP in treatment (see Weist et al., 2009). Thus, as provider characteristics including CSE (Aarons, 2005) are known to be associated with adoption of EBP, examining these associations across conditions may have produced null findings.
While current results did support the importance of high CSE regarding practice-related domains, there was no significant difference in level of CSE between those who received information, training and supervision in QAI; use of EBP; and family engagement and empowerment compared to those in the W condition. Findings from the current study contrast with other research that has documented improvements in CSE following targeted interventions. Previous targeted interventions to increase CSE have resulted in positive outcomes when using micro-skills training and mental practice (Munson, Stadulis, & Munson, 1986; Munson, Zoerink, & Stadulis, 1986), role-play and visual imagery (Larson et al., 1999), a prepracticum training course (Johnson, Baker, Kopala, Kiselica, & Thompson, 1989) and practicum experiences (Larson et al., 1993).
As a curvilinear relation is reported to exist between CSE and level of training (Larson et al., 1996; Sutton & Fall, 1995), it may be that this sample of postlicensure clinicians had enough prior training and experience that the unique experiences gained through the QAI and W conditions had minimal impact on overall CSE. Many prior studies utilized students untrained in counseling and interpersonal skills (Munson, Zoerink, & Stadulis, 1986) and beginning practicum students and trainees (Easton, Martin, & Wilson, 2008; Johnson et al., 1989; Larson et al., 1992, 1993, 1999). Regarding the usefulness of a prepracticum course and practicum experiences for CSE, significant increases were observed only in beginning practicum students, with no significant changes seen in advanced students. Additionally, no previous studies have evaluated the success of CSE interventions with postlicensure clinicians.
It also is plausible that failure to detect an effect was due to the high preintervention levels of CSE observed across clinicians. At baseline, clinicians in the QAI condition reported CSE levels of roughly 71.9% of maximum possible, whereas those in the W condition reported levels of 71.3% of maximum possible. Previous research has found high levels of CSE among practitioners with comparable amounts of previous experience, with those having 5–10 years of experience reporting mean CSE levels of 4.35 out of five points possible (Melchert et al., 1996). Thus, the high average level of CSE may be accounted for by the amount of previous education and training reported by clinicians, and the observed increase of roughly 1.5% at postintervention may reflect the sample composition.
Limitations
Due to the small sample size, the power to detect changes in CSE was modest. Moreover, because the sample size was increased partway through the study to improve power, the interval between pre- and postintervention reports of CSE varied within the sample: some participants completed only a year or a year and a half of the study instead of the full 2 years.
A further limitation was reliance on self-reported information from the participating clinicians regarding their level of CSE, quality of practice, and knowledge and usage of EBP. Thus, a presentation bias may have been present in that clinicians may have reported stronger confidence in their own abilities than they felt in reality, or may have inflated responses on their knowledge and usage of EBP.
An additional limitation concerns the fact that CSE was not included as an explicit factor in training. Increasing CSE was not an explicit goal, and training and supervision were not tailored to make increases in CSE more likely. The relation between supervisory feedback and CSE also may depend on clinicians’ developmental level and pretraining CSE level (Larson et al., 1999; Munson, Zoerink, & Stadulis, 1986), with untrained individuals reporting the largest increases. Thus, increased performance feedback may or may not have enhanced CSE within this sample.
Future Directions
Based on these findings, future work is suggested to evaluate ways in which CSE can be increased among clinicians. As the training procedures utilized in this study failed to change CSE, it is important to determine what facets of CSE, if any, are conducive to change. Although the current study evaluated broad CSE, Bandura (1977) theorized that overall self-efficacy is determined by the efficacy and outcome expectancies an individual has regarding a particular behavior. Efficacy expectancies are individuals’ beliefs regarding their capabilities to successfully perform the requisite behavior. Efficacy expectancies serve mediational functions between individuals and their behavior, such that if efficacy expectancies are high, individuals will engage in the behavior because they believe that they will be able to successfully complete it. Outcome expectancies, on the other hand, involve individuals’ beliefs that a certain behavior will lead to a specific outcome, and mediate the relation between behaviors and outcomes. Therefore, when outcome expectancies are low, individuals will not execute that behavior because they do not believe it will lead to a specified outcome.
As with the current study, the majority of the existing studies investigating change in CSE have evaluated broad CSE without breaking the construct down into the two types of expectancies (i.e., efficacy expectancies and outcome expectancies). Larson and Daniels (1998) found that fewer than 15% of studies on CSE examined outcome expectancies, and of the studies that did, only 60% operationalized outcome expectancies appropriately. While clinicians may believe that they can effectively perform a counseling strategy, they may not implement said strategy if they do not believe that it will produce client change. Ways in which these concepts can be evaluated may include asking, for example, for level of confidence in one’s ability to effectively deliver relaxation training, as well as for level of confidence that relaxation training produces client change. Based on the dearth of work in this area, future efforts should involve breaking down CSE and correctly operationalizing efficacy expectancies and outcome expectancies to examine what sorts of influences these expectancies have on overall CSE.
Additionally, future efforts to investigate the enhancement of CSE may evaluate the pliability of this construct depending on level of training. Is CSE more stable among experienced clinicians compared to counseling trainees? Should CSE enhancement be emphasized among new clinicians? Or are different methods needed to increase one’s CSE depending on previous experience? This goal may be accomplished by obtaining sizeable, representative samples with beginning, moderate and advanced levels of training, and examining the long-term stability of CSE.
Future work should incorporate strategies of mastery, modeling, social persuasion and affective arousal to enhance the CSE of SMH clinicians. Although role-play was utilized in the current study, future interventions could include visual imagery or mental practice of performing counseling skills, discussions of CSE, and more explicit positive supervisory feedback. Furthermore, mastery experiences (i.e., engaging in a counseling session that the counselor interprets as successful) in actual or role-play counseling settings have been found to increase CSE (Barnes, 2004); however, this result is contingent on the trainee’s perception of session success (Daniels & Larson, 2001). Future efforts to enhance CSE could strategically test how to structure practice counseling sessions and format feedback in ways that result in mastery experiences for clinicians. Future investigations also may incorporate modeling strategies into counselor training, possibly within a group setting. Structuring modeling practices in a group rather than an individual format may facilitate a fluid group session, moving from viewing a skill set to practicing with other group members and receiving feedback. This scenario could provide counselors with both vicarious and mastery experiences.
The use of verbal persuasion, the third source of efficacy, to enhance CSE also has been evaluated in counseling trainees. Verbal persuasion involves communication of progress in counseling skills, as well as overall strengths and weaknesses (Barnes, 2004). While strength-identifying feedback has been found to increase CSE, identifying skills that need improvement has resulted in decreased CSE; thus, deficit-focused feedback is not recommended as a tactic for developing CSE. Lastly, emotional arousal, otherwise conceptualized as anxiety, is theorized to contribute to level of CSE; in contrast to the aforementioned enhancement mechanisms, increases in counselor anxiety negatively predict CSE (Hiebert, Uhlemann, Marshall, & Lee, 1998). Finally, in addition to clinician self-ratings, future research should investigate CSE’s impact on performance as measured by supervisors, as well as clients. With growing momentum for SMH across the nation, it is imperative that all factors influencing client outcomes and satisfaction with services be evaluated, including CSE.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Aarons, G. A. (2005). Measuring provider attitudes toward evidence-based practice: Consideration of organizational context and individual differences. Child and Adolescent Psychiatric Clinics of North America, 14, 255–271. doi:10.1016/j.chc.2004.04.008
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. doi:10.1037/0033-295X.84.2.191
Bandura, A. (1994). Self-efficacy. In V. S. Ramachandran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71–81). New York, NY: Academic Press.
Barnes, K. L. (2004). Applying self-efficacy theory to counselor training and supervision: A comparison of two approaches. Counselor Education and Supervision, 44, 56–69. doi:10.1002/j.1556-6978.2004.tb01860.x
Beidas, R. S., & Kendall, P. C. (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17, 1–30. doi:10.1111/j.1468-2850.2009.01187.x
Beidas, R. S., Mychailyszyn, M. P., Edmunds, J. M., Khanna, M. S., Downey, M. M., & Kendall, P. C. (2012). Training school mental health providers to deliver cognitive-behavioral therapy. School Mental Health, 4, 197–206. doi:10.1007/s12310-012-9047-0
Berger, T. K. (2013). School counselors’ perceptions practices and preparedness related to issues in mental health (Doctoral dissertation). Retrieved from http://hdl.handle.net/1802/26892
Branch, L. E., & Lichtenberg, J. W. (1987, August). Self-efficacy and career choice. Paper presented at the convention of the American Psychological Association, New York, NY.
Burns, B. J., Costello, E. J., Angold, A., Tweed, D., Stangl, D., Farmer, E. M., & Erkanli, A. (1995). Children’s mental health service use across service sectors. Health Affairs, 14, 147–159. doi:10.1377/hlthaff.14.3.147
Chorpita, B. F., Becker, K. D., & Daleiden, E. L. (2007). Understanding the common elements of evidence-based practice: Misconceptions and clinical examples. Journal of the American Academy of Child and Adolescent Psychiatry, 46, 647–652. doi:10.1097/chi.0b013e318033ff71
Chorpita, B. F., & Daleiden, E. L. (2009). CAMHD biennial report: Effective psychosocial interventions for youth with behavioral and emotional needs. Honolulu, HI: Child and Adolescent Mental Health Division, Hawaii Department of Health.
Cohen, P., Cohen, J., Aiken, L. S., & West, S. G. (1999). The problem of units and the circumstances for POMP. Multivariate Behavioral Research, 34, 315–346. doi:10.1207/S15327906MBR3403_2
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum.
Conwill, W. L. (2003). Consultation and collaboration: An action research model for the full-service school. Consulting Psychology Journal: Practice and Research, 55, 239–248. doi:10.1037/1061-4087.55.4.239
Curry, J. F., & Reinecke, M. A. (2003). Modular therapy for adolescents with major depression. In M. A. Reinecke, F. M. Dattilio, & A. Freeman (Eds.), Cognitive therapy with children and adolescents (2nd ed., pp. 95–127). New York, NY: Guilford.
Daniels, J. A., & Larson, L. M. (2001). The impact of performance feedback on counseling self-efficacy and counselor anxiety. Counselor Education and Supervision, 41, 120–130. doi:10.1002/j.1556-6978.2001.tb01276.x
Dryfoos, J. G. (1993). Schools as places for health, mental health, and social services. Teachers College Record, 94, 540–567.
Easton, C., Martin, W. E., Jr., & Wilson, S. (2008). Emotional intelligence and implications for counseling self-efficacy: Phase II. Counselor Education and Supervision, 47, 218–232. doi:10.1002/j.1556-6978.2008.tb00053.x
Evans, S. W., Glass-Siegel, M., Frank, A., Van Treuren, R., Lever, N. A., & Weist, M. D. (2003). Overcoming the challenges of funding school mental health programs. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.), Handbook of school mental health: Advancing practice and research (pp. 73–86). New York, NY: Kluwer Academic/Plenum.
Evans, S. W., & Weist, M. D. (2004). Implementing empirically supported treatments in the schools: What are we asking? Clinical Child and Family Psychology Review, 7, 263–267. doi:10.1007/s10567-004-6090-0
Farmer, E. M., Burns, B. J., Phillips, S. D., Angold, A., & Costello, E. J. (2003). Pathways into and through mental health services for children and adolescents. Psychiatric Services, 54, 60–66. doi:10.1176/appi.ps.54.1.60
Gibson, S., & Dembo, M. H. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569–582. doi:10.1037/0022-0663.76.4.569
Hiebert, B., Uhlemann, M. R., Marshall, A., & Lee, D. Y. (1998). The relationship between self-talk, anxiety, and counselling skill. Canadian Journal of Counselling and Psychotherapy, 32, 163–171.
Hoagwood, K. E. (2005). Family-based services in children’s mental health: A research review and synthesis. Journal of Child Psychology and Psychiatry, 46, 690–713. doi:10.1111/j.1469-7610.2005.01451.x
Iannelli, R. J. (2000). A structural equation modeling examination of the relationship between counseling self-efficacy, counseling outcome expectations, and counselor performance. (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database (9988728).
Johnson, E., Baker, S. B., Kopala, M., Kiselica, M. S., & Thompson, E. C., III (1989). Counseling self-efficacy and counseling competence in prepracticum training. Counselor Education and Supervision, 28, 205–218. doi:10.1002/j.1556-6978.1989.tb01109.x
Judge, T. A., Jackson, C. L., Shaw, J. C., Scott, B. A., & Rich, B. L. (2007). Self-efficacy and work-related performance: The integral role of individual differences. Journal of Applied Psychology, 92, 107–127. doi:10.1037/0021-9010.92.1.107
Kataoka, S. H., Zhang, L., & Wells, K. B. (2002). Unmet need for mental health care among U.S. children: Variation by ethnicity and insurance status. American Journal of Psychiatry, 159, 1548–1555. doi:10.1176/appi.ajp.159.9.1548
Kozina, K., Grabovari, N., De Stefano, J., & Drapeau, M. (2010). Measuring changes in counselor self-efficacy: Further validation and implications for training and supervision. The Clinical Supervisor, 29, 117–127. doi:10.1080/07325223.2010.517483
Kronick, R. F. (Ed.). (2000). Human services and the full service school: The need for collaboration. Springfield, IL: Thomas.
Langley, A. K., Nadeem, E., Kataoka, S. H., Stein, B. D., & Jaycox, L. H. (2010). Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health, 2, 105–113. doi:10.1007/s12310-010-9038-1
Larson, L. M., Cardwell, T. R., & Majors, M. S. (1996, August). Counselor burnout investigated in the context of social cognitive theory. Paper presented at the meeting of the American Psychological Association, Toronto, Canada.
Larson, L. M., Clark, M. P., Wesley, L. H., Koraleski, S. F., Daniels, J. A., & Smith, P. L. (1999). Video versus role plays to increase counseling self-efficacy in prepractica trainees. Counselor Education and Supervision, 38, 237–248. doi:10.1002/j.1556-6978.1999.tb00574.x
Larson, L. M., & Daniels, J. A. (1998). Review of the counseling self-efficacy literature. The Counseling Psychologist, 26, 179–218. doi:10.1177/0011000098262001
Larson, L. M., Daniels, J. A., Koraleski, S. F., Peterson, M. M., Henderson, L. A., Kwan, K. L., & Wennstedt, L. W. (1993, June). Describing changes in counseling self-efficacy during practicum. Poster presented at the meeting of the American Association of Applied and Preventive Psychology, Chicago, IL.
Larson, L. M., Suzuki, L. A., Gillespie, K. N., Potenza, M. T., Bechtel, M. A., & Toulouse, A. L. (1992). Development and validation of the counseling self-estimate inventory. Journal of Counseling Psychology, 39, 105–120. doi:10.1037/0022-0167.39.1.105
Leaf, P. J., Schultz, D., Kiser, L. J., & Pruitt, D. B. (2003). School mental health in systems of care. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.), Handbook of school mental health programs: Advancing practice and research (pp. 239–256). New York, NY: Kluwer Academic/Plenum.
Lewis, M. F., Truscott, S. D., & Volker, M. A. (2008). Demographics and professional practices of school psychologists: A comparison of NASP members and non-NASP school psychologists by telephone survey. Psychology in the Schools, 45, 467–482. doi:10.1002/pits.20317
Melchert, T. P., Hays, V. L., Wiljanen, L. M., & Kolocek, A. K. (1996). Testing models of counselor development with a measure of counseling self-efficacy. Journal of Counseling & Development, 74, 640–644. doi:10.1002/j.1556-6676.1996.tb02304.x
Mellin, E. A. (2009). Responding to the crisis in children’s mental health: Potential roles for the counseling profession. Journal of Counseling & Development, 87, 501–506. doi:10.1002/j.1556-6678.2009.tb00136.x
Munson, W. W., Stadulis, R. E., & Munson, D. G. (1986). Enhancing competence and self-efficacy of potential therapeutic recreators in decision-making counseling. Therapeutic Recreation Journal, 20(4), 85–93.
Munson, W. W., Zoerink, D. A., & Stadulis, R. E. (1986). Training potential therapeutic recreators for self-efficacy and competence in interpersonal skills. Therapeutic Recreation Journal, 20, 53–62.
Nabors, L. A., Reynolds, M. W., & Weist, M. D. (2000). Qualitative evaluation of a high school mental health program. Journal of Youth and Adolescence, 29, 1–13.
Powers, J. D., Edwards, J. D., Blackman, K. F., & Wegmann, K. M. (2013). Key elements of a successful multi-system collaboration for school-based mental health: In-depth interviews with district and agency administrators. The Urban Review, 45, 651–670. doi:10.1007/s11256-013-0239-4
President’s New Freedom Commission on Mental Health. (2003). Achieving the promise: Transforming mental health care in America. Final report (SMA Publication No. 03-3832). Rockville, MD: Author.
Ramo, D. E., Prochaska, J. J., & Myers, M. G. (2010). Intentions to quit smoking among youth in substance abuse treatment. Drug and Alcohol Dependence, 106, 48–51. doi:10.1016/j.drugalcdep.2009.07.004
Rones, M., & Hoagwood, K. (2000). School-based mental health services: A research review. Clinical Child and Family Psychology Review, 3, 223–241. doi:10.1023/A:1026425104386
Sharpe, P. A., Granner, M. L., Hutto, B. E., Wilcox, S., Peck, L., & Addy, C. L. (2008). Correlates of physical activity among African American and white women. American Journal of Health Behavior, 32, 701–713. doi:10.5555/ajhb.2008.32.6.701
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2003). Training in evidence-based interventions (EBIs): What are school psychology programs teaching? Journal of School Psychology, 41, 467–483. doi:10.1016/j.jsp.2003.07.002
Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124, 240–261. doi:10.1037/0033-2909.124.2.240
Stephan, S. H., Weist, M., Kataoka, S., Adelsheim, S., & Mills, C. (2007). Transformation of children’s mental health services: The role of school mental health. Psychiatric Services, 58, 1330–1338. doi:10.1176/appi.ps.58.10.1330
Stephan, S., Westin, A., Lever, N., Medoff, D., Youngstrom, E., & Weist, M. (2012). Do school-based clinicians’ knowledge and use of common elements correlate with better treatment quality? School Mental Health, 4, 170–180. doi:10.1007/s12310-012-9079-8
Suldo, S. M., Friedrich, A., & Michalowski, J. (2010). Personal and systems-level factors that limit and facilitate school psychologists’ involvement in school-based mental health services. Psychology in the Schools, 47, 354–373. doi:10.1002/pits.20475
Sutton, J. M., Jr., & Fall, M. (1995). The relationship of school climate factors to counselor self-efficacy. Journal of Counseling & Development, 73, 331–336. doi:10.1002/j.1556-6676.1995.tb01759.x
Tang, M., Addison, K. D., LaSure-Bryant, D., Norman, R., O’Connell, W., & Stewart-Sicking, J. A. (2004). Factors that influence self-efficacy of counseling students: An exploratory study. Counselor Education and Supervision, 44, 70–80. doi:10.1002/j.1556-6978.2004.tb01861.x
Urbani, S., Smith, M. R., Maddux, C. D., Smaby, M. H., Torres-Rivera, E., & Crews, J. (2002). Skills-based training and counseling self-efficacy. Counselor Education and Supervision, 42, 92–106. doi:10.1002/j.1556-6978.2002.tb01802.x
Walrath, C. M., Bruns, E. J., Anderson, K. L., Glass-Siegal, M., & Weist, M. D. (2004). Understanding expanded school mental health services in Baltimore city. Behavior Modification, 28, 472–490. doi:10.1177/0145445503259501
Watson, J. C. (2012). Online learning and the development of counseling self-efficacy beliefs. The Professional Counselor, 2, 143–151.
Weist, M. D., Ambrose, M. G., & Lewis, C. P. (2006). Expanded school mental health: A collaborative community-school example. Children & Schools, 28, 45–50. doi:10.1093/cs/28.1.45
Weist, M. D., Evans, S. W., & Lever, N. A. (Eds.). (2003). Handbook of school mental health: Advancing practice and research. New York, NY: Kluwer Academic/Plenum.
Weist, M. D., Lever, N. A., Stephan, S. H., Anthony, L. G., Moore, E. A., & Harrison, B. R. (2006, February). School mental health quality assessment and improvement: Preliminary findings from an experimental study. Paper presented at the meeting of A System of Care for Children’s Mental Health: Expanding the Research Base, Tampa, FL.
Weist, M. D., Sander, M. A., Walrath, C., Link, B., Nabors, L., Adelsheim, S., . . . Carrillo, K. (2005). Developing principles for best practice in expanded school mental health. Journal of Youth and Adolescence, 34, 7–13. doi:10.1007/s10964-005-1331-1
Weist, M., Lever, N., Stephan, S., Youngstrom, E., Moore, E., Harrison, B., . . . Stiegler, K. (2009). Formative evaluation of a framework for high quality, evidence-based services in school mental health. School Mental Health, 1, 196–211. doi:10.1007/s12310-009-9018-5
Weist, M. D., Youngstrom, E. A., Stephan, S., Lever, N., Fowler, J., Taylor, L., . . . Hoagwood, K. (2014). Challenges and ideas from a research program on high-quality, evidence-based practice in school mental health. Journal of Clinical Child & Adolescent Psychology, 43, 244–255. doi:10.1080/15374416.2013.833097
Zeldin, A. L., Britner, S. L., & Pajares, F. (2008). A comparative study of the self-efficacy beliefs of successful men and women in mathematics, science, and technology careers. Journal of Research in Science Teaching, 45, 1036–1058. doi:10.1002/tea.20195
Bryn E. Schiele is a doctoral student at the University of South Carolina. Mark D. Weist is a professor at the University of South Carolina. Eric A. Youngstrom is a professor at the University of North Carolina at Chapel Hill. Sharon H. Stephan and Nancy A. Lever are associate professors at the University of Maryland. Correspondence can be addressed to Bryn E. Schiele, Department of Psychology, Barnwell College, University of South Carolina, Columbia, SC 29208, schiele@email.sc.edu.