Student Selection, Development, and Retention: A Commentary on Supporting Student Success in Distance Counselor Education

Savitri Dixon-Saxon, Matthew R. Buckley


This article reviews relevant research that provides context for a commentary by two long-time distance counselor educators and supervisors with over 35 years of combined professional experience. The authors explore factors that support successful outcomes for graduate students in distance counselor education programs, including how students are selected, supported in their development, and retained in the program. Discussion targets how distance learning promotes open access for students who historically have been marginalized, who live in rural areas, and who have not had the same access to educational opportunities. We focus on the roles and responsibilities of institutional and program leadership and program faculty in the areas of building and sustaining a learning community, faculty engagement in and out of the classroom, and retention and gatekeeping of students. Finally, we discuss considerations for building and sustaining credibility within the university culture, supporting the specialized needs of a CACREP-accredited program, and managing the student–program relationship.


Keywords: student selection, student development, student retention, distance education, counselor education



Distance counselor education has evolved from a place of skepticism to an accepted and legitimate method of training master’s- and doctoral-level counselors and counselor educators and supervisors. Snow et al. (2018) noted that “Changing the minds of skeptical colleagues is challenging but naturally subject to improvement over time as online learning increases, matures, and becomes integrated into the fabric of counselor education” (p. 141). A foundational driver in this evolution has been the necessity for program stakeholders to be creative and innovative in using distance technology to achieve results similar to, and sometimes better than, those of traditional, residence-based programs. In this article, we will address characteristics of students in distance counselor education programs, their specific needs, the concept of andragogy and adult learners, considerations for selecting and retaining distance learning students, the importance of supporting the development of digital competence, and orienting students to the distance program. Additionally, we will discuss the roles and responsibilities of institutional and program leadership and program faculty in three key areas related to optimal student development and program efficacy: community building, faculty presence and engagement in and out of the classroom, and student retention and gatekeeping. Finally, we raise considerations in building and sustaining credibility within the university culture, supporting the specialized needs of a program accredited by the Council for the Accreditation of Counseling and Related Educational Programs (CACREP), and managing the student–program relationship (Urofsky, 2013). In this article, we use the research literature on distance counselor education to support insights we have gained over 35 years of combined experience teaching and administering online counselor education at a large for-profit institution.
To avoid confusion, throughout this article we will be using the term distance counselor education as encompassing online learning, virtual learning, online counselor education, or other terms denoting distance learning in counselor education.


The thought of training counselors using distance education has stimulated incredulity in many counselor educators because of the nature of counselor education (Snow et al., 2018). The underlying concern was that students trained in distance education programs could not be adequately prepared because of the high-touch, interpersonal nature of counselor preparation, in which students encountered faculty and supervisors in traditional face-to-face settings. For those venturing into this new frontier, the challenge was to create an effective combination of academic and experiential learning that would provide students with the appropriate foundation for practice and ensure sufficient opportunities to observe and evaluate skills development and comportment. One outcome of distance counselor education was the realization that offering students a more flexible higher education format was one of the best vehicles for increasing opportunity and access (Carlsen et al., 2016). Over the years, we have recognized that facilitating distance learning has been one of the counseling profession’s greatest opportunities to create a more diverse workforce of counselors equipped to provide services in a myriad of traditionally underserved communities, strengthen and support counselors using a variety of technological tools in their work, and enhance students’ exposure to diversity, thereby creating a counseling workforce better able to practice cultural humility (Fisher-Borne et al., 2015; Shaw, 2016). This enhanced cultural competence happens in part because students engage with a widely diverse group of colleagues and faculty who represent various regions of the United States and the world and whose backgrounds span socioeconomic, sociocultural, ethnic, spiritual, and religious domains among learners, practitioners, and clients.
Essentially, we have recognized that distance education benefits both student and educator, consumer and provider, community and profession.


There have been significant advancements in best practices regarding student selection, development, and retention for distance counselor education. These advancements and modifications, however, need to align with the expectations and guidance of the 2014 ACA Code of Ethics (American Counseling Association [ACA], 2014) and the accreditation standards of CACREP, which also changed to accommodate distance counselor education preparation programs. Many of the best practices for student selection, development, and retention in distance education emerged from what counselor educators gleaned from traditional educational environments. In addition, curricular activities have evolved with a healthy respect for the interpersonal nature of educating counselors, employing technologies that accomplish the same objectives achieved in traditional programs, even though the activities used to accomplish those objectives are distinct. We have found that developing best practices for selection, development, and retention of counselor education students at a distance has resulted from working with and observing students and responding to their unique needs while remaining mindful of where we, as a profession, have “been.” Additionally, engaging in continuous dialogue with program stakeholders and using essential assessment data has helped us become better at meeting students’ needs in a distance education environment. An important aspect of developing best practices is understanding who our students are and what specialized needs they bring to their graduate work when enrolling in a distance counselor education program.


Understanding Our Students in Distance Counselor Education

The first generation of students who pursued distance counselor education were mostly older students, women, people with disabilities, working adults, and students who were more racially and ethnically diverse (Smith, 2014). Although those distinctions are not as clear now as they were a decade ago (Ortagus, 2017), responding to the needs of early distance education students informed counselor educators in creating a model of education that met these students’ educational and developmental needs. Programs committed to facilitating student access and inclusion discovered the need to adjust outdated thinking from traditional criteria as the basis for selection and admission into graduate counseling preparation programs (Bryant et al., 2013) to broaden access. One area of focus essential to program success was looking carefully at the needs of non-traditional and minority students.


Choy (2002) defined non-traditional students as students who are enrolled part-time, are financially independent, have dependents other than a spouse or partner, or are single parents. In addition, we know that non-traditional students are likely to have delayed enrolling in higher education, work at least 35 hours a week, and be over the age of 25. These circumstances contribute to non-traditional students being much more career-decided than traditional students, and we find that these students are very disciplined, with non-traditional female students having higher grade point averages than their peers (Bushey-McNeil et al., 2014). However, we also know that these students have challenges. For example, non-traditional students are more likely to have a history of academic failure, which may undermine confidence in their ability to succeed. They also have significant time constraints and family responsibilities (Grabowski et al., 2016). We know that although students in distance education succeed overall at a comparable rate to students in traditional residential institutions, students from underrepresented groups do not perform as well in distance education (Bushey-McNeil et al., 2014; Minichiello, 2016). While there have been some changes in the demographics of distance education students in higher education, with an increasing number of traditional student consumers of online education (Clinefelter & Aslanian, 2016), the majority of students in distance education counseling programs are still non-traditional students, precipitating the need for admissions policies that may not mirror traditional graduate admissions practices but allow for consideration of work and service activities in the process. The importance of understanding the demographics of distance counselor education students is in being responsive to their needs on a situational, institutional, and dispositional level.


Responding to the Needs of Distance Learning Students

Effectively engaging distance learning students and creating learning experiences responsive to their specific needs requires understanding that factors impacting success are situational, institutional, and dispositional (Bushey-McNeil et al., 2014). As mentioned earlier, at any one point in the student’s academic career, a non-traditional student can occupy the role of parent, partner, employee, caregiver, or another significant and time-consuming role, which constitutes a situational factor (Bushey-McNeil et al., 2014). These competing responsibilities have a significant impact on student success (Grabowski et al., 2016).


There also are institutional considerations that impact a student’s success. Institutional considerations include programmatic policies and practices, limited course offerings or offerings that are only available during the day, lack of childcare, and lack of financial assistance (Bushey-McNeil et al., 2014). Students in brick-and-mortar environments often feel that they are not receiving the support they need from their educational institution (Grabowski et al., 2016; Kampfe et al., 2006). The distance learning environment certainly makes childcare, work responsibilities, and inflexible schedules less of an obstacle to pursuing higher education. Finally, there are dispositional concerns related to the limits that non-traditional students place on themselves based on their perceptions of their ability to succeed and their lack of self-confidence (Bushey-McNeil et al., 2014). Institutional and program leadership and program faculty must be sensitive to what these students bring to their educational experience and respond productively to these concerns by providing the kind of flexibility necessary to help them develop the skills and professional dispositions needed for professional practice. This support also requires programs to be alert to the skills needed to be successful in a distance learning environment, especially the andragogical elements within the curriculum.


Andragogy and the Distance Learner

Students in distance learning programs need flexibility, hands-on laboratory experiences, in-depth orientation to technology, greater access to instructors, competency assessment and remediation designed to refresh skills and knowledge, and opportunities for self-reflection and support (Minichiello, 2016) in order to be successful. These needs are aligned with what we understand about the learning principle called andragogy, which is “the art and science of helping adults learn” (Teaching Excellence in Adult Literacy Center, 2011, p. 1). According to Knowles (1973), adults learn best in situations that allow them to apply information and problem-solving techniques to experiences and situations that are relevant to their own lives. In addition, adults look for opportunities to immediately apply newly acquired knowledge (Yarbrough, 2018).


Consistent with this need, instructors in an andragogical learning environment see the learner’s experience as valuable and are willing, in the process of acting as subject matter experts, to allow the learner to guide and customize the learning process (Palmer, 2007; Salazar-Márquez, 2017). Adult learning theory should be the foundation of the online learning experience, and the online learning environment should reflect a partnership between the instructor, as subject matter expert and facilitator, and the adult learner, who is an expert on their own life and a leader in the learning experience (Clardy, 2005). The teacher as facilitator helps the learner apply knowledge and skills to situations relevant to the learner’s experiences and evaluates the application of that newly acquired knowledge. In distance counselor education, faculty members also enact the roles of supervisor, mentor, and gatekeeper, which adds to the complex nature of orienting students to what these roles mean and how they are related to the teaching role. Faculty members also need to consider how they enact these roles throughout the learning process in meeting the needs of distance students.


In distance counselor education programs, program faculty and administrators have discovered that student success is rooted in providing students with support throughout the program, finding ways to engage them and giving them the opportunity to benefit from their faculty and peers’ experiences and expertise, getting them connected to university support services early, and, consistent with andragogical learning principles, identifying opportunities to affirm or support them in developing their own sense of self-efficacy and sense of agency (Clardy, 2005). There are significant opportunities to incorporate these elements in the selection, development, and retention activities of the program.


Selecting and Retaining Distance Learning Students

The entire educational process in counselor education centers on offering students experiences, education, and skill development that provide a firm orientation to the counseling profession and its expectations. Each step, from admissions to graduation and even alumni relationships, should be designed to inform students’ understanding of the profession. Although programs demonstrate flexibility in how they meet professional standards in the admissions process, those processes must reflect professional standards such as those described by CACREP (2015, Standards 1.K–L and 6.A.3–4). Supporting counselor education students at a distance begins with the selection or admissions process.


Admissions Policies

Historically, graduate admissions policies have focused on undergraduate grade point average, standardized test scores, personal interviews, and personal statements (Bryant et al., 2013). However, there are criticisms of these practices in that, although they are perceived to be race-neutral and objective, they do not account for the fact that there is differential access to quality pre-college education based on race and socioeconomic status (Park et al., 2019). Traditionally, low-income students and many students of color have been denied access to the most prestigious graduate programs. Many online institutions, both public and private, are employing broad-access admissions practices for their online programs to increase access, opportunity, and fairness (Park et al., 2019).


A broad-access admissions policy differs from an open-access admissions policy. Open admissions typically means there are no requirements for admission beyond having completed the requisite education before entering a program. Broad access generally means that requirements such as grade point average are designed to give potential students the opportunity to participate in the experience, and consideration is given to factors other than academic performance. Broad access provides an opportunity for higher education to people who have traditionally been left out for a variety of reasons, such as the inability to access higher education or less-than-stellar undergraduate performances that have made it difficult to access graduate school. Broad-access policies vary across the many online institutions whose goal is to educate adult learners and to increase access and opportunity for people who have traditionally been excluded from higher education.


Although many online programs do not require standardized tests, such as the Graduate Record Examination or the Miller Analogies Test, and may have a lower undergraduate grade point average requirement than other institutions, a robust process for evaluating a candidate’s readiness for a graduate counseling program is essential. In addition to ensuring that admission decisions are based on the applicant’s career goals, potential success in forming effective counseling relationships, and respect for cultural differences as described by the CACREP standards (CACREP, 2015, Section 1.L), programs also consider the candidate’s professional and community service as an indicator of their aptitude for graduate study. As important as it is to assess students’ readiness for graduate work through their previous academic performance and professional and service activities, programs also need to assess students’ digital readiness or competence for the tasks required in an online program (da Silva & Behar, 2017).


Developing Digital Competence

In the online education environment, it is imperative for students to either have or quickly develop digital competence. Digital competence is essentially the knowledge, attitudes, and skills required to effectively use the instructional technology found in a distance education environment. Students in the online environment have varying degrees of digital competence. Some students in the distance education environment are digital natives and others are digital immigrants (Salazar-Márquez, 2017). Digital natives are those who have always been a part of a highly technological world and are accustomed to accessing information quickly and easily. Their optimal functioning occurs when they are connected and receive immediate gratification. By contrast, digital immigrants are those who are less disposed to technological mastery or have had little exposure to technology; they must learn technology as a second language, perpetually translating it through the frame of their “first language.” For the digital immigrant, the requirements of navigating the course classroom and the university resources and creating assignments that require them to use technology can be very challenging.


Although digital natives can navigate the distance education environment with relative ease, they also can be very critical of the speed and efficiency of online systems. Digital immigrants, on the other hand, must work harder to navigate both the instructional content and the learning platform itself. As one might expect, it is much easier for a digital immigrant to communicate with a digital immigrant and a digital native to communicate with a digital native. But education is not homogeneous, and there are both students and faculty who are natives and immigrants trying to partner with each other for an effective learning experience, which can pose a challenge in developing a productive learning community. Although digital immigrants can provide useful recommendations for improving technology and the learning platforms, we encourage program faculty and administration to focus on creating and maintaining systems that are universally beneficial and can be used easily by both natives and immigrants. If an assessment of digital competence is not part of the admissions process, it should be a part of the enrollment and on-boarding process to ensure that students know how to use the technology required in the program, and should be an ongoing part of the educational experience.


Orientation to the Program

Critical parts of the admissions and retention processes for counselor education students include the full disclosure of what will be expected as students move through the program and activities designed to ensure that students fully understand what they will be able to do with their degree after its completion. The Association for Graduate Enrollment Management Governing Board (2009) indicates that best practices for graduate enrollment management professionals include making sure that students understand the requirements of their degree program early. This is particularly important for students in distance education programs. Distance learning students, who are still largely non-traditional students, must be informed of program expectations early so that they can assess their ability to manage the different program requirements. For many distance education students, one of the greatest challenges is planning time away from work or family for synchronous requirements such as group counseling laboratories, residency experiences, supervision, and field experiences.


Helping Students Plan. In addition to being informed of these requirements, administrators and faculty must make sure that students understand not just the requirements but also the relevance and timing of those requirements. Non-traditional students need to understand how the timing of programmatic activities impacts their development and progression in the program. One of the best ways to retain students throughout a program is to encourage them to plan ahead so that they can appropriately manage their personal responsibilities during the times they are engaged in experiences (e.g., field experience or residency) required for the academic program.


Providing Credentialing Information. Pre-admissions orientation also should include information about the credentialing process. It is quite common for students in distance counselor education programs to reside in different states with varying regulations regarding licensing and credentialing for practice. The pre-admissions process should include sharing as much information as possible about students’ opportunities to practice and their credentialing opportunities, but students also should be informed that the laws and requirements for licensure vary by state and can change during the time the student is enrolled in the academic program. Helping students take responsibility for monitoring licensure and credentialing laws in their state is essential. Finally, the program faculty and administration must ensure that students understand the expectations for student conduct and comportment throughout the program. Students must understand the evaluation process that will occur at specific program milestones. Throughout students’ enrollment, the program should make available information about supports designed for student success.


Faculty, Program Leaders, and University Administrators as Agents of Student Development

As with traditional brick-and-mortar counselor education programs, distance education programs are supported by two sets of institutional personnel. First, they are indirectly supported by a hierarchy of administrators, support staff, and program leadership; second, students are directly supported by program faculty, who often become the primary, student-facing representatives, models, and mentors for both the institution and graduate programs. The challenge for distance counselor education programs is to lessen the impact of physical distance between faculty and students by using distance technology to facilitate meaningful, productive, and collaborative learning experiences as students matriculate through the curriculum, ensuring that students feel fully supported in the process (Benshoff & Gibbons, 2011; Carlisle et al., 2017; Lock & Johnson, 2016; Milman et al., 2015; Sibley & Whitaker, 2015; Suler, 2016; Whitty, 2017). Success in this endeavor requires that institutional administration, program leadership, and faculty create and sustain a shared vision of how to train and support students consistent with institutional values, accreditation standards, best practices, and professional credentialing and licensure board requirements, all of which support student success beyond the graduate degree. We have found that these reciprocal relationships are essential to the process of enacting such a shared vision and, ironically, call upon counselor educators to utilize their counseling and conceptual skills, emotional intelligence, interpersonal expertise, and advocacy to inform and persuade institutional stakeholders in how best to train and prepare master’s- and doctoral-level counselors and counselor educators.
Essential to the process of building and sustaining a successful program is nurturing productive relationships with invested stakeholders, which is within the scope of professional preparation and the experience of counselor educators. Faculty and program leadership are well-advised to perceive themselves as program ambassadors not only to students and other external constituents (e.g., prospective students, colleagues outside of the institution, licensure boards, professional organizations, accrediting bodies, the public), but also to their internal constituents (e.g., university and college administration, colleagues in related disciplines, other essential decision-makers).


As previously noted, numerous factors impact students’ ability to be successful in distance counseling programs, including personal factors related to work and family circumstances; personal history related to success in school and self-efficacy (Kampfe et al., 2006; Wantz et al., 2003); and programmatic factors related to timeliness and efficacy of student support, online course platforms and curriculum development, technological support, and faculty engagement (Wantz et al., 2003). Although educators cannot control or predict students’ personal circumstances, they can control how the program responds to and supports students. The reciprocal relationships between institutional and program leadership and program faculty constitute a foundation upon which to build a successful program. We have introduced the importance of developing a shared vision between these groups and specifically wish to address both institutional and program leadership and program faculty responsibilities in three critical program areas, namely building a community of learners, faculty presence and engagement in and out of the classroom, and student retention and gatekeeping.


Building a Learning Community for Student Development

     Having a sense of community and belonging is essential to students’ success and retention (Berry, 2017). Many students in the online environment report feeling isolated (Berry, 2017) and are challenged to be resourceful, organized, and creative in ways they might not be if they were enrolled in a traditional counselor education program. Time management, developing an intrinsic motivation to self-start, and strategically applying creativity in problem solving often become part of the skillset students develop out of necessity when working in a distance graduate program. These skills often manifest for students within their own version of cyberspace, where they must rely upon themselves to persist in their graduate work. In order to combat the sense of isolation that contributes to student attrition, program faculty and administrators must work together to create a sense of community for students, which is largely accomplished using technology.


     The Role of Course Development, Technology, and Program Leadership in Building a Learning Community. Technology is the primary apparatus that supports distance learning, but like any tool, it needs to be utilized with purpose, intention, and careful planning. As Snow et al. (2018) noted, numerous commercial products have been developed to enhance student learning, including synchronous audio and video platforms (e.g., Zoom, Adobe Connect, Kaltura) and classroom platforms (e.g., Blackboard, Canvas, Udemy) designed to help provide a usable space to house and disseminate the curriculum and support student learning. The key to effective use of these platforms includes developing courses designed for online learning, supporting faculty in course development and maintenance, and using technology to connect with and support the student experience. Although institutional leadership is often enthused about the potential for online learning and the use of technology to support it, faculty reactions appear to be mixed (Kolowich, 2012), and not all counselor education faculty embrace distance education as a legitimate method for training counselors (Snow et al., 2018), even though they may teach in distance programs as both core and adjunct faculty.


Increasingly, distance counselor education programs utilize technology that allows for more synchronous digital interactions between students, their peers, and faculty. To increase student engagement, the use of videoconferencing, webcasts, and telephone conferences is often helpful in the learning process (Higley, 2013). Recognizing that interaction and engagement between students and faculty are significant contributors to student success, faculty and program leadership look for ways in which technology can enhance those opportunities throughout the programs. Students can upload practice videos, experience virtual simulations, and participate in synchronous practice experiences through videoconferences where they directly communicate with faculty and peers. Some universities also have dedicated virtual social spaces for students to connect with each other and engage on a personal level, but invariably these spaces are underutilized after the beginning of an academic term. Students are beginning to create their own social media sites for community building, sharing their experiences of specific courses and instructors and challenges with securing sites for field experience. Although tempting, university officials must guard against the desire to micromanage these experiences in order to manage public perceptions of their programs. Much like the conversations that go on in study groups and campus student centers everywhere, students need spaces to share their sentiments about their experience and benefit from their peers’ experiences. Moreover, many of the students on these sites are very quick to correct erroneous assumptions or counter negative comments with accounts of their own positive experiences. Additionally, unadulterated feedback can be useful for programs in identifying areas for improvement.


     Residential Laboratories. Over the years, there has been an evolution in the perception of counselor educators’ abilities to prepare counselors at a distance. As previously noted, once thought of as a suboptimal way to train counselors, distance learning is now accepted and seen as legitimate (Snow et al., 2018). However, many distance counselor education programs have found that including a residential component in their primarily online programs positively impacts student success, collaboration, engagement, and overall satisfaction, as well as the strength of the learning community. In these residential laboratories, students practice skills in a synchronous environment where they get immediate feedback on their skill development and remediation if needed. They also work with peers without the constraints of those situational concerns referenced earlier, and they engage with their faculty and academic advisors. Students are able to connect with one another meaningfully and close the virtual distance by interacting with each other in person in real time. For distance learners, the opportunity to connect in person with a group of like-minded peers all striving for the same goal benefits them emotionally as well as academically. Most importantly, residential experiences allow faculty and program administrators to observe and conduct a more in-depth assessment of their students. These in-person residencies go a long way in building a sense of community for students (Snow et al., 2018).

     Faculty as Community-Building Facilitators in the Virtual Classroom. As the primary facilitator of the classroom learning experience, the faculty contributes to community building. Faculty community building starts with an internal assessment of personal and shared professional values that drive student connection and enhance learning. Palmer (2007) described faculty developing a subject-centered posture in which both faculty and students become part of a community of learners committed to engaging in “a collective inquiry into the ‘great thing’ [subject of focus]” (p. 128), which serves as the basis for optimal student development. “We know reality only by being in community with it ourselves” (Palmer, 2007, p. 100), which challenges the notion of faculty as the only experts who disseminate knowledge. As noted previously, andragogy promotes the idea that faculty members have a wealth of professional knowledge they may use to stimulate experiences that will impact students’ growth, and that faculty seek to build on what students already bring from their professional and personal life experience. Palmer (2007) noted that “good education is always more process than product” (p. 96) and that learning is sometimes a disruptive process in which students may feel temporarily dissatisfied with unfamiliar ideas, concepts, and processes as their values and biases are challenged. The job of faculty becomes remaining vigilant for opportunities to name this experience, developing a balance between support and challenge that invites students to apply what they learn to their emerging professional and personal selves. Developing this kind of learning community means that faculty members must be willing to be vulnerable in the learning process just as their students are. They should resist seeing students solely as customers in their programs instead of as potential colleagues in the counseling profession.
A careful examination of what counselor educators and supervisors do and the shared values that drive professional identity is essential in developing this kind of community of learners (Coppock, 2012). For faculty, this approach parallels the goal of developing cultural humility, which is a highly sought learning outcome for students (Fisher-Borne et al., 2015; Shaw, 2016).


Faculty members need to consider how they will personalize the virtual classroom and what areas they want to emphasize for their students. For example, forums dedicated to building connections through photographs or short video introductions can enhance the classroom as a safe environment for students to interact. Making these introductions fun and engaging can go a long way toward decreasing the distance students may experience. Depending on the flexibility the program gives faculty to modify the classroom according to their preferences, faculty can create spaces for students to share their ideas and thoughts freely and help students discover how their ideas compare to those of their peers. Students often make only the minimal, requisite connections between their own ideas and those of their peers, but by modeling deeper engagement themselves, faculty can encourage a more meaningful discourse in which students’ expressed ideas are treated as essential.


Additionally, faculty members aid students in becoming responsible community members in the classroom and professional community. The faculty models openness and acceptance of the personhood and individual perspectives of each student by offering encouraging responses that support their perspectives and challenge them to consider other points of view. By immediately attending to students’ expressions of thoughts and ideas that may be contrary to the ACA ethical code or that might alienate other community members, faculty members facilitate a community where all students feel safe and included. Learning how to become professionals in a virtual community becomes an additional skillset that students develop as they engage in distance learning. This direct modeling has powerful implications for the kinds of relationships students establish with colleagues and clients within the work settings they will engage in during their practicum and internship experiences.


Faculty Presence and Engagement as Conduits for Student Development

     It is indisputable that faculty engagement with students in distance counselor education is essential. Students rely on faculty to provide clear steps in a process that requires self-motivation, resourcefulness, creativity, and persistence. An important part of building a productive learning community and promoting the culture of distance learning is helping students not only to engage in the subject (i.e., assignments, learning resources, readings, projects), but also to engage each other in order to maintain the relational quality of face-to-face interactions. We encourage faculty and program leadership to see students as individuals, to foster essential relationships, and to operationalize their caring for students in all their activities (Hall et al., 2010). As Hall et al. (2010) have noted, these activities require that those involved in preparing counselors at a distance remain focused and intentional about what they do when enacting their shared vision.


     The Role of Institutional and Program Leadership in Faculty Engagement. The development and maintenance of online curriculum is central to student development, and careful planning, typically within a curriculum committee, helps maintain a vibrant and responsive curriculum (Brewer & Movahedazarhouligh, 2018). Course development for a distance education program, although vital, can be intimidating to faculty unfamiliar with the process, who may have reservations about the efficacy of distance learning and their own ability to use technology to accomplish course goals. Sibley and Whitaker (2015) noted that institutional administration and program leadership need to respond to faculty resistance with understanding and support. Wantz et al. (2003) assessed program leadership and faculty perceptions of online learning and found that faculty concerns included the efficacy of online distance education; the belief that certain subject areas (i.e., practice and application of counseling skills, ability to accurately assess student mastery) might not be appropriate for a distance model; the cost–benefit balance of the time and effort required to create and maintain an online course; and the need to be compensated for this time and effort. Although this study is over 15 years old, it provides an important touchpoint concerning the perspectives of some faculty who work within residential and online programs.


For programs that rely heavily on faculty to create online curriculum, institutional and program leadership and administration will need to carefully review compensation policies and practices in programs that require faculty to integrate course development into their workload. Snow et al. (2018) verified that some faculty exhibit resistance toward distance learning, specifically faculty who are teaching online courses either as adjuncts for online programs or as a required part of their full-time positions. Sibley and Whitaker (2015) noted that “since faculty participation can neither be mandated nor fabricated, institutions must make online learning attractive, accessible, and valuable to faculty” (para. 23). This starts with online instructional development teams cultivating a deep sense of respect for the expertise the counselor education faculty members possess and working to establish consultative relationships when developing the online curriculum, including helping faculty see what has been done successfully in other courses. Hall et al. (2010) described a philosophy of approaching distance learning from a humanistic framework: “The challenge was not to allow technology to limit or destroy the essence of the individuals involved in the learning process” (pp. 46–47), but for faculty to maintain the relationality with their students consistent with shared professional values that acknowledge counselor preparation as a high-touch (i.e., interpersonal, mentoring, supervising) endeavor.
An important part of the successful deployment and maintenance of distance counselor education programs is in continually nurturing a values-based approach; soliciting buy-in from essential stakeholders; seeing and using technology as a tool and not a barrier to enhance connection and learning; and supporting the development of the curriculum, including scheduled revisions based on systematically collected assessment data (CACREP, 2015).


Understanding how to develop curriculum for counselor preparation programs is an essential point where online instructional development and program faculty meet. For example, according to media richness theory (Whitty, 2017), media-rich learning environments lend themselves best to subject areas that are “more ambiguous and open to interpretation” (p. 94) rather than topics that are clear and unambiguous, such as mathematical or scientific formulas. Media-rich learning is characterized by the following four criteria: the capacity for immediate feedback (i.e., clarity of the material), the capacity to transmit multiple cues (i.e., the ability to develop clear and meaningful consensus), language variety (i.e., being able to convey context to complex concepts and ideas), and the capacity of the medium to have a personal focus (i.e., making the learning personal and relevant to the perspectives and needs of the learner). Sibley and Whitaker (2015) point out that some faculty may see technology (including media) as a barrier between them and students rather than a tool to facilitate increased insight, conceptual understanding, and skill mastery, so supporting faculty in experimenting and adopting ways of interacting with technology is a logical starting place. Institutional and program leadership can help faculty become familiar with and invested in learning platforms through initial and ongoing training. Leadership also can help support faculty directly by determining what parts of the classroom can be personalized and modified (including learning activities and assignments) and which parts must remain constant for accreditation standards and learning outcomes assessment.


Additionally, institutional and program leadership are well-advised to develop processes that monitor faculty activity within the virtual classroom and reinforce expectations of what faculty should do weekly in the classroom (e.g., check into the classroom a minimum of four days per week, respond substantively to 75% of student postings in the discussion forum, review and grade assignments within 7 days, and respond to student inquiries within 48 hours of receiving them) without coming across as micromanaging and punitive. Leadership may certainly achieve compliance, but they cannot demand engagement, which draws on the discretionary time, attention, effort, and energy faculty members devote to the learning endeavor out of their deeply held values and commitment to the shared vision they have for educating students.


We recommend that leadership strive for transparency in how monitoring of classroom activity is accomplished, its intent, and the use of assessment data. Without transparency, leadership takes on the risk of stoking faculty concerns about negative evaluations and ultimately the security of employment. Establishing peer monitoring through periodic course audits within a collegial, developmental, and supportive approach that is non-threatening to faculty will go a long way to sustaining faculty engagement in the classroom. Some larger distance education programs assign course stewards (i.e., a faculty member responsible for a particular course in the curriculum) who act as the first line of contact for faculty who may have questions about aspects of the course or particular assignments, or who might struggle with a student issue, and can support faculty directly through informal peer mentoring. This becomes especially important for adjunct faculty who need assistance in contextualizing the course into the larger program objectives and feeling invested in the success of program students. These kinds of structures and processes will be helpful if institutional and program leadership is committed to communicating regularly with faculty and promoting an environment of support and accountability.


Finally, institutional and program leadership can encourage a culture of openness to peer review and classroom observation that will help faculty improve their techniques in a way that is non-threatening (Palmer, 2007). Developing and scheduling events and activities that foster professional renewal and connection between faculty can help strengthen the value of reflective practice in teaching that is essential throughout a faculty member’s career. Palmer (2007) writes the following about the tendency for faculty to remain “private” about their work in the classroom:


Involvement in a community of [andragogical] discourse is more than a voluntary option for individuals who seek support and opportunities for growth. It is a professional obligation that educational institutions should expect of those who teach—for the privatization of teaching not only keeps individuals from growing in their craft, but fosters institutional incompetence as well. By privatizing teaching, we make it hard for educational institutions to become more adept at fulfilling their mission. (p. 148)


Being able to see one’s teaching style, approach, and interactions through a colleague’s eyes can help faculty make appropriate adjustments and strengthen reflective practice, which is ironically what faculty expect from their students in a distance counseling program. This can model a culture of openness for the entire learning community.


     Faculty Role in Student Engagement. We believe that faculty engagement with students and facilitation of meaningful engagement with the subject matter in the classroom lie at the heart of student success, both within the program and in establishing a foundation for lifelong learning. Diminishing the distance in a distance counselor education program means that faculty members eagerly and meaningfully connect with students, remain open to their feedback about what is or is not working for them in the classroom, and take the time and effort to supply a rationale for particular assignments and activities, including how these learning experiences are relevant to professional growth. The value faculty offer is largely in their ability to make the curriculum come alive and to engage students in seeing the subject matter differently than they might assume. This means that faculty members are challenged to use their time and effort strategically in developing therapeutic stories, analogies, and insights that can be utilized for a variety of professional circumstances, clinical situations, cultural encounters, and ethical dilemmas. Recognizing effort and validating students’ points of view, including being sensitive to the various personal contexts, shaped by life experience, that students bring to their learning, is essential in nurturing faculty–student relationships. In their theory of group development, Bennis and Shepard (1956) held that group members, prior to engaging in productive, emotionally intimate, affirming interactions with peers, first make decisions about the authority in the room, including accepting how the leader models engagement and psychological safety. It is not inconceivable that a similar dynamic occurs within the virtual classroom as students encounter the faculty leader and make decisions about how to approach the classroom, including using their experience as a springboard into how to behave and what to expect.
Student engagement in the classroom is enhanced in three specific areas of faculty engagement: timely, relevant, consistent, and targeted feedback; substantive and relevant responses in discussion forums; and prompt and direct follow-up when necessary with students.


     Timely, Relevant, Consistent, and Targeted Feedback. Feedback is the lifeblood of student development in a counselor preparation program, and students depend on faculty to provide affirming and corrective feedback on numerous levels that is proportional to learning activities and assignments. Proportionality is demonstrated when the faculty aligns feedback with what is most important within the goals and objectives of a course. For example, a common complaint of graduate program adult learners is that faculty members may sometimes become so overly concerned about student adherence to the American Psychological Association (APA) publication style manual that they minimize the content, concepts, insights, and ideas students attempt to convey in their raw and imperfect form. When students encounter this kind of disproportionate feedback, they learn what the faculty member most values and work to meet the implicit expectations, sometimes to the detriment of learning other and perhaps more important concepts related to the subject matter. When this occurs, students may subjugate all other considerations and simply seek to pass the course, while sacrificing learning and a love for the subject matter. The impression also might inadvertently be conveyed that authority ultimately rules, which can reenact the wounds of past academic failures in students who do not view themselves as high performing.


Timely, relevant, consistent, and targeted feedback occurs when faculty members recognize and validate the effort students put into their work; respectfully describe what they see working well within students’ products and performance; provide a developmentally sensitive critique of the identified concern, while being careful not to overwhelm the student with a list of deficits; and offer respectful, corrective alternatives, along with an invitation to meet with the student to clarify anything that might be confusing. Timeliness is best achieved by staying on top of grading and meeting the established time parameters for when assignments will be evaluated and grades returned to the student. Feedback related to counseling or conceptual skills performance (such as in field experience) also includes faculty providing sample language that might be used in demonstrating a particular skill, which can help stimulate students in finding their own voices in how to communicate a particular thing to their clients.


     Substantive and Relevant Responses in Discussion Forums. Discussion forums are often the most lively and engaging areas in a virtual classroom, and in distance counselor education they are often where a significant part of the virtual teaching and learning takes place. Here students articulate their insights and understanding of the subject matter and engage one another and faculty in respectful and honest interaction. Students can perceive online discussions as less threatening, particularly when verbalizing sensitive material, including values-driven points of view (Ancis, 1998), which often emerge in coursework such as ethics, social and cultural foundations, group counseling, and field experience courses. On the other hand, some students, because they perceive themselves as not being physically seen or heard, might exhibit the online disinhibition effect (Suler, 2016), wherein they say things that are controversial or disrespectful based on the belief that being anonymous is the same as being undetectable, or make comments that would be irresponsible in professional communications and would obviously need to be corrected. Often these discussions are asynchronous, and students have the benefit of being able to think clearly about the subject matter; read, observe, and comprehend the learning resources (e.g., course readings and media); and prepare responses to discussion prompts to meet the requirements of the weekly assignment. As students develop a routine within the classroom, they are reinforced in how to respond, including how much time and effort they will expend in developing their responses. Where students default to formulaic responses, faculty members can help them engage with the material more meaningfully through formative and summative feedback.
A much more powerful way to help students engage in the discussion forum is for faculty to model what engaged responses look like and to encourage and invite students to engage more fully in their learning.


Faculty can engage creatively in the discussion forums by embedding YouTube videos, sharing links to TED talks, sharing important and relevant websites, and occasionally sharing humorous memes to help counter the effects of formulaic, routine, and mundane participation. Students can be encouraged to post a short video describing their reactions as a way of lessening the virtual distance and reminding class members of what each other looks like. Often, synchronous meetings occur through interactive video platforms where students are able to hear and see and be heard and seen by others, so encouraging connections with and between students within these learning opportunities can help prepare students to engage with the subject matter more meaningfully (Benshoff & Gibbons, 2011).


A primary benefit of online discussions is that the discussion can also be preserved in an organized fashion for retrieval by students and faculty members (the discussions can be copied and pasted and stored electronically), thus chronicling and capturing the essence of the discussion, reinforcing what students said to their peers (the expression of their own perspectives), highlighting specific and targeted feedback related to the particular topic, and preserving essential references that might be useful for follow-up. Faculty can indirectly assess the efficacy of their responses to determine the degree to which their contributions are adding value or are simply facilitative in getting students to engage in the discussions with each other. This can include the instructor copying and pasting verbatim “chat” in the chat functions of live, synchronous video interactions where students can share insights, suggestions, websites, and other resources for student follow-up and review.


     Prompt and Direct Follow-Up with Students. Perhaps the most effective, and often most time-consuming, manifestation of faculty engagement is following up with students through live chats, phone calls, video interactions (e.g., Zoom, Skype, Adobe Connect technology), or face-to-face meetings in real time for a variety of reasons. Often, students get the message from faculty, “If you need me, please reach out to me,” which translates to email interactions to address logistic concerns in the classroom. Students assume that because they need to be resourceful and proactive in their distance program, they will need to take care of themselves, by themselves, without seeking faculty interaction or intervention. Faculty advising and mentoring in residential programs appears clear-cut; a student can drop into a faculty member’s office and address a concern or have a chat about professional or personal matters. This function may be more nebulous in a distance education environment unless the faculty makes explicit how they will follow up with students and interact with them personally. Faculty can address questions or concerns; engage students in important advising regarding professional, ethical, academic, credentialing, and licensure issues; consult about clients students may encounter (if they are in their field experience); and offer dedicated, focused consultation on these important matters. Helping students feel valued means that faculty give uninterrupted time and resist multitasking, which can sometimes become a default for people who are part of a distance learning community. Faculty can engage students in skills practice and can record these practice sessions for students to retrieve and review as needed.
Skills practice and mastery in distance counselor education has been identified as a central function for faculty in their work with students (Fominykh et al., 2018; Shafer et al., 2004; Trepal et al., 2007) and has been shown to help strengthen students’ self-efficacy beliefs (Watson, 2012). Faculty can also initiate outreach when they feel concern over a student’s performance or a change in classroom behavior. In these ways, the faculty lessens the distance, holds students closer to areas of support, and reassures students that they are cared for in practical ways during their graduate work.


Student Retention and Gatekeeping

     Student retention and gatekeeping functions are foundational to ensuring a broad access policy and maintaining quality control of program graduates. Students who struggle with academic and personal concerns need direct support from program faculty and administration at the times when they feel most challenged (Kampfe et al., 2006). Counselor educators and supervisors are ethically charged as gatekeepers for the counseling profession (ACA, 2014; Bryant et al., 2013; Dougherty et al., 2015; Dufrene & Henderson, 2009; Gaubatz & Vera, 2002; Homrich et al., 2014), and the implementation of gatekeeping is systemic, depending on institutional and program leadership and program faculty for successful execution. Leadership and faculty have separate but related functions in successful gatekeeping and in student retention.


     The identification of students who struggle will almost always be within the oversight of individual faculty members. As noted previously, students can enter a distance counselor education program with academic challenges and with multiple and competing priorities as they balance family, work, and school responsibilities. CACREP (2015) requires that programs make students aware of counseling services available to them in cases where therapeutic help is warranted. Library services, writing center services, student support services, tutoring and mentoring, and disability services are often utilized to help students succeed in their academic pursuits. Academic leadership is charged with developing and maintaining systems, processes, and protocols that are activated when a student needs help and faculty members are essential in helping students access these services when needed. Faculty engagement is intricately tied to the successful utilization of these services, as students will see faculty as their “go-to” person to help sort through tricky issues and develop an action plan. Clear, two-way communication between faculty and academic leadership can assist in refining these processes and services.


     Faculty Roles in Student Retention and Gatekeeping. Students in distress will often revert to actions that are driven by stress and anxiety rather than by what is in their best interests, including moving away from those who can help them sort through challenging situations. As noted previously, faculty engagement helps students feel confident that the faculty cares about them not just as students, but as people. Caring and compassion are operationalized when faculty members are proactive in contacting students when there is a change in classroom performance and available when students reach out for assistance. Although it is tempting in a distance counselor education program to refer students to a particular service or give a phone number or a website address, we have found that students sometimes interpret such a referral as “passing the buck” and feel frustrated, as this pat answer can mirror the typical response they experience in other interactions with the university and program. Meeting students where they are in this context means that the faculty is well enough aware of the services available to talk through what a support contact would look like and what students might expect. This is an important part of developing productive relationships with internal constituents and nurturing contacts within the institution that will help expedite assistance when needed. In this way, faculty credibility is strengthened, and students feel cared for at times when it matters most.


Gatekeeping is a process typically enacted by faculty when there is a concern about student behavior, and it can be applied at different points within students’ progress through their respective programs. Because of the highly personal nature of gatekeeping (i.e., identifying concerns and counseling with a student about his or her personal or professional behavior, values, ethics, and attitudes), some faculty may be reluctant to initiate conversations directly with students and might need additional support from faculty, teams, or committees specially designated to address these student concerns. As previously noted, faculty members need to assess their own professional and personal values in deciding how they will engage students in difficult and courageous conversations regarding their professional development. Also, because of the nature of gatekeeping, the faculty is well-advised to document these student conversations in a follow-up email to the student, copied to other appropriate support people, to ensure that problem identification, response, and associated actions are clear, with identifiable timelines. This will help create the basis for a specific and targeted remediation plan (Dufrene & Henderson, 2009). Just as all students are individuals with specific contexts, all gatekeeping issues are not created equal. Students can present with skill deficits that require remediation, where it is appropriate to assign them to a skills mentor who would help them work through skills challenges. The skills mentor would likely report to the gatekeeping committee on progress and on additional supports if warranted. Students also can present with dispositional concerns that require a different response and intervention. Homrich et al. (2014) developed standards of conduct expected of counselor trainees throughout their programs that can serve as an important foundation for dispositional standards, which can be disseminated to students in orientation meetings and used periodically at key assessment points where dispositional concerns might be present.


It is inaccurate to assume that because some graduate counseling students are already professionals within a mental health setting (e.g., as case managers, psychiatric technicians, or intake representatives), they know how to conduct themselves professionally and understand what constitutes professional behavior (Dougherty et al., 2015; Homrich et al., 2014). Faculty members who are proactive in modeling and talking explicitly about professionalism can influence students to consider their own behavior and make needed adjustments to align with shared professional values, help them become more reflective in their practice (Rosin, 2015), strengthen their resiliency (Osborn, 2004), and develop effective reflective responding skills (Dollarhide et al., 2012). Faculty modeling of professional dispositions, reflective practice, and self-care will help normalize the commitment to the shared values of the profession and mentor students who may struggle to adopt and adjust to the demands of a profession that relies on professionals to commit to and practice ethical values.


     Institutional Support for Gatekeeping. Relationships with the chief legal counsel and the dean of students are important if program administrators and faculty are to execute effectively their role as gatekeepers to the counseling profession. Although program leadership makes the decisions about the evaluation process for students—the remediation plans and dismissal recommendations that relate to comportment, academics, and skill development—decisions to dismiss are usually made in consultation with colleagues from the dean of students’ office and the chief legal counsel.


Deans of Students as Gatekeeping Partners. In an era of increased litigiousness, students increasingly appeal the decisions of program leadership, often to the dean of students (Johnson, 2012). The dean of students supports the overall mission of the university and enforces the rules of the institution, but this person is also responsible for building community and attending to the emotional and physical welfare of students. Counselor educators work closely with the dean of students when students have violated university or program policy and when identifying appropriate responses to conduct and comportment concerns. The relationship between program faculty and administrators and the dean of students is critical to ensuring that appropriate interventions are put in place to protect the individual student, the greater student body, the community, and the profession.


Chief Legal Counsel as Gatekeeping Partner. Equally important is the relationship between the chief legal counsel and the program faculty and administration. The role of general counsel in any organization is to "oversee the legal and compliance function" (McArdle, 2012, para. 2) of the organization. In higher education, this means that counsel also provides oversight of internal compliance with university policies and ensures that the scope of those policies is not interpreted too broadly. In some settings this is very much a risk management role (McArdle, 2012). University lawyers advise on the interpretation and applicability of legal documents such as policy manuals, contracts, and articulation agreements. They also participate in significant dispute mediations and formal dispute resolution (Meloy, 2014).


Counselor educators are mandated to dismiss students who are deemed unfit for the profession, as well as students whose issues of concern cannot be remediated to the degree that they will be able to provide competent services to diverse clients (ACA, 2014). The 2014 ACA Code of Ethics also requires counselor educators to participate in ongoing evaluation of those they supervise and to provide remediation when needed, and it requires program leaders to dismiss from training programs those who are unable to provide competent service (ACA, 2014). CACREP standards require that program faculty and administrators maintain a developmental and systematic assessment process. Administrators should work with legal counsel to ensure that no comportment dismissal is viewed as malicious or punitive. General counsel helps stakeholders ensure that a student's rights have been protected and that the dismissal process is a fair one. The challenge is to protect the university, the student, and the public (McAdams et al., 2007).


Counselor educators should receive guidance on institutional policy prior to implementation. Counseling faculty and administrators can become frustrated when general counsel does not appear to support their goals or their professional requirements. Some of this frustration can be avoided if programs provide general counsel and other administrators with a profile of the program's responsibilities to the profession and to the community it serves through its training. It is important for counselor education administrators and faculty to develop a relationship with general counsel early on, one based on mutual alliance. Although the administration is not obligated to take general counsel's advice in responding to a student situation, it is advisable to consider that guidance very carefully.


Building and Sustaining Credibility Within the University Culture

Most of the discussion around student selection, development, and retention has focused on students, faculty, and the program. However, a program's reputation, its role in the institutional mission, and the program administrators' ability to communicate the program's value proposition are also critical contributors to selection, development, and retention. A full exploration of this idea is beyond the scope of this article, so these ideas are discussed only briefly, with a charge to counselor educators, especially program administrators, to work together to ensure that preparation programs demonstrate innovation, flexibility, and responsiveness. Doing so makes the institutional and community value of these programs clear and helps programs secure sufficient resources to effectively educate, evaluate, and develop students.


One of the greatest challenges program administrators face in higher education is competing for limited resources (Pucciarelli & Kaplan, 2016). In addition, program administrators are continually challenged to demonstrate the relevance of their programs. As they plan for sustainability, program administrators must examine the changing needs of the profession for which they are responsible, the mission of the institution, the program mission, the preparation and needs of their students, the needs of the community they are serving, the availability of resources, the regulatory environment impacting professional practice, and the needs of the faculty and administrators providing oversight to the program. Balancing the needs of so many constituents is a challenging proposition, but it is made easier when a program has clear guiding principles and a well-articulated mission and vision. Although not static, the mission and vision communicate the program's aspirations and intentions to everyone and give the program a clear identity in the university community. Using the program's mission and vision as a reference point informs all decision-making, particularly decisions about how learners should be educated and which resources are a priority.


Managing the Student–Program Relationship

The changing dynamics of the student–program relationship do not rest entirely with student attitudes. Many university operations and recruitment strategies, designed to attract the numbers and kinds of students institutions desire, closely resemble strategies used in business (Hanover Research, 2014). Online programs have been particularly inclined to employ creative marketing strategies to convince potential learners to shift their paradigm from brick-and-mortar institutions as the only source of higher education to online institutions ("Higher Ed Marketing Secrets," 2013). The unintended consequence is that this approach often fosters a customer–business relationship that can, at times, be counterproductive to the student–faculty/supervisee–supervisor relationship. When faced with critical evaluations of their professional comportment and skill development, students oftentimes interject commentary about the price of the degree and their expectation that they will complete their academic programs primarily because of the money they have invested.


We have found that a racially charged climate sometimes exacerbates this dynamic: many students, especially those who have been traditionally marginalized, are suspicious of faculty members' motives for identifying student development needs. This is a particular challenge for online programs, where for much of the academic program students have only a one-dimensional (i.e., the faculty member's written word) understanding of their faculty and administrators, and this one-dimensional perception makes developing relationships with these students more challenging. Focusing on the relationship with students and being relationally oriented is therefore essential. In their efforts to attract, develop, and retain students, faculty and administrators should focus on relationship building at every opportunity, thereby creating an academic environment in which students are clear about the expectations of the academic and professional practice community and understand the range of consequences for behavior outside those expectations.



Conclusion

Distance counselor education programs and counselor educators pay as much attention to student selection, development, and retention as traditional programs do, often within a context of general skepticism about whether counseling students can be adequately trained at a distance. However, as distance counselor educators, we are committed to educating counselors and counselor educators in this arena because of our commitment to access and opportunity for students and the communities they serve. We believe that, in all the essential ways, online education is the true equalizer for non-traditional and traditionally marginalized students, and that broad-access admissions policies provide a vehicle to increase access. Being successful in this arena requires a commitment from program faculty, program administrators, and other university administrators. It also requires us to understand the needs of the online student population and to commit to systematic ways of developing the adult learner while acknowledging and employing the individual student's experiences as assets to the developmental process. Although we may employ technology to a greater degree than our colleagues in traditional education settings, we place the professional standards of quality and ethical practice, community and relationship building, and student academic and skill development at the foundation of all activities related to selection, development, and retention.


Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.




References

American Counseling Association. (2014). 2014 ACA code of ethics.

Ancis, J. R. (1998). Cultural competency training at a distance: Challenges and strategies. Journal of Counseling & Development, 76, 134–143.

The Association for Graduate Enrollment Management Governing Board. (2009). Best practices for graduate enrollment management professionals.

Bennis, W. G., & Shepard, H. A. (1956). A theory of group development. Human Relations, 9, 415–437.

Benshoff, J. M., & Gibbons, M. M. (2011). Bringing life to e-learning: Incorporating a synchronous approach to online teaching in counselor education. The Professional Counselor, 1, 21–28.

Berry, S. (2017). Building community in online doctoral classrooms: Instructor practices that support community. Online Learning, 21(2), 42–63.

Brewer, R., & Movahedazarhouligh, S. (2018). Successful stories and conflicts: A literature review on the effectiveness of flipped learning in higher education. Journal of Computer Assisted Learning, 1–8.

Bryant, J. K., Druyos, M., & Strabavy, D. (2013). Gatekeeping in counselor education programs: An examination of current trends. Ideas and Research You Can Use: VISTAS 2013.

Bushey-McNeil, J., Ohland, M. W., & Long, R. A. (2014, June 15–18). Nontraditional student access and success in engineering (Paper ID #9164) [Paper presentation]. 121st ASEE Annual Conference & Exposition, Indianapolis, IN, United States.

Carlisle, R. M., Hays, D. G., Pribesh, S. L., & Wood, C. T. (2017). Educational technology and distance supervision in counselor education. Counselor Education and Supervision, 56, 33–49.

Carlsen, A., Holmberg, C., Neghina, C., & Owusu-Boampong, A. (2016). Closing the gap: Opportunities for distance education to benefit adult learners in higher education. UNESCO Institute for Lifelong Learning.

Choy, S. (2002). Nontraditional undergraduates. U.S. Department of Education, National Center for Education Statistics.

Clardy, A. (2005). Andragogy: Adult learning and education at its best? [Unpublished manuscript]. Towson University, Towson, MD.

Clinefelter, D. L., & Aslanian, C. B. (2016). Online college students 2016: Comprehensive data on demands and preferences. The Learning House, Inc.

Coppock, T. E. (2012, March 1). A closer look at developing counselor identity. Counseling Today. American Counseling Association.

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards.

da Silva, K. K. A., & Behar, P. A. (2017). Digital competence model of distance learning students. Proceedings of the IADIS International Conference on Cognition & Exploratory Learning in the Digital Age, 109–116.

Dollarhide, C. T., Shavers, M. C., Baker, C. A., Dagg, D. R., & Taylor, D. T. (2012). Conditions that create therapeutic connection: A phenomenological study. Counseling and Values, 57, 147–161.

Dougherty, A. E., Haddock, L. S., & Coker, J. K. (2015). Student development and remediation processes for counselors in training in a virtual environment. Ideas and Research You Can Use: VISTAS 2015.

Dufrene, R. L., & Henderson, K. L. (2009). A framework for remediation plans for counseling trainees. In G. R. Walz, J. C. Bleuer, & R. K. Yep (Eds.), Compelling counseling interventions: VISTAS 2009 (pp. 149–159). American Counseling Association.

Fisher-Borne, M., Cain, J. M., & Martin, S. L. (2015). From mastery to accountability: Cultural humility as an alternative to cultural competence. Social Work Education, 34, 165–181.

Fominykh, M., Leong, P., & Cartwright, B. (2018). Role-playing and experiential learning in a professional counseling distance course. Journal of Interactive Learning Research, 29, 169–188.

Gaubatz, M. D., & Vera, E. M. (2002). Do formalized gatekeeping procedures increase programs’ follow-up with deficient trainees? Counselor Education and Supervision, 41, 294–305.

Grabowski, C., Rush, M., Ragen, K., Fayard, V., & Watkins-Lewis, K. (2016). Today’s non-traditional student: Challenges to academic success and degree completion. Inquiries Journal, 8(3), 1–2.

Hall, B. S., Nielsen, R. C., Nelson, J. R., & Buchholz, C. E. (2010). A humanistic framework for distance education. The Journal of Humanistic Counseling, Education and Development, 49, 45–57.

Hanover Research. (2014, March). Trends in higher education marketing, recruitment, and technology. Hanover Research Academy Administration and Practice.

Higley, M. (2013, October 15). Benefits of synchronous and asynchronous e-learning. eLearning Industry.

Homrich, A. M., DeLorenzi, L. D., Bloom, Z. D., & Godbee, B. (2014). Making the case for standards of conduct in clinical training. Counselor Education and Supervision, 53, 126–144.

Johnson, B. (2012). Being the dean of students in challenging times. Independent School, 71(4), 76–81.

Kampfe, C. M., Smith, S. M., Manyibe, E. O., Moore, S. F., Sales, A. P., & McAllan, L. (2006). Stressors experienced by interns enrolled in a master’s rehabilitation counselor program using a distance education model. Rehabilitation Education, 20, 201–212.

Knowles, M. (1973). The adult learner: A neglected species (ED084368). ERIC.

Kolowich, S. (2012, June 21). Conflicted: Faculty and online education, 2012. Inside Higher Ed.

Lehfeldt, E. A. (2018, October 3). What is your philosophy of higher education? Inside Higher Ed.

Lock, J., & Johnson, C. (2016). From assumptions to practice: Creating and supporting robust online collaborative learning. International Journal on E-Learning, 16, 47–66.

McAdams, C. R., III, Foster, V. A., & Ward, T. J. (2007). Remediation and dismissal policies in counselor education: Lessons learned from a challenge in federal court. Counselor Education and Supervision, 46, 212–229.

McArdle, E. (2012, July 1). In the driver’s seat: The changing role of the general counsel. Harvard Law Bulletin.

Meloy, A. (2014). Using your general counsel effectively. The Presidency, 17(2), 23–24.

Milman, N. B., Posey, L., Pintz, C., Wright, K., & Zhou, P. (2015). Online master’s students’ perceptions of institutional supports and resources: Initial survey results. Online Learning, 19(4), 45–66.

Minichiello, A. L. (2016). Towards alternative pathways: Nontraditional student success in a distance-delivered, undergraduate engineering transfer program [Doctoral dissertation, Utah State University]. Digital Commons @USU.

(2013, February 11). Higher ed marketing secrets: The ingenious business of recruiting online students.

Ortagus, J. C. (2017). From the periphery to prominence: An examination of the changing profile of online students in American higher education. The Internet and Higher Education, 32, 47–57.

Osborn, C. J. (2004). Seven salutary suggestions for counselor stamina. Journal of Counseling & Development, 82, 319–328.

Palmer, P. J. (2007). The courage to teach: Exploring the inner landscape of a teacher’s life. Jossey-Bass.

Park, J. J., Yano, C. R., & Foley, N. F. (2019, March 27). What makes a fair college admissions process? JSTOR Daily.

Pucciarelli, F., & Kaplan, A. (2016). Competition and strategy in higher education: Managing complexity and uncertainty. Business Horizons, 59, 311–320.

Rosin, J. (2015). The necessity of counselor individuation for fostering reflective practice. Journal of Counseling & Development, 93, 88–95.

Salazar-Márquez, R. (2017). Digital immigrants in distance education. International Review of Research in Open and Distributed Learning, 18, 231–242.

Shafer, M. S., Rhode, R., & Chong, J. (2004). Using distance education to promote the transfer of motivational interviewing skills among behavioral health professionals. Journal of Substance Abuse Treatment, 26, 141–148.

Shaw, S. (2016, December 27). Practicing cultural humility. Counseling Today. American Counseling Association.

Sibley, K., & Whitaker, R. (2015, March 16). Engaging faculty in online education. Educause Review.

Smith, D. F. (2014, May 22). Who is the average online college student? [Infographic]. EdTech: Focus on Higher Education.

Snow, W. H., Lamar, M. R., Hinkle, J. S., & Speciale, M. (2018). Current practices in online counselor education. The Professional Counselor, 8, 131–145.

Suler, J. R. (2016). Psychology of the digital age: Humans become electric. Cambridge University Press.

Teaching Excellence in Adult Literacy Center. (2011). TEAL Center Fact Sheet No. 11: Adult Learning Theories.

Trepal, H., Haberstroh, S., Duffey, T., & Evans, M. (2007). Considerations and strategies for teaching online counseling skills: Establishing relationships in cyberspace. Counselor Education and Supervision, 46, 266–279.

Urofsky, R. I. (2013). The Council for Accreditation of Counseling and Related Educational Programs: Promoting quality in counselor education. Journal of Counseling & Development, 91, 6–14.

Wantz, R. A., Tromski, D. M., Mortsolf, C. J., Yoxtheimer, G., Brill, S., & Cole, A. (2003). Incorporating distance learning into counselor education programs: A research study. In J. W. Bloom & G. R. Walz (Eds.), Cybercounseling and cyberlearning: An encore (pp. 327–344). CAPS Press.

Watson, J. C. (2012). Online learning and the development of counseling self-efficacy beliefs. The Professional Counselor, 2, 143–151.

Whitty, M. T., & Young, G. (Eds.). (2017). Cyberpsychology: The study of individuals, society and digital technologies. Wiley.

Yarbrough, J. R. (2018). Adapting adult learning theory to support innovative, advanced, online learning: WVMD Model. Research in Higher Education Journal, 35.


Savitri Dixon-Saxon, PhD, NCC, LPC, is Vice Provost at Walden University. Matthew R. Buckley, EdD, NCC, ACS, BC-TMH, LPC, LCMHC, is Senior Core Faculty at Walden University. Correspondence can be addressed to Savitri Dixon-Saxon, 100 Washington Ave. South, Suite 900, Minneapolis, MN 55401-2511,

The Benefits of Implementing a Feedback Informed Treatment System Within Counselor Education Curriculum

Chad M. Yates, Courtney M. Holmes, Jane C. Coe Smith, Tiffany Nielson

Implementing continuous feedback loops between clients and counselors has been found to have a significant impact on the effectiveness of counseling (Shimokawa, Lambert, & Smart, 2010). Feedback informed treatment (FIT) systems are beneficial to counselors and clients because they provide clinicians with a wide array of client information, such as which clients are plateauing in treatment, deteriorating, or at risk of dropping out (Lambert, 2010; Lambert, Hansen, & Finch, 2001). Access to this type of information is imperative because counselors have been shown to have poor predictive validity in determining whether clients are deteriorating during the counseling process (Hannan et al., 2005). Furthermore, recent efforts by researchers show that FIT systems based inside university counseling centers have beneficial training features that positively impact the professional development of counseling students (Reese, Norsworthy, & Rowlands, 2009; Yates, 2012). To date, however, few resources exist on how to infuse FIT systems into counselor education curriculum and training programs.


This article addresses the current lack of information regarding the implementation of a FIT system within counselor education curricula by discussing: (1) an overview and implementation of a FIT system; (2) a comprehensive review of the psychometric properties of three main FIT systems; (3) benefits that the use of FIT systems hold for counselors-in-training; and (4) how the infusion of FIT systems within a counseling curriculum can help assess student learning outcomes.


Overview and Implementation of a FIT System


FIT systems are continual assessment procedures that include weekly feedback about a client’s current symptomology and perceptions of the therapeutic process in relation to previous counseling session scores. These systems also can include other information such as self-reported suicidal ideation, reported substance use, or other specific responses (e.g., current rating of depressive symptomology). FIT systems compare clients’ current session scores to previous session scores and provide a recovery trajectory, often graphed, that can help counselors track the progress made through the course of treatment (Lambert, 2010). Some examples of a FIT system include the Outcome Questionnaire (OQ-45.2; Lambert et al., 1996), Session Rating Scale (SRS; Miller, Duncan, & Johnson, 2000), Outcome Rating Scale (ORS; Miller & Duncan, 2000), and the Counseling Center Assessment of Psychological Symptoms (CCAPS; Locke et al., 2011), all of which are described in this article.


Variety exists regarding how FIT systems are used within the counseling field. These variations include the selected measure or test, the frequency of measurement, the type of feedback given to counselors, and whether or not feedback is shared with clients on a routine basis. Although some deviations exist, all feedback systems contain consistent procedures that are commonly employed during practice (Lambert, Hansen, & Harmon, 2010). The first procedure in a FIT system is the routine measurement of a client's symptomology or distress during each session. This once-per-session frequency is important because it allows counselors to receive direct, continuous feedback on how the client is progressing or regressing throughout treatment. Research has demonstrated that counselors who receive regular client feedback have clients who stay in treatment longer (Shimokawa et al., 2010); thus, the feedback loop provided by a FIT system is crucial in supporting clients through the therapeutic process.
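As a simple illustration of this first procedure, the session-by-session feedback loop can be sketched in code. This is a minimal, hypothetical sketch, not any published FIT system's algorithm: the class name and the baseline-comparison rule are our own, and it assumes a distress measure on which lower scores indicate improvement (as with the OQ-45.2).

```python
from dataclasses import dataclass, field

@dataclass
class SessionTracker:
    """Hypothetical per-client record of routine session-by-session scores."""
    scores: list = field(default_factory=list)

    def add_session(self, score: float) -> str:
        """Record this session's distress score and compare it to intake.

        Assumes a distress measure where lower scores mean improvement,
        as with the OQ-45.2.
        """
        self.scores.append(score)
        if len(self.scores) == 1:
            return "baseline"
        change = self.scores[-1] - self.scores[0]
        if change < 0:
            return "improving"
        if change > 0:
            return "worsening"
        return "no change"
```

In practice, published systems compare each new score against empirically derived recovery trajectories rather than a simple difference from intake, but the loop structure, measure every session and feed the result back to the counselor, is the same.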


The second procedure of a FIT system includes showcasing the results of the client’s symptomology or distress level in a concise and usable way. Counselors who treat several clients benefit from accessible and comprehensive feedback forms. This ease of access is important because counselors may be more likely to buy in to the use of feedback systems if they can use them in a time-effective manner.


The last procedure of FIT systems includes the adjustment of counseling approaches based upon the results of the feedback. Although research in this area is limited, some studies have observed that feedback systems do alter the progression of treatment. Lambert (2010) suggested that receiving feedback on what is working is apt to positively influence a counselor to continue these behaviors. Yates (2012) found that continuous feedback sets benchmarks of performance for both the client and the counselor, which slowly alters treatment approaches. If the goal of counseling is to decrease symptomology or increase functioning, frequently observing objective progress toward these goals using a FIT system can help increase the potential for clients to achieve these goals through targeted intervention.


Description of Three FIT Systems


Several well-validated, reliable, repeated feedback instruments exist. These instruments vary by length and scope of assessment, but all are engineered to deliver routine feedback to counselors regarding client progress. Below is a review of three of the most common FIT systems utilized in clinical practice.


The OQ Measures System

The OQ Measures System uses the Outcome Questionnaire 45.2 (OQ-45.2; Lambert et al., 1996), a popular symptomology measure that gauges a client's current distress levels over three domains: symptomatic distress, interpersonal relations, and social roles. Hatfield and Ogles (2004) listed the OQ 45.2 as the third most frequently used self-report outcome measure for adults in the United States. The OQ 45.2 has 45 items rated on a 5-point Likert scale. Scores range between 0 and 180; higher scores suggest higher rates of disturbance. The OQ 45.2 takes approximately 5–6 minutes to complete, and the results are analyzed using the OQ Analyst software provided by the test developers. The OQ 45.2 can be delivered in a paper-and-pencil version or via computer-assisted administration on a laptop, kiosk, or personal digital assistant (PDA). Electronic administration of the OQ 45.2 allows for seamless administration, scoring, and feedback to both counselor and client.


Internal consistency for the OQ 45.2 is α = 0.93 and test-retest reliability is r = 0.84.  The OQ 45.2 demonstrated convergent validity with the General Severity Index (GSI) of the Symptom Checklist 90-Revised (SCL-90-R; Derogatis, 1983; r = .78, n = 115). The Outcome Questionnaire System has five additional outcome measures: (1) the Outcome Questionnaire 30 (OQ-30); (2) the Severe Outcome Questionnaire (SOQ), which captures outcome data for more severe presenting concerns, such as bipolar disorder and schizophrenia; (3) the Youth Outcome Questionnaire (YOQ), which assesses outcomes in children between 13 and 18 years of age; (4) the Youth Outcome Questionnaire 30, which is a brief version of the full YOQ; and (5) the Outcome Questionnaire 10 (OQ-10), which is used as a brief screening instrument for psychological symptoms (Lambert et al., 2010).


The Partners for Change Outcome Management System (PCOMS)

The Partners for Change Outcome Management System (PCOMS) uses two instruments: the Outcome Rating Scale (ORS; Miller & Duncan, 2000), which measures the client's session outcome, and the Session Rating Scale (SRS; Miller et al., 2000), which measures the client's perception of the therapeutic alliance. The ORS and SRS were designed to be brief in response to the heavy time demands placed upon counselors. Administration of the ORS includes handing the client a copy of the ORS on a sheet of letter-sized paper; the client then draws a hash mark on four distinct 10-centimeter lines that indicate how he or she felt over the last week on the following scales: individually (personal well-being), interpersonally (family and close relationships), socially (work, school, and friendships), and overall (general sense of well-being).


The administration of the SRS includes four similar 10-centimeter lines that evaluate the relationship between the client and counselor. The four lines represent relationship; goals and topics; approach or methods; and overall (the sense that the session went all right for me today; Miller et al., 2000). Scoring of both instruments involves measuring the location of the client's hash mark and assigning a numerical value based on its position along the 10-centimeter line. Measurement flows from left to right, with higher-level responses indicated the further right the hash mark is placed. A total score is computed by adding the subscales together, and total scores are graphed along a line plot. Miller and Duncan (2000) used the reliable change index (RCI) formula to establish a clinical cut-off score of 25 and a reliable change index of 5 points for the ORS. The SRS has a cut-off score of 36, meaning that total scores below 36 indicate ruptures in the working alliance.
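The scoring rules just described lend themselves to a short sketch. The cut-off values (25 for the ORS, 36 for the SRS) and the 5-point reliable change index are those reported by Miller and Duncan (2000) as summarized above; the function names and structure are our own illustration, not the PCOMS developers' software.

```python
def score_ors(marks_cm):
    """Sum four hash-mark positions (0-10 cm each) into an ORS total (0-40)."""
    assert len(marks_cm) == 4 and all(0 <= m <= 10 for m in marks_cm)
    return sum(marks_cm)

def interpret_ors(total, previous_total=None, cutoff=25.0, rci=5.0):
    """Apply the ORS clinical cutoff (25) and reliable change index (5 points)."""
    flags = []
    if total < cutoff:
        flags.append("clinical range")
    if previous_total is not None and abs(total - previous_total) >= rci:
        flags.append("reliable change")
    return flags

def interpret_srs(total, cutoff=36.0):
    """SRS totals below 36 suggest a possible rupture in the working alliance."""
    return "possible alliance rupture" if total < cutoff else "alliance intact"
```

For example, hash marks at 5.5, 6.0, 4.5, and 7.0 cm sum to an ORS total of 23, which falls below the clinical cutoff, while a later total of 30 would exceed the 5-point reliable change index relative to that session.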


The ORS demonstrated strong internal reliability estimates (α = 0.87–0.96), a test-retest score of r = 0.60, and moderate convergent validity with measures like the OQ 45.2 (r = 0.59), which it was created to resemble (Miller & Duncan, 2000; Miller, Duncan, Brown, Sparks, & Claud, 2003). The SRS had an internal reliability estimate of α = 0.88 and test-retest reliability of r = 0.74, and it showed convergent validity when correlated with similar measures of the working alliance, such as the Helping Alliance Questionnaire–II (HAQ–II; Duncan et al., 2003; Luborsky et al., 1996). The developers of the ORS and SRS also have created Web-based administration features that allow clients to complete both instruments online using a pointer instead of a pencil or pen; the Web-based administration also calculates the instruments' totals and graphs them.


The Counseling Center Assessment of Psychological Symptoms (CCAPS)

The CCAPS was designed as a semi-brief continuous measure that assesses symptomology unique to college-aged adults (Locke et al., 2011). When developed, the CCAPS was designed to be effective in assessing college students' concerns across a diverse range of college campuses. The CCAPS has two separate versions, the CCAPS-62 and a shorter version, the CCAPS-34. The CCAPS-62 has 62 test items across eight subscales that measure depression, generalized anxiety, social anxiety, academic distress, eating concerns, family distress, hostility, and substance abuse. The CCAPS-34 has 34 test items across seven of the scales found on the CCAPS-62, excluding family distress. Additionally, the substance use scale on the CCAPS-62 is renamed the Alcohol Use Scale on the CCAPS-34 (Locke et al., 2011). Clients respond on a 5-point Likert scale with responses that range from not at all like me to extremely like me. On both measures, clients are instructed to answer each question based upon their functioning over the last 2 weeks. The CCAPS measures include a total score scale, titled the Distress Index, that measures the amount of general distress experienced over the previous 2 weeks (Center for Collegiate Mental Health, 2012). The measures were designed so that repeated administration would allow counselors to compare each session's scores to previous scores, and to a large norm group (N = 59,606) of clients completing the CCAPS at university counseling centers across the United States (Center for Collegiate Mental Health, 2012).


The CCAPS norming works by comparing a client's score to a percentile score of other clients who have taken the measure. For instance, a client's score of 80 on the depressive symptoms scale indicates that he or she falls within the 80th percentile of the norm population's depressive symptoms score range. Because the CCAPS measures utilize such a large norm base, the developers have integrated the instruments into the Titanium Schedule™, an electronic medical records (EMR) system. The developers also offer the instruments in an Excel scoring format, along with other counseling scheduling software programs. The developers of the CCAPS use RCI formulas to provide upward and downward arrows next to the reported score on each scale. A downward arrow indicates that the client's current score is significantly lower than previous sessions' scores, suggesting progress during counseling; an upward arrow suggests a worsening of symptomology. Cut-off scores vary across scales and can be referenced in the CCAPS 2012 Technical Manual (Center for Collegiate Mental Health, 2012).
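The two interpretive steps described above, percentile ranking against a norm group and RCI-based change arrows, can be sketched as follows. This is a hypothetical illustration, not the CCAPS developers' algorithm: the small norm list in the example stands in for the published norm group, and the reliable-change threshold is scale-specific in the actual technical manual.

```python
from bisect import bisect_right

def percentile_rank(score, norm_scores):
    """Percent of the norm sample scoring at or below this client's score."""
    ordered = sorted(norm_scores)
    return 100.0 * bisect_right(ordered, score) / len(ordered)

def change_arrow(current, previous, rci):
    """Flag reliable change on a distress scale.

    'down' = reliable decrease (improvement), 'up' = reliable increase
    (worsening), '=' = no reliable change. The rci threshold is
    scale-specific in the CCAPS technical manual.
    """
    diff = current - previous
    if diff <= -rci:
        return "down"
    if diff >= rci:
        return "up"
    return "="
```

With a toy norm sample of [1, 2, 3, 4, 5], a score of 3 yields a percentile rank of 60, since three of the five norm scores fall at or below it.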


Two-week test-retest reliability estimates for the CCAPS-62 and CCAPS-34 scales range from r = 0.75 to 0.91 (Center for Collegiate Mental Health, 2012). The CCAPS-34 also demonstrated good internal consistency, with alphas ranging from α = 0.76 to 0.89 (Locke et al., 2012). Both measures demonstrated adequate convergent validity with similar measures; a full illustration of the measures’ convergent validity can be found in the CCAPS 2012 Technical Manual (Center for Collegiate Mental Health, 2012).
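The internal-consistency values reported above are Cronbach’s alpha. As a reference point for readers less familiar with the statistic, a minimal computation of alpha from raw item scores might look like the following; the item data here are entirely hypothetical.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three hypothetical items answered by four respondents on a 0-4 scale.
items = [[0, 1, 3, 4], [1, 1, 3, 3], [0, 2, 2, 4]]
print(round(cronbach_alpha(items), 2))  # prints 0.93
```

Values in the 0.76–0.89 range reported for the CCAPS-34 indicate that items within each subscale covary strongly, as this formula rewards consistent inter-item patterns.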


Benefits for Counselors-in-Training


The benefits of FIT systems are multifaceted and can positively impact the growth and development of student counselors (Reese, Norsworthy, et al., 2009; Schmidt, 2014; Yates, 2012). Within counselor training laboratories, feedback systems have shown promise in facilitating the growth and development of beginning counselors (Reese, Usher, et al., 2009), and the incorporation of FIT systems into supervision and training experiences has been widely supported (Schmidt, 2014; Worthen & Lambert, 2007; Yates, 2012).


One such benefit is that counseling students’ self-efficacy improved when they saw evidence of their clients’ improvement (Reese, Usher, et al., 2009). A FIT system allows for the documentation of a client’s progress, and when counseling students observed their clients making such progress, their self-efficacy regarding their skill and ability as counselors improved. Additionally, the FIT system allowed counselor trainees to observe their effectiveness during sessions and, more importantly, helped them alter their interventions when clients deteriorated or plateaued during treatment. Counselor education practicum students who implemented a FIT system throughout client treatment reported that having weekly observations of their clients’ progress helped them to isolate effective and non-effective techniques they had used during sessions (Yates, 2012). Additionally, practicum counseling students indicated that several components of FIT feedback forms were useful, including the visual orientation (e.g., graphs) to clients’ shifts in symptomology. This visual attention to client change allowed counselors-in-training to be more alert to how clients were actually faring between sessions and how they could tailor their approach, particularly regarding crisis situations (Yates, 2012).


Another benefit discovered from the above study was that counseling students felt as if consistent use of a FIT system lowered their anxiety and relieved some uncertainty regarding their work with clients (Yates, 2012). It is developmentally appropriate for beginning counselors to struggle with low tolerance for ambiguity and the need for a highly structured learning environment when they begin their experiential practicums and internships (Bernard & Goodyear, 2013). The FIT system allows for a structured format to use within the counseling session that helps to ease new counselors’ anxiety and discomfort with ambiguity.


Additionally, by bringing the weekly feedback into counseling sessions, practicum students were able to clarify instances when the feedback was discrepant from how the client presented during session (Yates, 2012). This discrepancy between what the client reported on the measure and how they presented in session was often fertile ground for discussion. Counseling students believed bringing these discrepancies to a client’s attention deepened the therapeutic alliance because the counselor was taking time to fully understand the client (Yates, 2012).


FIT systems also add several benefits to the clinical supervision of counseling students. One such benefit is that clinical supervisors found weekly objective reports helpful in providing evidence of a client’s progress that was not based solely upon their supervisees’ self-report. This is crucial because relying on self-report as the sole method of supervision can be an insufficient way to gain information about the complexities of the therapeutic process (Bernard & Goodyear, 2013). Supervisors and practicum students both reported that the FIT system frequently brought to their attention potential concerns with clients that they had missed (Yates, 2012). A final benefit is that supervisees who utilized a FIT system during supervision reported significantly higher satisfaction with supervision and stronger supervisory alliances than students who did not utilize a FIT system (Grossl, Reese, Norsworthy, & Hopkins, 2014; Reese, Usher, et al., 2009).


Benefits for Clients


Several benefits exist for counseling clients when FIT systems are utilized in the therapeutic process. Clients have been found to perceive the sharing of objective progress information as helpful and a generally positive experience (Martin, Hess, Ain, Nelson, & Locke, 2012). Surveying clients using a FIT system, Martin et al. (2012) found that 74.5% of clients found it “convenient” to complete the instrument during each session; approximately 46% of the clients reported a “somewhat positive” experience using the feedback system, while 20% reported a “very positive” experience. Hawkins, Lambert, Vermeersch, Slade, and Tuttle (2004) found that providing feedback to both clients and counselors significantly increased clients’ therapeutic improvement when compared to providing feedback to counselors alone. A meta-analysis of several research studies, including Hawkins et al. (2004), found that effect sizes for providing per-session feedback ranged from 0.34 to 0.92 (Shimokawa et al., 2010). These investigations found more substantial improvement in clients whose counselors received consistent client feedback than in clients whose counselors received no feedback regarding the therapeutic process and symptomology. These data also showed that providing consistent feedback to clients reduced premature treatment termination (Lambert, 2010).


Utilization of FIT Systems for Counseling Curriculum and Student Learning Outcome Assessment


The formal assessment of graduate counseling student learning has increased over the past decade. The most recent update of the national standards from the Council for Accreditation of Counseling and Related Educational Programs (CACREP) included the requirement for all accredited programs to systematically track students at multiple points with multiple measures of student learning (CACREP, 2015, Section 4, A, B, C, D, E). Specifically, “counselor education programs conduct formative and summative evaluations of the student’s counseling performance and ability to integrate and apply knowledge throughout the practicum and internship” (CACREP, 2015, Section 4.E). The use of continuous client feedback within counselor education is one way to address such assessment requirements (Schmidt, 2014).


Counseling master’s programs impact students on both personal and professional levels (Warden & Benshoff, 2012), and part of this impact stems from ongoing and meaningful evaluation of student development. The development of counselors-in-training during experiential courses entails assessment of a myriad of counseling competencies (e.g., counseling microskills, case conceptualization, understanding of theory, ethical decision-making and ability to form a therapeutic relationship with clients; Haberstroh, Duffey, Marble, & Ivers, 2014). As per CACREP standards, counseling students will receive feedback during and after their practicum and internship experiences. This feedback typically comes from both the supervising counselor on site, as well as the academic department supervisor.


Additionally, “supervisors need to help their supervisees develop the ability to make effective decisions regarding the most appropriate clinical treatment” (Owen, Tao, & Rodolfa, 2005, p. 68). One suggested avenue for developing such skills is client feedback using FIT systems. The benefit of direct client feedback on the counseling process has been well documented (Minami et al., 2009), and this process can also be useful to student practice and training. Counseling students can greatly benefit from the use of client feedback throughout their training programs (Reese, Usher, et al., 2009). In this way, counselors-in-training learn to acknowledge client feedback as an important part of the counseling process, allowing them to adjust their practice to help each client on an individual basis. Allowing for a multi-layered feedback model wherein the counselor-in-training can receive feedback from the client, site supervisor and academic department supervisor has the potential to maximize student learning and growth.


Providing students feedback for growth through formal supervision is one of the hallmarks of counseling programs (Bernard & Goodyear, 2013). However, a more recent focus throughout higher education is the necessity of assessment of student learning outcomes (CACREP, 2015).  This assessment can include “systematic evaluation of students’ academic, clinical, and interpersonal progress as guideposts for program improvement” (Haberstroh et al., 2014, p. 28). As such, evaluating student work within the experiential courses (e.g., practicum and internship) is becoming increasingly important.


FIT systems provide specific and detailed client feedback regarding clients’ experiences within therapy. Having access to documented client outcomes and progress throughout the counseling relationship can provide an additional layer of information regarding student growth and skill development. For instance, if a student consistently has clients who drop out or show no improvement over time, those outcomes could represent a problem or unaddressed issue for the counselor-in-training. Conversely, if a student has clients who report positive outcomes over time, that data could show clinical understanding and positive skill development.
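A program could operationalize this kind of caseload-level screening quite simply. The sketch below is purely illustrative: the status labels, threshold, and minimum caseload size are our assumptions for the example, not criteria drawn from the literature.

```python
def flag_for_review(outcomes, min_clients=3, threshold=0.5):
    """Flag a counselor-in-training for supervisory review when more than
    `threshold` of a sufficiently large caseload shows non-improvement.

    `outcomes` holds one status string per client: 'improved',
    'no_change', 'deteriorated', or 'dropped_out' (hypothetical labels).
    """
    if len(outcomes) < min_clients:
        return False  # caseload too small to judge a pattern
    non_improved = sum(1 for o in outcomes if o != "improved")
    return non_improved / len(outcomes) > threshold

# Two clients stalled or dropped out of three -> flagged for discussion.
print(flag_for_review(["no_change", "dropped_out", "improved"]))  # prints True
```

Consistent with the article’s framing, a flag here would be a prompt for discussion in supervision, not a judgment of competence.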


Student learning outcomes can be assessed in a myriad of ways (e.g., FIT systems, supervisor evaluations, student self-assessment and exams; Haberstroh et al., 2014). Incorporating multiple layers of feedback for counseling students allows for maximization of learning through practicum and internships and offers a concrete way to document and measure student outcomes.


A Case Study Example

Students grow and develop through a wide variety of methods, including feedback from professors, supervisors and clients (Bernard & Goodyear, 2013). Implementing a FIT system into experiential classes in counseling programs allows for the incorporation of structured, consistent and reliable feedback. We use a case example here to illustrate the benefits of such implementation. Within the case study, each CACREP Student Learning Outcome that is met through the implementation of the FIT system is documented.


A counselor educator is the instructor of an internship class in which students have a variety of internship placements. The instructor decides to have students implement a FIT system that will allow them to track client progress and the strength of the working alliance. The OQ 45.2 and SRS measures are chosen because they allow students to track client outcomes and the counseling relationship and are easy to administer, score and interpret. At the beginning of the semester, the instructor provides a syllabus to the students in which the following expectations are listed: (1) students will have their clients fill out the OQ 45.2 and the SRS during every session with each client; (2) students will learn to discuss and process the results from the OQ 45.2 and SRS in each session with the client; and (3) students will bring all compiled information from the measures to weekly supervision. By incorporating two FIT systems and these requirements, the course meets over 10 CACREP (2015) learning outcome assessment components within Sections 2 and 3, Professional Counseling Identity (Counseling and Helping Relationships, Assessment and Testing), and Professional Practice.


A student, Sara, begins seeing a client at an outpatient mental health clinic who has been diagnosed with major depressive disorder; the client’s symptoms include suicidal ideation, anhedonia and extreme hopelessness. Sara’s initial response includes anxiety due to the fact that she has never worked with someone who has active suicidal ideation or such an extreme presentation of depressed affect. Sara’s supervisor spends time discussing how she will use the FIT systems in her work with the client and reminds her about the necessities of safety assessment.


In her initial sessions with her client, Sara incorporates the OQ 45.2 and the SRS into her sessions as discussed with her supervisor (CACREP Section 2.8.E; 2.8.K). However, after a few sessions, she does not yet feel confident in her work with this client. Sara feels constantly overwhelmed by the depth of her client’s depression and is worried about addressing the suicidal ideation. Her instructor is able to use the weekly OQ 45.2 and SRS forms as a consistent baseline and guide for Sara’s work with this client and to help Sara develop a treatment plan that is specifically tailored to the client’s symptomology (CACREP Section 2.5.H, 2.8.L). Using the visual outputs and compiled graphs of weekly data, Sara is able to see small changes that may or may not be taking place for the client regarding his depressive symptoms and overall feelings and experiences in his life. Sara’s instructor guides her to discuss these changes with the client and explore in more detail the client’s experiences within these symptoms (CACREP Section 2.5.G). By using these data with the client, Sara will be better able to help the client develop appropriate and measurable goals and outcomes for the therapeutic process (CACREP Section 2.5.I). Additionally, as a new counselor, such an assessment tool provides Sara with structure and guidance as to the important topics to explore with clients throughout sessions. For example, by using some of the specific content on the OQ 45.2 (e.g., I have thoughts of ending my life, I feel no interest in things, I feel annoyed by people who criticize my drinking, and I feel worthless), she can train herself to assess for suicidal ideation and overall diagnostic criteria (CACREP Section 2.7.C).


Additionally, Sara is receiving feedback from the client by using the SRS measure within session. In using this additional FIT measure, Sara can begin to gauge her personal approach to counseling with this client and receive vital feedback that will help her grow as a counselor (CACREP Section 2.5.F). This avenue provides an active dialogue between client and counselor about the work they are doing together and whether they are working on the pieces that are important to the client. Her instructor is able to provide both formative and summative feedback on her overall process with the client, using the client’s outcomes as a guide to her effectiveness as a clinician (CACREP Section 3.C). Implementing a FIT system gives the process of feedback provision concrete markers and structure, ultimately helping student counselors become more self-reflective about their own practice.


Implications for Counselor Education


The main implications of the integration of FIT systems into counselor education are threefold: (1) developmentally appropriate interventions to support supervisee/trainee clinical growth; (2) intentional measurement of CACREP Student Learning Outcomes; and (3) specific attention to client care and therapeutic outcomes. There are a variety of FIT systems being utilized, and while they vary in scope, length, and targets of assessment, each has a brief administration time and can be repeated frequently for current client status and treatment outcome measurement. With intentionality and dedication, counselor education programs can work to implement the utilization of these types of assessment throughout counselor trainee coursework (Schmidt, 2014).


FIT systems offer clear benefits for training competent emerging counselors. Evaluating a beginning counselor’s clinical understanding and skills is a key component of assessing overall learning outcomes. When counselors-in-training receive frequent feedback on their clients’ current functioning or session outcomes, they are given the opportunity to bring concrete information to supervision, decide on treatment modifications as indicated, and openly discuss the report with clients as part of treatment. Gathering data on a client’s experience in treatment brings valuable information to the training process. Indications of challenges or strengths with regard to facilitating a therapeutic relationship can be addressed, and positive change can be supported through supervision and skill development. Additionally, by learning the process of ongoing assessment and therapeutic process management, counselor trainees meet many of the CACREP Student Learning Outcomes. The integration of FIT systems into client care supports a wide variety of clinical skill sets, such as understanding clinical assessment, managing a therapeutic relationship, and planning or altering treatment based on client needs.


Finally, therapy clients also benefit from the use of FIT. Clinicians who receive weekly feedback on per-session client progress consistently show improved effectiveness and have clients who prematurely terminate counseling less often (Lambert, 2010; Shimokawa et al., 2010). Supervisors, too, have been shown to use FIT systems to their advantage. One of the most important responsibilities of a clinical supervisor is to manage and maintain a high level of client care (Bernard & Goodyear, 2013), and incorporating a structured, validated assessment such as a FIT system allows for intentional oversight of the client–counselor relationship and the clinical process taking place between supervisees and their clients. Overall, the integration of FIT systems into counselor education would provide programs with a myriad of benefits, including the ability to meet student, client and educator needs simultaneously.


Conclusion




FIT systems provide initial and ongoing data related to a client’s psychological and behavioral functioning across a variety of concerns. They have been developed and used as a continual assessment procedure that provides frequent, continuous self-report by clients, and they have been used effectively to provide vital mental health information within a counseling session. The unique features of FIT systems include the potential for recurrent, routine measurement of a client’s symptomology; easily accessible and usable data for counselor and client; and assistance in setting benchmarks and altering treatment strategies to improve a client’s functioning. With intentionality, counselor education programs can use FIT systems to meet multiple needs across their curriculums, including more advanced supervision practices, CACREP Student Learning Outcome measurement, and better overall client care.



Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.


References






Bernard, J. M., & Goodyear, R. K. (2013). Fundamentals of clinical supervision (5th ed.). Boston, MA: Merrill.

Center for Collegiate Mental Health. (2012). CCAPS 2012 technical manual. University Park, PA: Pennsylvania State University.

Council for Accreditation of Counseling and Related Educational Programs (CACREP). (2015). 2016 accreditation standards. Retrieved from

Derogatis, L. R. (1983). The SCL-90: Administration, scoring, and procedures for the SCL-90. Baltimore, MD: Clinical Psychometric Research.

Duncan, B. L., Miller, S. D., Sparks, J. A., Claud, D. A., Reynolds, L. R., Brown, J., & Johnson, L. D. (2003). The Session Rating Scale: Preliminary psychometric properties of a “working” alliance measure. Journal of Brief Therapy, 3, 3–12.

Grossl, A. B., Reese, R. J., Norsworthy, L. A., & Hopkins, N. B. (2014). Client feedback data in supervision: Effects on supervision and outcome. Training and Education in Professional Psychology, 8, 182–188.

Haberstroh, S., Duffey, T., Marble, E., & Ivers, N. N. (2014). Assessing student-learning outcomes within a counselor education program: Philosophy, policy, and praxis. Counseling Outcome Research and Evaluation, 5, 28–38. doi:10.1177/2150137814527756

Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart, D. W., Shimokawa, K., & Sutton, S. W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology, 61, 155–163.

Hatfield, D., & Ogles, B. M. (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research & Practice, 35, 485–491. doi:10.1037/0735-7028.35.5.485

Hawkins, E. J., Lambert, M. J., Vermeersch, D. A., Slade, K. L., & Tuttle, K. C. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14, 308–327. doi:10.1093/ptr/kph027

Lambert, M. J. (2010). Prevention of treatment failure: The use of measuring, monitoring, & feedback in clinical practice. Washington, DC: American Psychological Association.

Lambert, M. J., Hansen, N. B., & Finch, A. E. (2001). Patient-focused research: Using patient outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology, 69, 159–172.

Lambert, M. J., Hansen, N. B., & Harmon, S. C. (2010). Outcome Questionnaire system (The OQ system): Development and practical applications in healthcare settings. In M. Barkham, G. Hardy, & J. Mellor-Clark (Eds.), Developing and delivering practice-based evidence: A guide for the psychological therapies (pp. 141–154). New York, NY: Wiley-Blackwell.

Lambert, M. J., Hansen, N. B., Umphress, V., Lunnen, K., Okiishi, J., Burlingame, G. M., & Reisinger, C. (1996). Administration and scoring manual for the OQ 45.2. Stevenson, MD: American Professional Credentialing Services.

Locke, B. D., Buzolitz, J. S., Lei, P. W., Boswell, J. F., McAleavey, A. A., Sevig, T. D., Dowis, J. D., & Hayes, J. A. (2011). Development of the Counseling Center Assessment of Psychological Symptoms-62 (CCAPS-62). Journal of Counseling Psychology, 58, 97–109.

Locke, B. D., McAleavey, A. A., Zhao, Y., Lei, P., Hayes, J. A., Castonguay, L. G., Li, H., Tate, R., & Lin, Y. (2012). Development and initial validation of the Counseling Center Assessment of Psychological Symptoms-34 (CCAPS-34). Measurement and Evaluation in Counseling and Development, 45, 151–169. doi:10.1177/0748175611432642

Luborsky, L., Barber, J. P., Siqueland, L., Johnson, S., Najavits, L. M., Frank, A., & Daley, D. (1996). The Helping Alliance Questionnaire (HAQ–II): Psychometric properties. The Journal of Psychotherapy Practice and Research, 5, 260–271.

Martin, J. L., Hess, T. R., Ain, S. C., Nelson, D. L., & Locke, B. D. (2012). Collecting multidimensional client data using repeated measures: Experiences of clients and counselors using the CCAPS-34. Journal of College Counseling, 15, 247–261. doi:10.1002/j.2161-1882.2012.00019.x

Miller, S., & Duncan, B. (2000). The outcome rating scale. Chicago, IL: International Center for Clinical Excellence.

Miller, S., Duncan, B., & Johnson, L. (2000). The session rating scale. Chicago, IL: International Center for Clinical Excellence.

Miller, S. D., Duncan, B. L., Brown, J., Sparks, J. A., & Claud, D. A. (2003). The Outcome Rating Scale: A preliminary study of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy, 2, 91–100.

Minami, T., Davies, D. R., Tierney, S. C., Bettmann, J. E., McAward, S. M., Averill, L. A., & Wampold, B. E. (2009). Preliminary evidence on the effectiveness of psychological treatments delivered at a university counseling center. Journal of Counseling Psychology, 56, 309–320.

Owen, J., Tao, K. W., & Rodolfa, E. R. (2005). Supervising counseling center trainees in the era of evidence-based practice. Journal of College Student Psychotherapy, 20, 66–77.

Reese, R. J., Norsworthy, L. A., & Rowlands, S. R. (2009). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training, 46, 418–431.

Reese, R. J., Usher, E. L., Bowman, D. C., Norsworthy, L. A., Halstead, J. L., Rowlands, S. R., & Chisolm, R. R. (2009). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3, 157–168.

Schmidt, C. D. (2014). Integrating continuous client feedback into counselor education. The Journal of Counselor Preparation and Supervision, 6, 60–71. doi:10.7729/62.1094

Shimokawa, K., Lambert, M. J., & Smart, D. W. (2010). Enhancing treatment outcome of patients at risk of treatment failure: Meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology, 78, 298–311. doi:10.1037/a0019247

Warden, S. P., & Benshoff, J. M. (2012). Testing the engagement theory of program quality in CACREP-accredited counselor education programs. Counselor Education and Supervision, 51, 127–140.

Worthen, V. E., & Lambert, M. J. (2007). Outcome oriented supervision: Advantages of adding systematic client tracking to supportive consultations. Counselling & Psychotherapy Research, 7, 48–53.

Yates, C. M. (2012). The use of per session clinical assessment with clients in a mental health delivery system: An investigation into how clinical mental health counseling practicum students and practicum instructors use routine client progress feedback (Unpublished doctoral dissertation). Kent State University, Kent, Ohio.





Chad M. Yates is an Assistant Professor at Idaho State University. Courtney M. Holmes, NCC, is an Assistant Professor at Virginia Commonwealth University. Jane C. Coe Smith is an Assistant Professor at Idaho State University. Tiffany Nielson is an Assistant Professor at the University of Illinois at Springfield. Correspondence can be addressed to Chad M. Yates, 921 South 8th Ave, Stop 8120, Pocatello, Idaho, 83201,


Development and Factor Analysis of the Protective Factors Index: A Report Card Section Related to the Work of School Counselors

Gwen Bass, Ji Hee Lee, Craig Wells, John C. Carey, Sangmin Lee

The scale development and the exploratory and confirmatory factor analyses of the Protective Factors Index (PFI) are described. The PFI is a 13-item component of elementary students’ report cards that replaces typical items associated with student behavior. The PFI is based on the Construct-Based Approach (CBA) to school counseling, which proposes that primary and secondary prevention activities of school counseling programs should focus on socio-emotional, development-related psychological constructs that are associated with students’ academic achievement and well-being, that have been demonstrated to be malleable, and that are within the range of expertise of school counselors. Teachers use the PFI to rate students’ skills in four construct-based domains that are predictive of school success. School counselors use teachers’ ratings to monitor student development and plan data-driven interventions.


Keywords: protective factors, factor analysis, school counselors, construct-based approach, student development


Contemporary models for school counseling practice (ASCA, 2012) emphasize the importance of school counselors using quantitative data related to students’ academic achievement to support professional decisions (Poynton & Carey, 2006), to demonstrate accountability (Sink, 2009), to evaluate activities and programs (Dimmitt, Carey, & Hatch, 2007), to advocate for school improvement (House & Martin, 1998) and to advocate for increased program support (Martin & Carey, 2014). While schools are data-rich environments and great emphasis is now placed on the use of data by educators, the readily available quantitative data elements (e.g., achievement test scores) are much better aligned with the work of classroom teachers than with the work of school counselors (Dimmitt et al., 2007). While teachers are responsible for students’ acquisition of knowledge, counselors are responsible for the improvement of students’ socio-emotional development in ways that promote achievement. Counselors need data related to students’ socio-emotional states (e.g., self-efficacy) and abilities (e.g., self-direction) that predispose them toward achievement so that they are better able to help students profit from classroom instruction and make sound educational and career decisions (Squier, Nailor, & Carey, 2014). Measures directly associated with constructs related to socio-emotional development are not routinely collected or used in schools. The development of sound and useful measures of salient socio-emotional factors that are aligned with the work of school counselors and that are strongly related to students’ academic success and well-being would greatly contribute to the ability of counselors to identify students who need help, use data-based decision making in planning interventions, evaluate the effectiveness of interventions, demonstrate accountability for results, and advocate for students and for program improvements (Squier et al., 2014).


Toward this end, we developed the Protective Factors Index (PFI) and describe herein its development and initial exploratory and confirmatory factor analyses. The PFI is a 13-item component of elementary students’ report cards that replaces typical items associated with student deportment. The PFI is based on the Construct-Based Approach (CBA) to school counseling (Squier et al., 2014), which holds that primary and secondary prevention activities of school counseling programs should be focused on socio-emotional, development-related psychological constructs that research has identified as strongly associated with students’ academic achievement and well-being, that have been demonstrated to be malleable, and that are within the range of expertise of school counselors. The CBA clusters these constructs into four areas reflecting motivation, self-direction, self-knowledge and relationship competence.


The present study was conducted as a collaboration between the Ronald H. Fredrickson Center for School Counseling Outcome Research and Evaluation and an urban district in the Northeastern United States. As described below, the development of the PFI was guided by the CBA-identified clusters of psychological states and processes (Squier et al., 2014). With input from elementary counselors and teachers, a 13-item report card and a scoring rubric were developed, such that teachers could rate each student on school counseling-related dimensions that have been demonstrated to underlie achievement and well-being. This brief measure was created with considerable input from the school personnel who would be implementing it, with the goal of targeting developmentally appropriate skills in a way that is efficient for teachers and useful for counselors. By incorporating the PFI into the student report card, we ensured that important and useful student-level achievement-related data could be easily collected multiple times per year for use by counselors. The purpose of this study was to explore relationships between the variables that are measured by the scale and to assess the factor structure of the instrument as the first step in establishing its validity. The PFI has the potential to become an efficient and accurate way for school counselors to collect data from teachers about student performance.




Initial Scale Development

The PFI was developed as a tool to gather data on students’ socio-emotional development from classroom teachers. The PFI includes 13 items on which teachers rate students’ abilities related to four construct-based standards: motivation, self-direction, self-knowledge and relationships (Squier et al., 2014). These four construct clusters are believed to be foundational for school success (Squier et al., 2014). Specific items within a cluster reflect constructs that have been identified by research to be strongly associated with achievement and success.


The PFI was developed through a collaborative effort between the research team and a group of district-level elementary school administrators and teachers. The process involved an extensive review of existing standards-based report cards, socio-emotional indicators at different student developmental levels, and rating scales measuring the identified socio-emotional constructs. In addition, representatives from the district and members of the research team participated in a two-day summer workshop in August of 2013. These sessions included school counselors and teachers from each grade level, as well as a teacher of English language learners, a special education representative, and principals. All participants except the principals were paid for their time. Once the draft instrument was completed, a panel of elementary teachers reviewed the items for developmental appropriateness and utility. The scale was then adopted across the district and piloted at all four (K–5) elementary schools during the 2013–2014 school year as a component of students’ report cards.


The PFI component of the report card consists of 13 items organized into four segments based on the construct-based standards: motivation (4 items), self-direction (2 items), self-knowledge (3 items), and relationships (4 items). The items address developmentally appropriate skills in each of these domains (e.g., demonstrates perseverance in completing tasks, seeks assistance when needed, works collaboratively in groups of various sizes). Teachers evaluate each student using dichotomous response options: “on target” and “struggling.” All classroom teachers receive the assessment and the scoring rubric that corresponds to their grade level. The rubric outlines the observable behaviors and criteria teachers should use to determine whether a student demonstrates expected, age-appropriate skills in each domain. Because the PFI is tailored to address developmentally meaningful competencies, three rubrics were developed to guide teacher ratings at kindergarten and first grade, second and third grade, and fourth and fifth grade.


At the same time that the PFI scale was developed, the district began using a computer-based system to enter report card data. Classroom teachers complete the social-emotional section of the standards-based report card electronically at the close of each marking period, when they also evaluate students’ academic performance. The data collected can be accessed and analyzed electronically by school administrators and counselors. Additionally, data from two marking periods during the 2013–2014 school year were exported to the research team for analysis (with appropriate steps taken to protect students’ confidentiality). These data were used in the exploratory and confirmatory factor analyses described in this paper.



Participants

The PFI was adopted across all four of the school district’s elementary schools, which house grades kindergarten through five. All elementary classroom teachers completed the PFI for each student in their classes. The assessment was completed three times during the 2013–2014 school year: in December, March, and June. Data from the December collection (N = 1,158) were used for the exploratory factor analysis (EFA), and data from the March collection were randomly divided into two subsamples (subsample A = 599 students; subsample B = 591 students) for the confirmatory factor analyses (CFAs).
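The sampling scheme above can be sketched in a few lines. This is an illustrative reconstruction, not the research team's actual code; the record identifiers and the seed are hypothetical, while the subsample sizes follow the text.

```python
import numpy as np

# December records feed the EFA; March records are split at random into
# two halves for the two CFAs. Counts follow the text (599 + 591 = 1,190
# March assessments); the IDs and seed are placeholders.
rng = np.random.default_rng(2014)  # fixed seed for reproducibility

march_ids = np.arange(1190)        # hypothetical March record identifiers
shuffled = rng.permutation(march_ids)
subsample_a, subsample_b = shuffled[:599], shuffled[599:]

# Each March record lands in exactly one CFA subsample.
assert len(subsample_a) == 599 and len(subsample_b) == 591
assert set(subsample_a).isdisjoint(set(subsample_b))
```

A random split of this kind guards against the second CFA merely re-fitting idiosyncrasies of the first subsample.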


The sample for this study was highly diverse: 52% were African American, 17% were Asian, 11% were Hispanic, 16% were Caucasian, and the remaining students identified as multi-racial, Pacific Islander, Native Hawaiian, or Native American. In the EFA, 53.2% (n = 633) of the sample were male and 46.8% (n = 557) of the sample were female. Forty-seven kindergarten students (3.9%), 242 first-grade students (20.3%), 216 second-grade students (18.2%), 222 third-grade students (18.7%), 220 fourth-grade students (18.5%), and 243 fifth-grade students (20.4%) contributed data to the EFA.


The first CFA included data from 599 students, 328 males (54.8%) and 271 females (45.2%). The data included 23 kindergarten students (3.8%), 136 first-grade students (22.7%), 100 second-grade students (16.7%), 107 third-grade students (17.9%), 102 fourth-grade students (17.0%), and 131 fifth-grade students (21.9%). The data analyzed for the second CFA included assessments of 591 students, 305 males (51.6%) and 286 females (48.4%). The data consisted of PFI assessments from 24 kindergarten students (4.1%), 106 first-grade students (17.9%), 116 second-grade students (19.6%), 115 third-grade students (19.5%), 118 fourth-grade students (20.0%), and 112 fifth-grade students (19.0%).



Procedure

Classroom teachers completed PFI assessments for all students in their classes at the close of each marking period using the rubrics described above. Extraction of the data from the district’s electronic student data management system was coordinated by the district’s information technology specialist in collaboration with members of the research team. This process included establishing mechanisms to ensure confidentiality, and identifying information was removed from student records.


Data Analyses

The PFI report card data were analyzed in three phases. The first phase involved conducting an EFA on data from the first marking period. In the second phase, a randomly selected half of the data compiled during the second marking period was subjected to a CFA. Finally, the remaining half of the second marking period data was analyzed through a second CFA.


Phase 1. Exploratory factor analysis. An initial EFA of the 13 items on the survey instrument was conducted using weighted least squares mean adjusted (WLSM) estimation with the oblique Geomin rotation. The WLSM estimator appropriately uses tetrachoric correlation matrices when items are categorical (Muthén, du Toit, & Spisic, 1997). The EFA was conducted using Mplus version 5 (Muthén & Muthén, 1998–2007).


Model fit was assessed using several goodness-of-fit indices: comparative fit index (CFI), Tucker-Lewis Index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). We assessed model fit based on the following recommended cutoff values from Hu and Bentler (1999): CFI and TLI values greater than 0.95, RMSEA value less than 0.06, and SRMR value less than 0.08.
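The cutoff rules above are mechanical enough to express as a small helper. This is an illustrative sketch, not part of the study's analysis; the function name is hypothetical, and the example values are the three-factor EFA fit statistics reported later in the text.

```python
# Hu and Bentler (1999) cutoffs as cited in the text: CFI and TLI above
# 0.95, RMSEA below 0.06, SRMR below 0.08.
CUTOFFS = {"cfi": 0.95, "tli": 0.95, "rmsea": 0.06, "srmr": 0.08}

def acceptable_fit(cfi, tli, rmsea, srmr):
    """Return True only if all four indices meet the recommended cutoffs."""
    return (cfi > CUTOFFS["cfi"] and tli > CUTOFFS["tli"]
            and rmsea < CUTOFFS["rmsea"] and srmr < CUTOFFS["srmr"])

# Three-factor EFA solution reported below: passes all four cutoffs.
print(acceptable_fit(cfi=0.994, tli=0.988, rmsea=0.052, srmr=0.036))  # True
```

Applied to the two-factor solution reported below (RMSEA = 0.072), the same helper returns False, which mirrors the text's conclusion that the two-factor model fit relatively poorly.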


Phase 2. First confirmatory factor analysis. An initial CFA was conducted on the 13 items of the survey instrument to assess a three-factor measurement model based on theory and on the results of the exploratory analysis. Figure 1 provides the conceptual path diagram for the measurement model. Six items (3, 4, 6, 7, 11, and 13) loaded on factor one (C1), named “academic temperament.” Three items (8, 9, and 12) loaded on factor two (C2), named “self-knowledge.” Four items (1, 2, 5, and 10) loaded on factor three (C3), named “motivation.” All three latent variables were expected to be correlated in the measurement model.
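The item–factor assignments above can be encoded as a simple lookup, which makes the structure of the measurement model easy to check. The factor labels and item numbers come from the text; the dictionary representation itself is only an illustrative sketch, not the Mplus specification actually used.

```python
# Three-factor measurement model from the text: each of the 13 PFI items
# loads on exactly one latent factor.
THREE_FACTOR_MODEL = {
    "academic_temperament": [3, 4, 6, 7, 11, 13],  # C1, six items
    "self_knowledge": [8, 9, 12],                  # C2, three items
    "motivation": [1, 2, 5, 10],                   # C3, four items
}

# Sanity check: the assignments partition items 1 through 13.
all_items = sorted(i for items in THREE_FACTOR_MODEL.values() for i in items)
assert all_items == list(range(1, 14))
```

A mapping like this is also a convenient basis for later subscale scoring, should the subscales discussed in the limitations section prove psychometrically sound.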


This CFA was used to assess the measurement model with respect to fit as well as convergent and discriminant validity. Large standardized factor loadings, which indicate strong inter-correlations among items associated with the same latent variable, support convergent validity. Discriminant validity is evidenced by correlations among the latent variables that are less than the standardized factor loadings; that is, the latent variables are distinct, albeit correlated (see Brown, 2006; Kline, 2011; Schumacker & Lomax, 2010).
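The discriminant-validity rule described above (latent correlations smaller than the standardized loadings) reduces to a one-line comparison. This sketch uses a deliberately strict reading of the rule, comparing the largest latent correlation against the smallest loading; the numeric values are placeholders, not figures from Tables 4 and 5.

```python
def discriminant_validity(loadings, factor_correlations):
    """True if every inter-factor correlation is below every standardized
    factor loading (strict form of the criterion described in the text)."""
    return max(factor_correlations) < min(loadings)

# Placeholder values for illustration only.
print(discriminant_validity(loadings=[0.72, 0.80, 0.94],
                            factor_correlations=[0.55, 0.60]))  # True
```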


The computer program Mplus 5 (Muthén & Muthén, 1998-2007) was used to conduct the CFA with weighted least square mean and variance adjusted (WLSMV) estimation. This is a robust estimator for categorical data in a CFA (Brown, 2006). For the CFA, Mplus software provides fit indices of a given dimensional structure that can be interpreted in the same way as they are interpreted when conducting an EFA.


Phase 3. Second confirmatory factor analysis. A second CFA was conducted for cross-validation. This CFA assessed, on the 13 items of the survey instrument, the three-factor measurement model supported by the first CFA. The same software and estimation procedures were used to conduct the second CFA.



Results

Phase 1. Exploratory Factor Analysis

Complete descriptive statistics for responses to each of the 13 items are presented in Table 1. The response categories for all items were dichotomous, identified in Table 1 as “On Target” or “Struggling,” with incomplete data labeled “Missing.” A total of 1,158 surveys were analyzed through the EFA. The decision of how many factors to retain was initially guided by visual inspection of the scree plot and the eigenvalues. Two factors had eigenvalues greater than one (first factor = 8.055, second factor = 1.666, third factor = 0.869), and the scree test also supported retaining two factors, because two factors fell to the left of the point where the plot approached its asymptote. However, judged by goodness-of-fit indices, the models specifying three-factor and four-factor structures fit the data well. Methodologists have suggested that “underfactoring” is more problematic than “overfactoring” (Wood, Tataryn, & Gorsuch, 1996). Thus, there was a need to arrive at a factor solution that balanced plausibility and parsimony (Fabrigar, Wegener, MacCallum, & Strahan, 1999).
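The eigenvalue-greater-than-one rule applied above can be shown in a few lines. This sketch uses the three eigenvalues reported in the text; it is illustrative only and is not how the full analysis was run.

```python
import numpy as np

# Eigenvalues for the first three factors, as reported in the text.
eigenvalues = np.array([8.055, 1.666, 0.869])

# Kaiser criterion: retain factors whose eigenvalues exceed one.
n_retained = int(np.sum(eigenvalues > 1.0))
print(n_retained)  # 2
```

As the text notes, this mechanical rule suggested two factors, but fit indices and interpretability ultimately favored three, which is why the rule served only as an initial guide.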

Methodologists (e.g., Costello & Osborne, 2005; Fabrigar et al., 1999) have indicated that when the number of factors to retain is unclear, conducting a series of analyses is appropriate. Therefore, two-, three-, and four-factor models were evaluated and compared to determine which model explained the data in the most parsimonious and interpretable fashion. The two-factor model was eliminated because it did not lend itself to meaningful interpretation, and the four-factor model was excluded because one of its factors was related to only one item, which is not recommended (Fabrigar et al., 1999). Models were evaluated based on model fit indices, item loadings above 0.40 (Kahn, 2006), and interpretability (Fabrigar et al., 1999).


The three-factor measurement model fit the data well (RMSEA = 0.052, SRMR = 0.036, CFI = 0.994, TLI = 0.988, χ2 = 173.802, df = 42, p < 0.001). As shown in Table 2, the standardized factor loadings were large, ranging from 0.58 to 0.97. The first factor included six items, which reflected students’ emotional self-control and their ability to maintain good social relationships in school (e.g., demonstrates resilience after setbacks and works collaboratively in groups of various sizes). This first factor was named “academic temperament.”
The second factor included three items. All of the items reflected the understanding that students have about their own abilities, values, preferences and skills (e.g., identifies academic strengths and abilities and identifies things the student is interested in learning). This second factor was named “self-knowledge.” The third factor included four items. All of the items reflected personal characteristics that help students succeed academically by focusing and maintaining energies on goal-directed activities (e.g., demonstrates an eagerness to learn and engages in class activities). This third factor was named “motivation.” The three-factor measurement model proved to have parsimony and interpretability.


The two-factor model did not fit the data as well as the three-factor model (RMSEA = 0.072, SRMR = 0.058, CFI = 0.985, TLI = 0.978, χ2 = 371.126, df = 53, p < 0.001). As shown in Table 2, the standardized factor loadings were large, ranging from 0.59 to 0.94. The first factor included seven items and conflated self-knowledge and motivation, constructs that are more appropriately differentiated for interpretability. Overall, the two-factor model provided relatively poor goodness-of-fit indices and interpretability.


The four-factor model fit the data slightly better than the three-factor model (RMSEA = 0.035, SRMR = 0.023, CFI = 0.998, TLI = 0.995, χ2 = 76.955, df = 32, p < 0.001). As shown in Table 2, the standardized factor loadings were large, ranging from 0.54 to 1.01. However, the first factor included only one item, and retained factors should include at least three items with salient loadings (Fabrigar et al., 1999), so this factor could not be retained. The second factor comprised six items that all relate to academic temperament, the third factor included four items reflecting motivation, and the fourth factor comprised three items related to self-knowledge. Although the four-factor model was strong in terms of goodness-of-fit indices, the single-item factor made it methodologically untenable. Given this series of analyses, the three-factor model was selected as the most appropriate.


Phase 2. First Confirmatory Factor Analysis

Complete descriptive statistics for the items are presented in Table 3. The responses for all items were dichotomous. A total of 569 (95.0%) of 599 surveys were completed and were used in the first CFA.





The three-factor measurement model provided good fit to the data (RMSEA = 0.059, CFI = 0.974, TLI = 0.984, χ2 = 104.849, df = 35, p < 0.001). Table 4 reports the standardized factor loadings, which can be interpreted as correlation coefficients, for the three-factor model. The standardized factor loadings were statistically significant (p < 0.001) and sizeable, ranging from 0.72 to 0.94. The large standardized factor loadings support convergent validity in that each indicator was primarily related to the respective underlying latent variable. Table 5 reports the correlation coefficients among the three latent variables. The correlation coefficients were less than the standardized factor loadings, thus supporting discriminant validity.




Phase 3. Second Confirmatory Factor Analysis

Complete descriptive statistics for the items are presented in Table 3. The responses for all items were dichotomous. A total of 564 (95.4%) of 591 surveys had all items complete and were used in the second CFA.


The second CFA was conducted on the three-factor measurement model to cross-validate the results of the first CFA. The three-factor model provided acceptable fit to the data (RMSEA = 0.055, CFI = 0.976, TLI = 0.983, χ2 = 100.032, df = 37, p < 0.001). Table 4 reports the standardized factor loadings, which can be interpreted as correlation coefficients, for the three-factor model. The standardized factor loadings were statistically significant and sizeable, ranging from 0.70 to 0.93, supporting convergent validity in that each indicator was primarily related to its underlying latent variable. Table 5 reports the correlation coefficients among the three latent variables; these were less than the standardized factor loadings, supporting discriminant validity. Given these results, the three-factor model appears to be the most reasonable solution.




Discussion

The ASCA National Model (2012) for school counseling programs underscores the value of using student achievement data to guide intervention planning and evaluation. This requires schools to find ways to collect valid and reliable information that provides a clear illustration of students’ skills in areas that are known to influence academic achievement. The purpose of developing the PFI was to identify and evaluate socio-emotional factors that relate to students’ academic success and emotional health, and to use the findings to inform the efforts of school counselors. The factor analyses in this study were used to explore how teachers’ ratings of students’ behavior on the 13-item PFI scale clustered around specific constructs that research has shown are connected to achievement and underlie many school counseling interventions. Because the scoring rubrics are organized into three grade levels (kindergarten and first grade, second and third grade, and fourth and fifth grade), the behaviors associated with each skill are focused at an appropriate developmental level. This level of detail allows teachers to respond to questions about socio-emotional factors in ways that are consistent with behaviors that students are expected to exhibit at different ages and grade levels.


Considering parsimony and interpretability, the EFA and both CFAs supported a three-factor model as the best fit for the data. Through the EFA, we compared two-, three-, and four-factor models; the three-factor model showed appropriate goodness-of-fit indices, item loadings, and interpretability, and the two CFAs cross-validated it. In this model, the fundamental constructs identified as associated with students’ academic behavior are “academic temperament,” “self-knowledge,” and “motivation.” “Self-knowledge” and “motivation” correspond to two of the four construct clusters identified by Squier et al. (2014) as critical socio-emotional dimensions related to achievement. The “academic temperament” items reflected either self-regulation skills or the ability to engage in productive relationships in school, whereas Squier et al. (2014) differentiated between the self-direction (including emotional self-regulation constructs) and relationship skills clusters.


Although not perfectly aligned, this factor structure of the PFI is consistent with the CBA model for clustering student competencies and corresponds to previous research on the links between construct-based skills and academic achievement. Teacher ratings on the PFI seemed to reflect their perceptions that self-regulation abilities and good relationship skills are closely related constructs. These results indicate that the PFI may be a useful instrument for identifying elementary students’ strengths and needs in terms of exhibiting developmentally appropriate skills that are known to influence academic achievement and personal well-being.


Utility of Results

The factor analyses conducted in this study suggest that the PFI yields meaningful data that can support data-based decision making and evaluation. The tool has implications for school counselors in their efforts to provide targeted support addressing the academic and socio-emotional needs of elementary school students. The PFI can be completed in conjunction with the academic report card and is minimally time-intensive for teachers. In addition to school-based applications, the socio-emotional information yielded is provided to parents along with their child’s academic report card. This has the potential to strengthen school–home connections, which could prove useful in engaging families in interventions, a practice known to be beneficial. Finally, the instrument can help school counselors identify struggling students; create small, developmentally appropriate groups based on specific needs; work with teachers to address student challenges that are prevalent in their classrooms; evaluate the success of interventions; advocate for program support; and share their work with district-level administrators. The PFI could also serve as an early warning indicator to identify students showing socio-emotional development issues that predispose them toward disengagement and underachievement.
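The early-warning use described above could be automated on top of the report card data. The sketch below is purely illustrative: the flagging threshold, the helper name, and the sample records are hypothetical choices, not part of the PFI or the district's system.

```python
def flag_students(records, threshold=0.25):
    """Flag students whose share of 'struggling' ratings exceeds a chosen
    threshold. records: dict mapping student -> list of 13 item ratings,
    each 'on target' or 'struggling'. Threshold is a hypothetical choice."""
    flagged = []
    for student, ratings in records.items():
        struggling = sum(r == "struggling" for r in ratings)
        if struggling / len(ratings) > threshold:
            flagged.append(student)
    return flagged

# Hypothetical records for two students.
records = {
    "Student A": ["struggling"] * 5 + ["on target"] * 8,   # 5/13, flagged
    "Student B": ["struggling"] * 2 + ["on target"] * 11,  # 2/13, not flagged
}
print(flag_students(records))  # ['Student A']
```

In practice, any such cutoff would need validation against outcome data before being used to route students to interventions, a point the limitations section returns to.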


The PFI also may prove useful as a school counseling evaluation measure. Changes on PFI items (and perhaps on subscales related to the three underlying dimensions identified in the present study) could be used as data in the evaluation of school counseling interventions and programs. Such evaluations would be tremendously facilitated by the availability of data that is both within the domain of school counselors’ work and that is known to be strongly related to achievement.


The findings offer promise in terms of practical implications for school personnel and parents. The analyses support “academic temperament,” “self-knowledge,” and “motivation” as factors foundational to school success. The results indicate that teachers’ ratings of students’ behavior align with the findings of existing research and, thus, that the instrument evaluates appropriate skills and constructs.


Implications for School Counselors

The PFI was developed as a data collection tool that could be easily integrated into schools for the purpose of assessing students’ development of skills that correspond to achievement-related constructs. Obtaining information about competencies that underlie achievement is critical for school counselors, who typically lead interventions that target such skills in an effort to improve academic outcomes. Many developmental school counseling curricula address skills that fall within the domains of “academic temperament,” “self-knowledge,” and “motivation” (see: for a complete list of socio-emotional learning programs). Teachers can complete the PFI electronically, at the same intervals as report cards and in a similarly user-friendly format. Therefore, the PFI facilitates communication between teachers and school counselors regularly throughout the school year. Counselors can use the data to identify appropriate interventions and to monitor students’ responsiveness to school counseling curricula over time and across settings. Although not included in this analysis, school counselors could also measure correlations between PFI competencies and achievement to demonstrate how academic outcomes are impacted by school counseling interventions and curricula.


Limitations and Further Study

Despite the promising findings of these factor analyses, further research is needed to confirm the results and to address the limitations of the present study. Additional studies are needed to confirm the reliability of PFI teacher ratings, and future research should explore inter-rater reliability. Further research also is needed to determine whether reliable and valid PFI subscales can be created based on the three dimensions found in the present study; analyses of test–retest reliability, construct validity, and subscale inter-correlations should be conducted to determine whether PFI subscales with adequate psychometric characteristics can be created. Subsequent studies should consider whether students identified by the PFI as being in need of intervention also are found by other measures to be in need of support. Another important direction for future research is to examine the relationships between teachers’ ratings of students’ socio-emotional skills on the PFI and the students’ academic performance. Establishing a strong link between the PFI and actual academic achievement is an essential step in documenting the potential utility of the index as a screening tool. Because this measure was developed to enhance data collection for data-based decision making, future research should also explore school counselors’ experiences with implementation, as well as qualitative reports on the utility of PFI results for informing programming.


Although the present study suggests that the PFI in its current iteration is practically useful, researchers may consider altering the tool in subsequent iterations. One possible revision involves changing the format from dichotomous ratings to a Likert scale, which would allow teachers to evaluate student behavior with greater specificity and would benefit subscale construction. Another possibility is refining the rubrics to improve the examples of student behavior that correspond to each rating and to ensure that each example accurately reflects expectations at each developmental level. Furthermore, most of the items on the current PFI examine externalizing behaviors, which raises the possibility that students who achieve at an academically average level, but who experience internalizing problems (such as anxiety), might not be identified for intervention. Subsequent iterations of the PFI could include additional areas of assessment, such as ratings of school behavior indicative of internalized challenges. Finally, it will be important to evaluate school counselors’ use of the PFI to determine whether it provides the information needed for program planning and evaluation in an efficient, cost-effective fashion, as intended.

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.





References

American School Counselor Association. (2012). The ASCA National Model: A framework for school counseling programs (3rd ed.). Alexandria, VA: Author.

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford.

Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for
getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), 1–9.

Dimmitt, C., Carey, J. C., & Hatch, T. (Eds.) (2007). Evidence-based school counseling: Making a difference with data-driven practices. Thousand Oaks, CA: Corwin.

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.

House, R. M., & Martin, P. J. (1998). Advocating for better futures for all students: A new vision for school
counselors. Education, 119, 284–291.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. doi:10.1080/10705519909540118

Kahn, J. H. (2006). Factor analysis in counseling psychology research, training, and practice: Principles, advances, and applications. The Counseling Psychologist, 34, 684–718. doi:10.1177/0011000006286347

Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford.

Martin, I., & Carey, J. (2014). Development of a logic model to guide evaluations of the ASCA National Model
for School Counseling Programs. The Professional Counselor, 4, 455–466. doi:10.15241/im.4.5.455

Muthén, B. O., du Toit, S. H. C., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Psychometrika, 75, 1–45.

Muthén, L. K., & Muthén, B. O. (1998–2007). Mplus user’s guide (5th ed.). Los Angeles, CA: Muthén & Muthén.

Poynton, T. A., & Carey, J. C. (2006). An integrative model of data-based decision making for school counseling. Professional School Counseling, 10, 121–130.

Schumacker, R. E., & Lomax, R. G. (2010). A beginner’s guide to structural equation modeling (3rd ed.). New York,
NY: Routledge.

Sink, C. A. (2009). School counselors as accountability leaders: Another call for action. Professional School Counseling, 13, 68–74. doi:10.5330/PSC.n.2010-13.68

Squier, K. L., Nailor, P., & Carey, J. C. (2014). Achieving excellence in school counseling through motivation, self-direction, self-knowledge and relationships. Thousand Oaks, CA: Corwin.

Wood, J. M., Tataryn, D. J., & Gorsuch, R. L. (1996). Effects of under- and overextraction on principal axis factor analysis with varimax rotation. Psychological Methods, 1, 354–365. doi:10.1037//1082-989X.1.4.354



Gwen Bass is a doctoral researcher at the Ronald H. Fredrickson Center for School Counseling Outcome Research at the University of Massachusetts. Ji Hee Lee is a doctoral student at Korea University in South Korea and a Center Fellow of the Ronald H. Fredrickson Center for School Counseling Outcome Research at the University of Massachusetts. Craig Wells is an Associate Professor at the University of Massachusetts. John C. Carey is a Professor of School Counseling and the Director of the Ronald H. Fredrickson Center for School Counseling Outcome Research at the University of Massachusetts. Sangmin Lee is an Associate Professor at Korea University. Correspondence can be addressed to Gwen Bass, School of Cognitive Science, Adele Simmons Hall, Hampshire College, 893 West Street, Amherst, MA 01002,