Sep 23, 2016 | Volume 6 - Issue 4
Maribeth F. Jorgensen, Kathleen Brown-Rice
The use of objective methods in gatekeeping processes has become increasingly important due to legal and ethical implications and consequences. For example, the medical field has utilized criminal background checks (CBCs) as a gatekeeping assessment of a student’s ability to best serve future patients. This article focuses on the current use of CBCs by master’s-level counselor education programs (N = 83) accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP). A significant implication from this study is the need for counselor education to consider best practices and guidelines for the use of CBCs.
Keywords: criminal background, criminal background checks, gatekeeping, counselor education, counseling programs
Counselor educators and supervisors are ethically bound to not endorse any counselor-in-training (CIT) for certification, licensure, employment or completion of an academic program when they believe a CIT is not qualified for the endorsement (American Counseling Association [ACA], 2014). In particular, educators are required to screen all counseling program applicants prior to admission and to continually and thoroughly evaluate and appraise students during their progression through the program (Erwin & Toomey, 2005). It has been suggested that utilizing criminal background checks (CBCs) with students should be part of the gatekeeping process in behavioral health programs (Brodersen, Swick, & Richman, 2009; Cowburn & Nelson, 2008; Erwin & Toomey, 2005). In fact, government agencies and private and public employers are increasing their use of CBCs as a screening mechanism (Sheets & Kappel, 2007). CBCs may be conducted to determine if an individual is a potential threat to clients, vulnerable populations or fellow employees. According to Sheets and Kappel (2007), “Because most consumers are not in the position to run CBCs . . . they depend on professional licensing boards to conduct appropriate screening of applicants” (p. 64). This could be a concern, however, because CITs work with clients while they are in their training program. Counseling programs that do not have access to CBC data may be left without critical information to help best protect vulnerable populations. Therefore, the responsibility of having CBC results might more appropriately fall on counselor educators (ACA, 2014).
All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands require a CBC for school counselors (American Counseling Association, Office of Public Policy and Legislation, 2011). According to ACA (2010), as of 2010 six states (i.e., Arizona, Maine, Mississippi, Missouri, Montana, Tennessee) required a CBC as part of the licensure application process. North Carolina requires applicants to sign a statement authorizing the licensing board to conduct a full criminal record search, including state and federal records (North Carolina Board of Licensed Professional Counselors, 2013). The state of Washington requires applicants to submit fingerprints as a means to perform a professional criminal background check. Given that passing a CBC is a criterion for certification or licensure for professional counselors in some jurisdictions, it seems important to examine if counselor education programs are utilizing CBCs as part of the admission process, student evaluation for CITs, and ultimately as a tool for gatekeeping.
Gatekeeping in the Field
According to Kerl and Eichler (2005), “In the field of counselor education, gatekeepers are the professionals whose responsibility it is to open or close the gates on the path toward becoming a counselor” (p. 74). The Council for Accreditation of Counseling and Related Educational Programs (CACREP) requires counseling programs to start the gatekeeping process at the onset of screening applicants for admission. Unfortunately, there is ambiguity about specific ways to gatekeep during the admission process, which may prompt inconsistencies among those operating as gatekeepers. Several studies have examined barriers to effective gatekeeping (Brear & Dorrian, 2010; Brodersen et al., 2009; Brown-Rice & Furr, 2014). Some of the barriers include a need to meet desired enrollment, inconsistent screening procedures, the likability effect, inadequate training on how to be a gatekeeper, social loafing, the leniency effect, and the empathy veil effect (Brear & Dorrian, 2010; Brown-Rice & Furr, 2014). These findings support the need to examine the current use of objective measures that may diminish some of these described obstacles.
Swank and Smith-Adcock (2014) examined the screening and gatekeeping methods used by 79 master’s- and doctoral-level CACREP-accredited counseling programs. Specifically, they asked programs about their use and perceived effectiveness of objective (e.g., grade point average [GPA]) and subjective (e.g., interviews) methods of gatekeeping during the admission process. The majority of surveyed programs placed higher weight on GPA and letters of recommendation during the admission process. Participants described their methods as inefficient and stressed the need to use consistent evaluation to reduce the impact of subjectivity. They also described a desire to use reliable assessments such as formal background checks to better assess psychological fit (Swank & Smith-Adcock, 2014).
Brear and Dorrian (2010) conducted a study examining how 63 counselor educators experienced their training and roles as gatekeepers. Their respondents indicated a commitment to being effective gatekeepers, but they had difficulty minimizing their subjectivity because of vague guidelines and written policies. Many participants stated that they observed other faculty being lenient and failing to capitalize on key moments when students displayed behaviors of concern. Brear and Dorrian suggested that programs use objective procedures for gatekeeping and provide ongoing training to help faculty better understand their gatekeeper roles and related policies.
Brown-Rice and Furr (2014) discussed the role empathy can play in the gatekeeping process. Ultimately, the authors suggested that counselor educators benefit from finding a balance between being empathic and evaluative in their roles. Brown-Rice and Furr described how empathy may affect the way counselor educators gatekeep and intervene with problematic behavior. They coined the term empathy veil effect and suggested that it is compounded by factors such as a lack of consistent standards across faculty, a lack of scholarly sources to refer to, and fears of legal retaliation by students. Although these factors have historically been barriers, the field of counselor education is at a critical point to establish well-documented, researched and supported screening procedures for potential CITs. This study aims to provide a greater description of how counseling programs currently use CBCs in the process of gatekeeping.
Criminal Background Checks
Literature searches revealed only one study that explored the use of CBCs by counseling programs (Erwin & Toomey, 2005). This is concerning given that some states require CBCs of school counselors and licensure candidates. Over 10 years ago, Erwin and Toomey (2005) conducted a study of 50 CACREP-accredited counseling programs to examine use of CBCs. Specifically, they sought to gather data about how counseling programs use criminal background checks and what resources are consulted when deciding how and when to use CBCs. At the time of their study and within their sample, five CACREP-accredited counseling programs were utilizing CBCs. Alarmingly, none of the programs that indicated use of CBCs answered the question about having established criteria to decide how criminal background check results are used.
Scholars within other human services fields have provided commentary or empirically explored the use of CBCs in their related training programs. Burns, Frank-Stromborg, Teytelman, and Herren (2004) wrote about the use of CBCs in the field of nursing. At the time of their commentary, most state nursing licensure boards made CBCs mandatory for nurses in order to practice. In contrast with nursing licensing boards, most nursing training programs had not made CBCs a requirement, owing to insufficient guidance on how to use the results of CBCs.
Farnsworth and Springer (2006) empirically investigated the use of CBCs by nursing programs. They surveyed 258 nursing schools from across the United States and found that fewer than 50% of the surveyed schools required background checks. Only 8% of the schools that conducted CBCs used them as a part of the admission process. For those that did obtain background checks, there was no standard way to process the results and no universal guidelines were available on how to interpret results. Farnsworth and Springer suggested that schools considering CBCs should seek legal counsel and communicate with other programs using CBCs. They also recommended programs require a criminal self-disclosure in addition to a background check to determine consistencies between self-disclosures and the results of CBCs (Farnsworth & Springer, 2006).
According to Kleshinski, Case, Davis, Heinrich, and Witzburg (2011), approximately 113 medical schools used background checks at the time of their commentary. Medical schools have benefitted from using CBCs by detecting patterns of behaviors that may impede a student’s ability to practice and best serve future patients. Kleshinski and colleagues found that common patterns across medical schools using CBCs included: (1) individually considering each situation by factoring in variables such as date and nature of offense; and (2) asking students about past criminal behaviors on admission applications. Importantly, there may be discrepancies between what students report on applications and what their CBCs show; therefore, solely relying on self-report could be problematic.
Within the field of sports science, Weuve, Martin, and White (2008) described many of the same concerns and uncertainties. They suggested that common reasons to conduct CBCs include “promotion of a safe school environment, protection of patients, clients, and student-athletes, because it is required of clinical facilities, and it enhanced student advisement and compliance with state or federal law” (Weuve et al., 2008, p. 28). These authors also speculated that programs may not conduct CBCs because of certain state and federal laws, fear of further marginalizing minorities, and minimal resources to inform the process. Although these suggestions and concerns seem to be well-conceptualized across fields, few studies have taken the next step to empirically examine these issues.
Based on previous literature, there is consistent concern with a lack of universal policies across graduate training programs related to the use of CBCs. Additionally, only one study has empirically investigated how often and in what ways CBCs are being used with counseling graduate school applications (Erwin & Toomey, 2005). Unfortunately, this study is outdated and may leave the field of counseling without adequate evidence-based support to enhance their gatekeeping processes.
Currently, when programs are deciding to use CBCs, they will find minimal information about key aspects such as what company or vendor to use when conducting CBCs; who is financially liable for the CBC; when a CBC should be required; how information from CBCs is used; how students are informed about CBCs; and how to decide if an offense is related to the counseling profession (Weuve et al., 2008). Counseling programs could be held liable for not conducting CBCs, especially if the safety of others is compromised. At the same time, counseling programs also could face liability for using CBCs when guidelines are unclear, applicants are not informed, and policies are not in place about how CBC results may be used.
Given the limited research on this issue, the purpose of this study was to determine how CACREP-accredited master’s programs are utilizing CBCs regarding applicants and current students. Specifically, the following research questions were addressed: (a) Do CACREP-accredited master’s programs require applicants to undergo a CBC? (b) What are the program’s procedures for performing the CBC of applicants? (c) Do programs have established protocols regarding how the results of CBCs affect applicants? (d) Do CACREP-accredited master’s programs require current students to undergo a CBC? (e) What are the program’s procedures for performing the CBCs of current students? (f) Do programs have established protocols regarding how the results of CBCs affect current students? and (g) What do CACREP program representatives believe are their legal and ethical obligations related to performing CBCs with applicants or current students?
Methodology
Participants and Procedures
Participants were the program contacts for the 270 CACREP-accredited master’s programs listed on the official CACREP Web site in summer of 2013. Due to the small size of this population, the entire population was sampled to provide the best approximation of the population’s true characteristics (Gay, Mills, & Airasian, 2009). Recruitment of participants was conducted via an e-mail to each program contact inviting them to participate in the study and including a link to an online survey. The sample size decreased due to invalid e-mail addresses, which resulted in the final sample of 261 CACREP-accredited program contacts. A total of 86 participants completed the survey; however, respondents with missing or invalid data (n = 3, less than 2%) were eliminated via listwise deletion, leaving a total number of 83 participants included in this study. Although there are multiple options for dealing with missing data, listwise deletion was used by eliminating participants with missing data on any of the variables in this study (Sterner, 2011). This resulted in a final response rate of 32%, which falls within the acceptable 30% response rate for online surveys (University of Texas at Austin, Division of Instructional Innovation and Assessment, 2011). Of the 83 program contacts who provided usable data, 29 indicated their programs were in the South, 28 indicated their program was in the Northeast, 17 stated their program was in the Midwest, and 9 indicated that their program was in the West. The majority of the participants reported that their programs offered degrees in both the clinical mental health/community track (84%) and the school track (83%). Further, 17% offered the marriage, couple, and family track, 13% offered the student affairs/college track, 6% had the addiction track, and 4% reported offering the career track to students. Table 1 provides a breakdown of specialty track programs offered by participants.
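The listwise-deletion step described above can be sketched in a few lines. The records and field names below are hypothetical, invented purely to illustrate the technique; they are not the study's actual data:

```python
# Listwise deletion: discard any respondent with a missing value on ANY
# study variable, keeping only complete cases (hypothetical records).
responses = [
    {"id": 1, "region": "South",   "requires_cbc": "yes"},
    {"id": 2, "region": None,      "requires_cbc": "no"},   # missing region
    {"id": 3, "region": "Midwest", "requires_cbc": None},   # missing answer
    {"id": 4, "region": "West",    "requires_cbc": "no"},
]

complete_cases = [r for r in responses
                  if all(v is not None for v in r.values())]
# Only respondents 1 and 4 have no missing values, so only they remain.
```

Listwise deletion is simple but discards an entire record for a single missing field, which is why it is usually reserved for cases where, as here, very few records are affected.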
The survey for the current study was designed based on the Criminal Background Check Survey developed by Erwin and Toomey (2005) related to admissions and CACREP-accredited programs performing CBCs. The 13 questions from the original Erwin and Toomey survey were used as a foundation for 30 questions that were created for the online survey utilized to gain information from CACREP-accredited program contacts. Participants were asked to identify if their programs required CBCs as part of admission to their program. Participants who responded in the affirmative then responded to six multiple choice items related to which specialty tracks required a CBC, type of CBC, who performs and pays for the CBC, how applicants are notified that the CBC is required, and whether the programs have established procedures for deciding non-admission based upon the results of the CBC. Further, two qualitative questions provided an opportunity to learn how CBC information is obtained and used.
Next, participants were asked to identify if their programs required CBCs of current students. Participants who responded in the affirmative then responded to seven multiple choice items related to which specialty tracks required the CBC, type of CBC, who performs and pays for the CBC, how applicants are notified that a CBC is required, at what time in the program CBCs are performed, and whether the programs have established procedures based upon the results of the CBC. Further, two qualitative questions requested information about how CBC information is used and protocols for removal of students. The final part of the survey consisted of 11 questions regarding ethical and legal issues (i.e., CBC required for certification, licensure, or employment as a professional counselor, privacy issues, client welfare, legal consequences of performing CBC, CACREP-standards, potential for screening out minority applicants and students). This section contained five multiple choice questions and six questions based on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree).
To establish content validity and reliability, a pilot study of the survey was completed. The pilot study included two former CACREP-accredited program contacts who were asked to look for clarity and conciseness of the survey questions and provide feedback and suggestions for improvement. Based upon the responses of the pilot participants, the survey was edited to provide a more conducive and efficient design.
Data Analysis
The Statistical Package for Social Sciences (SPSS) software (version 21) was utilized to screen and analyze the data. The participants’ responses to the survey questions were subjected to both descriptive and correlational analyses. First, a descriptive analysis of multiple choice responses was conducted to produce a set of summary statistics related to each of the seven research questions. Next, a Fisher’s Exact Test (a variant of a chi-square test for independence for small sample sizes) with an alpha level of .05 was used to determine if there was an association between the region of the country where participants’ programs were located and whether CBCs are required for applicants or current students.
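As a rough illustration of the test itself, a two-sided Fisher's Exact Test for a 2x2 table can be computed directly from the hypergeometric distribution. The counts in the usage example are invented for illustration only (the study's region variable actually had four levels, for which statistical packages such as SPSS extend the same logic to larger tables):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's Exact Test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table sharing the
    observed margins whose probability does not exceed that of the
    observed table.
    """
    r1, r2 = a + b, c + d            # row totals
    c1, n = a + c, a + b + c + d     # first-column total, grand total

    def p_table(k):                  # probability that cell (1,1) equals k
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)   # feasible values of cell (1,1)
    # Include every table at least as extreme (no more probable) than observed.
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts: programs requiring CBCs (yes/no) in one region
# versus all other regions combined.
p = fisher_exact_2x2(10, 19, 13, 41)
# A p-value above .05 would indicate no detectable association at alpha = .05.
```

Because the test enumerates exact table probabilities rather than relying on a large-sample approximation, it remains valid for the small expected cell counts that would invalidate an ordinary chi-square test, which is why it was the appropriate choice here.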
Results
Applicants and Criminal Background Checks
Regarding the first research question, of the 83 participants, 27.7% (n = 23) reported that their programs required applicants to undertake CBCs. Table 1 provides a breakdown of the specialty track that program contacts specified as requiring applicants to undergo CBCs. The Fisher’s Exact Test to determine an association between location of program and requiring applicants to have a CBC was found to be not significant (p = .426).
Table 1
Number and Percentages by Specialty Track and Criminal Background Required

                                     Offered by         Required for         Required of
                                     Program            Program Admission    Current Students
Specialty Track                      Yes       No       Yes       No         Yes       No
                                     n    %    n    %   n    %    n    %     n    %    n    %
Clinical Mental Health/Community    70  84.3  13 15.7  16  22.9  54  77.1   26  37.1  44  62.9
School                              69  83.1  14 16.9  15  21.7  54  78.3   33  47.8  36  52.2
Marriage, Couple, Family            14  16.9  69 83.1   2  14.3  12  85.7    9  64.3   5  35.7
Student Affairs/College             11  13.3  72 86.7   3  27.3   8  72.7    5  45.5   6  54.5
Addiction                            5   6.0  78 94.0   1   4.5   4  95.5    1   4.5   4  95.5
Career                               3   3.6  80 96.4   0   0.0   3 100.0    0   0.0   3 100.0
Procedures for applicants. Table 2 provides a breakdown of the type of CBCs performed, who performs the applicants’ CBCs, and who paid for the applicants’ CBCs. All programs that required CBCs informed students of the CBC through at least one avenue: 45% (n = 10) reported notice was given only via the program’s Web site; 18% (n = 4) said they gave notice via program Web site, verbal discussion (i.e., interview), and written correspondence (i.e., e-mail, letter, handbook); 14% (n = 3) stated they gave notice by written correspondence only; 9% (n = 2) gave notice by verbal discussion only; 9% (n = 2) gave notice by both program Web site and written correspondence; and 5% (n = 1) gave notice via both verbal and written notification. An open-ended format was used to learn about how programs use information from the applicants’ CBCs. Thirty-five percent (n = 8) of the participants shared that they used results in different ways depending on whether there was a criminal offense, the level of offense, and the date of offense. One participant reported their program uses the results to determine fit for their program and the counseling profession:
The nature of the crime and the time that has passed since then, and the applicant’s explanation (is it sincere, logical, etc.) will help faculty determine if the person will be considered or not. Also, we think about whether or not this person is likely to get certified as a school counselor or licensed as an LPC, or will be able to obtain liability insurance is all considered.
Established protocols for applicants. Regarding research question three, 59% (n = 13) of the 23 CACREP-accredited programs that reported requiring applicants to undergo CBCs had established procedures for deciding about the non-admission of an applicant based on the CBC results. Twenty-three percent (n = 5) reported that their program had not established procedures and 18% (n = 4) reported that they did not know if their program had a recognized policy. Thirty-nine percent (n = 9) of the participants shared that they used professional standards for deciding about the non-admission of an applicant. One participant described, “We would not accept an applicant who had a background inconsistent with our discipline, and we would not accept an applicant who would not be able to obtain a license.”
Table 2
Number and Percentages by CBC Procedures for Applicants and Current Students

                                                        Applicants    Current Students
                                                         n     %        n     %
Type of CBC Performed
  Local (i.e., city, county), state, and federal        10    45       14    37
  State                                                  3    14        5    14
  Federal                                                3    14        6    16
  State and federal                                      1     4        3     8
  Cities of residency over last 7 years and
    sex offender database                                2     9        0     0
  Did not know                                           3    14        6    16
Who Performed CBC
  Outside private independent agency                     8    36        7    19
  Program’s university/college                           7    32        6    16
  Government agency                                      6    27       19    52
  Multiple entities (i.e., state, federal,
    private agency)                                      0     0        2     5
  Did not know                                           1     4        3     8
Who Paid for CBC
  Separate fee to applicant/student                     17    77       33    89
  Applicant paid as part of their application fee        2     9        0     0
  University/college paid                                2     9        2     5
  No charge, university police department conducts       0     0        1     3
  Did not know                                           1     4        1     3
Current Students and Criminal Background Checks
Regarding research question four, of the 83 participants, 45% (n = 37) reported that their programs required current students to undertake CBCs. Table 1 provides a breakdown of the specialty track(s) that program contacts reported requiring students to undergo CBCs. The Fisher’s Exact Test to determine an association between location of program and requiring current students to have a CBC was found to be not significant (p = .500).
Procedures for current students. Table 2 provides a breakdown of the type of CBCs performed, who performs the current students’ CBCs, and who paid for the students’ CBCs. Further, two participants (5%) defined specific CBCs for certain specialty tracks: (a) state for all tracks plus federal for school students (3%, n = 1); and (b) state for college and marriage and family tracks, and state and federal for school students (3%, n = 1).
When asked when students’ CBCs are conducted, 35% (n = 13) reported it was before students are enrolled in internship, 27% (n = 10) reported during students’ first year, 19% (n = 7) reported before practicum, 8% (n = 3) reported before practicum and renewed for internship if the initial clearance was more than one year old, 5% (n = 2) reported during students’ second year, 3% (n = 1) reported at admission and then every two years after that, and 3% (n = 1) reported that CBCs are done every semester a student is enrolled in prepracticum, practicum, and internship. Participants reported various ways of letting students know that CBCs are a part of the program requirement. Twenty-seven percent (n = 10) reported that notice is given via the program’s handbook; 24% (n = 9) through orientation (i.e., new student, clinical), written correspondence (i.e., e-mail, letter), handbooks (i.e., program, clinical), and the program Web site; 19% (n = 7) only through a verbal discussion (i.e., orientation, interview); 14% (n = 5) via the program’s Web site only; 11% (n = 4) through multiple methods of orientation (i.e., new student, clinical), written correspondence (i.e., e-mail, letter), handbooks (i.e., program, clinical), and program Web sites; and 5% (n = 2) only via written correspondence (i.e., e-mail, letter, application).
Established protocols for current students. Sixty-eight percent (n = 25) of the 37 CACREP-accredited programs that reported requiring students to undergo CBCs had established protocols for deciding what action to take toward a student based on the CBC results. Twenty-seven percent (n = 10) reported that their program had not established a procedure and 5% (n = 2) reported that they did not know if their program had a recognized policy. Although 25 participants reported that their programs had established procedures, a few responses suggested processes might be informal. For example, one participant stated, “Nothing formal. We hold informal conversations amongst faculty.”
Legal and Ethical Obligations
The following information was collected to answer the final research question. Of the 83 participants, the majority (64%, n = 53) reported that licensure or certification was dependent upon a successful CBC for students who graduate from their programs. Twenty percent (n = 17) of the respondents indicated that passing a CBC was not necessary for licensure or certification, leaving 16% (n = 13) who did not know if licensure or certification was contingent on having a successful CBC. The majority (89%, n = 74) believed that it was the program’s obligation to notify students that CBCs can be required as part of certification, licensure or employment as a professional counselor; however, 5% (n = 4) believed it was not the program’s responsibility and 6% (n = 5) reported they did not know. Eighty-seven percent (n = 72) reported that their programs notified students that a CBC may be required to obtain certification, licensure or employment, leaving 13% (n = 11) of the programs saying they did not notify their students. When program contacts (n = 72) were asked how students are notified of this, 34% (n = 25) stated during orientation, 25% (n = 18) provided this information during the application process, 14% (n = 10) reported the information is continually given throughout the program (i.e., admission, orientations, before field placements), 10% (n = 7) stated the information was shared sometime during the first year of the program, 3% (n = 2) provided the information during field placement orientation for practicum and internship, 3% (n = 2) indicated information is given via student handbook, and 7% (n = 5) reported information was given via other means (i.e., during field placement discussions, when students apply for licensure due to licensure requirements varying by state).
When program contacts were asked if they believed it is ethical for their programs to perform CBCs on applicants or students, 41% (n = 34) believed it was ethical to perform CBCs on applicants and students, 29% (n = 24) felt it was not ethical for applicants or students, 19% (n = 16) responded it was ethical only for current students, and 4% (n = 2) said it was ethical only for applicants. Eight percent (n = 7) responded to this question by providing an alternate response.
All participants’ (n = 83) responses for strongly agree and agree were combined to report the subsequent findings. Sixty-six percent (n = 55) believed that counseling programs’ use of CBCs on applicants and students is important to ensure future clients’ welfare and safety. When asked if counseling programs completing CBCs on applicants and students violate the privacy rights of applicants and students, 17% (n = 14) either agreed or strongly agreed that it did not. Thirty-six percent (n = 30) believed that counseling programs can face legal consequences if CBCs are not conducted on applicants or students. Further, 24% (n = 20) responded that they believed that counseling programs can face legal consequences by performing CBCs on applicants or students. Thirty-three percent (n = 27) believed that there should be a CACREP standard regarding CBCs of applicants and students to ensure consistency and provide an established protocol. When asked if performing CBCs on applicants and students will result in a disproportionate screening-out of minority applicants and students, only 14% (n = 12) believed it would.
Discussion
There were two primary aims of this study: (1) to assess the current use of CBCs by CACREP-accredited master’s counseling programs and (2) to offer current information for programs to reference when considering the use of CBCs and creating relevant policies. Within the field of counseling, few studies have explored the use of CBCs and related policies (Erwin & Toomey, 2005; Swank & Smith-Adcock, 2014). As previously noted, only 50 programs responded to Erwin and Toomey’s (2005) study, and only five of those programs used CBCs, which limited the utility of their findings. Swank and Smith-Adcock (2014) surveyed counselor educators about the effectiveness of their current screening procedures for applicants. Their participants reported wanting to use more reliable and objective methods such as background checks, but were unsure how to do so given the minimal guidance in the literature.
In the present study, 27.7% (n = 23) of respondents reported requiring applicants to undertake CBCs. Although this may seem like a small portion of the sample, it still offers the field knowledge that can augment findings by Erwin and Toomey (2005). This result is not surprising given that there are so few guidelines for programs to use when considering CBCs as a screening and gatekeeping tool. The use of CBCs also remains underdeveloped in other fields such as nursing, medicine and sports science (Farnsworth & Springer, 2006; Kleshinski et al., 2011; Weuve et al., 2008). In fact, Farnsworth and Springer (2006) reported that fewer than 50% of the nursing programs they surveyed reported using CBCs. They found this extremely concerning as the field of nursing requires all graduates to pass a CBC in order to become licensed. This is a related issue for those wanting to become a licensed mental health counselor, as 17 states report requiring an applicant to pass a CBC in order to become licensed. All states that do not require CBCs ask applicants to describe any criminal offenses on their applications and provide further documentation when necessary.
Although 41% of the participants surveyed in the present study reported the use of CBCs as ethical, this finding did not correspond with actual use of CBCs (27.7%). One factor may be fear of potential liability when using CBCs. In Swank and Smith-Adcock’s (2014) study, participants, who were counselor educators, stated that they would like to use background checks but felt hesitant due to the litigation that can come with such methods. These fears may be exacerbated by the fact that the use of CBCs is not universal across university programs and there may be little knowledge about how to seek out university lawyers when developing these requirements. At this time, most university guidelines around CBCs focus on use with employees (Swank & Smith-Adcock, 2014). Weuve et al. (2008) described that lack of guidance and misuse of results continue to keep graduate programs from using CBCs. In the present study, only 13 of the 23 programs that reported using CBCs had an established procedure for how to use the results. Ultimately, since few resources are available to assist in these decision-making processes, it would be important for programs to seek university counsel. For example, it would be important to seek legal counsel when deciding how requirements and standards should read on program Web sites, how to use the results, and how to inform students about the use of CBC results.
It also is important to consider related liability issues such as faculty subjectivity. Previous research indicated that faculty subjectivity may interfere with gatekeeping fidelity (Brear & Dorrian, 2010). In the current study, only 13 participants reported that their program had an established procedure for deciding about the non-admission of applicants based on CBC results. When procedures are not in place, there may be greater potential for phenomena such as the empathy veil effect, the leniency effect or the likability effect. Such phenomena may prompt some faculty to look the other way if they are not held accountable to a specific policy.
This research also has implications for counseling students. Given that not all programs execute CBCs, students may not understand the consequences of their legal violations until seeking licensure. Currently, 17 state licensing boards require CBCs and all states ask applicants to attest to criminal violations (ACA, 2010). There is potential for a student to get through his or her training program and then be ineligible for licensure due to his or her criminal background. A need exists to consider how CBCs may help students gatekeep themselves and become more conscious of barriers that may ultimately interfere with their professional goals.
Limitations and Areas for Future Research
This study has five basic limitations. First, the sample was obtained from program contacts of CACREP-accredited master's counselor education programs, an approach that omitted programs without CACREP accreditation; generalizability of the results is therefore limited to CACREP-accredited programs. Second, this study did not delineate whether the programs were housed in private or public institutions; future research investigating all professional counseling programs would be beneficial. Third, volunteers may have answered the survey questions differently than those members of the population who did not agree to participate (70%). Fourth, the survey was a self-report measure, and some participants may have provided responses they considered socially desirable. Even though participants were informed in advance that their responses would be kept anonymous, they may have responded in a manner that was not representative of their true feelings or knowledge. The final limitation is related to instrumentation. The findings could have been expanded by including survey questions about consequences programs have experienced when using or not using CBCs. For example, have any programs been sued for using or not using CBCs?
Given the minimal amount of research in this area, there are multiple directions for future research. One suggestion is to qualitatively explore programs that have used CBCs for several years to gain a more thorough understanding of how their processes have evolved. This may help programs understand the elements to consider when using CBCs as part of the screening and gatekeeping processes. It also may help programs understand how to protect themselves from liability concerns related to using CBCs. Another future study might survey doctoral-level counseling programs to examine differences across training levels. Further research could examine student perspectives on the use of CBCs. Students might welcome the use of CBCs at the program level so that they are aware of legal standards at the start of pursuing a professional counselor license.
Conclusion
Because screening and gatekeeping are such important roles of a training program, the use of CBCs is an important topic for counselor education. The use of CBCs may assist counselor educators in meeting their ethical obligation not to endorse CITs they believe to be unqualified (ACA, 2014). The consequences of graduating a student with a criminal history could be severe and ultimately put future clients at risk for harm. Perhaps CACREP could assist programs in understanding whether and how to use CBCs by adding best practices to its accreditation standards. Previous literature has indicated that the field of counseling may benefit from creating more formalized screening procedures that include objective and reliable measures (Swank & Smith-Adcock, 2014). The current study offers evidence that programs are using CBCs as part of the admission process and to continually evaluate their students. Given this trend, it is important to establish best practices and policies around CBCs so that programs use them in consistent ways.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
American Counseling Association. (2010). Licensure requirements for professional counselors – 2010. Retrieved from http://www.counseling.org/docs/licensure/72903_excerpt_for_web.pdf?sfvrsn=2
American Counseling Association. (2014). 2014 ACA code of ethics. Alexandria, VA: Author.
American Counseling Association, Office of Public Policy and Legislation. (2011). A guide to state laws and regulations on professional school counseling. Retrieved from https://www.counseling.org/docs/licensure/schoolcounselingregs2011.pdf?sfvrsn=2
Brear, P., & Dorrian, J. (2010). Gatekeeping or gate slippage? A national survey of counseling educators in Australian undergraduate and postgraduate academic training programs. Training and Education in Professional Psychology, 4, 264–273. doi:10.1037/a0020714
Brodersen, M., Swick, D., & Richman, J. (2009). Risks and mitigating factors in decisions to accept students with criminal records. Journal of Social Work Education, 45, 349–363. doi:10.5175/JSWE.2009.200800081
Brown-Rice, K., & Furr, S. (2014). Lifting the empathy veil: Engaging in competent gatekeeping. In Ideas and research you can use: VISTAS 2012. Retrieved from https://www.counseling.org/docs/default-source/vistas/article_11.pdf?sfvrsn=12
Burns, K., Frank-Stromborg, M., Teytelman, Y., & Herren, J. D. (2004). Criminal background checks: Necessary admission criteria? Journal of Nursing Education, 43, 125–129.
Cowburn, M., & Nelson, P. (2008). Safe recruitment, social justice, and ethical practice: Should people who have criminal convictions be allowed to train as social workers? Social Work Education, 27, 293–306. doi:10.1080/02615470701380394
Erwin, W. J., & Toomey, M. E. (2005). Use of criminal background checks in counselor education. Counselor Education and Supervision, 44, 305–318. doi:10.1002/j.1556-6978.2005.tb01758.x
Farnsworth, J., & Springer, P. J. (2006). Background checks for nursing students: What are schools doing? Nursing Education Perspectives, 27, 148–153.
Gay, L. R., Mills, G. E., & Airasian, P. (2009). Educational research: Competencies for analysis and applications (9th ed.). Upper Saddle River, NJ: Pearson Education.
Kerl, S., & Eichler, M. (2005). The loss of innocence: Emotional costs to serving as gatekeepers to the counseling profession. Journal of Creativity in Mental Health, 1, 71–88. doi:10.1300/J456v01n03_05
Kleshinski, J., Case, S. T., Davis, D., Heinrich, G. F., & Witzburg, R. A. (2011). Commentary: Criminal background checks for entering medical students: History, current issues, and future considerations. Academic Medicine, 86, 795–798. doi:10.1097/ACM.0b013e31821db0ab
North Carolina Board of Licensed Professional Counselors. (2013). Licensed Professional Counselor. Retrieved from http://www.ncblpc.org/application-info/lpc
Sheets, V., & Kappel, D. M. (2007). The case for criminal background screening: Informed licensure decision making. JONA's Healthcare Law, Ethics, and Regulation, 9(2), 64–67. doi:10.1097/01.NHL.0000277201.16864.71
Sterner, W. R. (2011). What is missing in counseling research? Reporting missing data. Journal of Counseling & Development, 89, 56–62. doi:10.1002/j.1556-6678.2011.tb00060.x
Swank, J. M., & Smith-Adcock, S. (2014). Gatekeeping during admissions: A survey of counselor education programs. Counselor Education and Supervision, 53, 47–61. doi:10.1002/j.1556-6978.2014.000048.x
University of Texas at Austin, Division of Instructional Innovation and Assessment. (2011). Conduct research: Response rates. Retrieved from https://facultyinnovate.utexas.edu/sites/default/files/response_rates.pdf
Weuve, C., Martin, M., & White, G. (2008). Criminal background checks part 2: Implications for education. Athletic Therapy Today, 13(5), 27–30.
Maribeth F. Jorgensen, NCC, is an Assistant Professor at the University of South Dakota. Kathleen Brown-Rice, NCC, is an Assistant Professor at the University of South Dakota. Correspondence can be addressed to Maribeth Jorgensen, 414 East Clark Street, Vermillion, SD 57069, maribeth.jorgensen@usd.edu.
Sep 16, 2016 | Article, Volume 6 - Issue 3
Christopher A. Sink
With the advent of a multi-tiered system of supports (MTSS) in schools, counselor preparation programs are once again challenged to further extend the education and training of pre-service and in-service school counselors. To introduce and contextualize this special issue, an MTSS’s intent and foci, as well as its theoretical and research underpinnings, are elucidated. Next, this article aligns MTSS with current professional school counselor standards of the American School Counselor Association’s (ASCA) School Counselor Competencies, the 2016 Council for Accreditation of Counseling and Related Educational Programs (CACREP) Standards for School Counselors and the ASCA National Model. Using Positive Behavioral Interventions and Supports (PBIS) and Response to Intervention (RTI) models as exemplars, recommendations for integrating MTSS into school counselor preparation curriculum and pedagogy are discussed.
Keywords: multi-tiered system of supports, school counselor, counselor education, American School Counselor Association, Positive Behavioral Interventions and Supports, Response to Intervention
When new educational models are introduced into the school system that affect school counseling practice, the training of pre-service and in-service school counselors needs to be updated. A multi-tiered system of supports (MTSS) is one such innovation requiring school counselors to further refine their skill set. In fact, during the school counseling profession’s relatively short history, counselors have experienced several major shifts in foci and best practices (Gysbers & Henderson, 2012). The latest movement surfaced in the 1980s, when school counselors were encouraged to revisit their largely reactive, inefficient and ineffective practices. Specifically, rather than supporting a relatively small proportion of students with their vocational, educational and personal-social goals and concerns, pre-service and in-school practitioners, under the aegis of a comprehensive school counseling program (CSCP) orientation, were called to operate in a more proactive and preventative fashion.
Although there are complementary frameworks to choose from, the American School Counselor Association’s (ASCA; 2012a) National Model: A Framework for School Counseling Programs emerged as the standard for professional practice, offering K–12 counselors an operational scaffold to guide their activities, interventions and services. Preliminary survey research suggests that counselors are performing their duties in a more systemic and collaborative fashion to more effectively serve students and their families (Goodman-Scott, 2013, 2015). Other rigorous accountability research examining the efficacy of CSCP practices supports this transformation of counselors’ roles and functions (Martin & Carey, 2014; Sink, Cooney, & Adkins, in press; Wilkerson, Pérusse, & Hughes, 2013). As a consequence of the increased demand for retraining, university-level counselor preparation programs and professional counseling organizations (e.g., American Counseling Association, ASCA, National Board for Certified Counselors) have generally responded in kind. Over the last few decades, K–12 school counselors have been instructed to move from a positional approach to their professional work to one that is programmatic and systemic in nature.
As mentioned above, the implementation of MTSS (e.g., Positive Behavioral Interventions and Supports [PBIS] and Response to Intervention [RTI] frameworks) in the nation’s schools requires in-service counselors to augment their collaboration and coordination skills (Shepard, Shahidullah, & Carlson, 2013). Essentially, MTSS programs are evidence-based, holistic, and systemic approaches to improve student learning and social-emotional-behavioral functioning. They are largely implemented in educational settings using three tiers or levels of intervention. In theory, all educators are involved at differing levels of intensity. For example, classroom teachers and teacher aides are the first line (Tier 1) of support for struggling students. As the need arises, other more “specialized” staff (e.g., school psychologists, special education teachers, school counselors, addictions counselors) may be enlisted to provide additional and more targeted student interventions and support (Tiers 2 or 3). Even though ASCA (2014) released a position statement broadly addressing school counselors’ roles and functions within MTSS schools, research is equivocal as to whether these practitioners are implementing these directives with any depth and fidelity (Goodman-Scott, 2015; Goodman-Scott, Betters-Bubon, & Donahue, 2016; Ockerman, Mason, & Hollenbeck, 2012; Ockerman, Patrikakou, & Feiker Hollenbeck, 2015). Moreover, school counselor effectiveness with MTSS-related responsibilities remains an open question.
To sufficiently answer these accountability questions, there is a pressing need for university preparation programs to better educate nascent school counselors on MTSS, particularly on the fundamentals and effective ways PBIS and RTI can be accommodated within the purposes and practices of CSCPs (Goodman-Scott et al., 2016). While educational resources and research are plentiful, they are chiefly aimed at pre-service and in-service teachers and support staff working closely with special education students, such as school psychologists (Forman & Crystal, 2015; Owen, 2012; Turnbull, Bohanon, Griggs, Wickham, & Sailor, 2002). Albeit informative, nearly all school counselor MTSS research and application publications are focused on in-service practitioners (ASCA, 2014; de Barona & Barona, 2006; Donohue, 2014; Goodman-Scott, 2013; Martens & Andreen, 2013; Ockerman et al., 2012; Ryan, Kaffenberger, & Carroll, 2011; Shepard et al., 2013; Zambrano, Castro-Villarreal, & Sullivan, 2012). With perhaps the exception of Goodman-Scott et al. (2016), who provided a useful alignment of the ASCA National Model (2012a) with PBIS practices, there are few evidence-based resources for school counselor educators to draw upon in order to rework their pre-service courses to include MTSS curriculum and instruction. To successfully prepare counselors to work within PBIS or RTI schools, students must understand the ways MTSS foci are aligned with professional counseling standards for practice. Such a document is noticeably absent from the literature.
The primary intent of this article is to offer school counselor educators functional and literature-based recommendations to enhance their MTSS training of pre-service counselors. To do so, MTSS programs are first contextualized by summarizing their major foci, operationalization, theoretical underpinnings and research support. Next, the objectives of MTSS models are aligned with the ASCA (2012b) School Counselor Competencies and the 2016 CACREP Standards for School Counselors. Finally, using PBIS and RTI models as exemplars, recommendations for school counselor preparation curriculum and pedagogy are offered.
Foundational Considerations
Since MTSS programs are extensively described in numerous publications (e.g., Bradley, Danielson, & Doolittle, 2007; Carter & Van Norman, 2010; Forman & Crystal, 2015; R. Freeman, Miller, & Newcomer, 2015; Fuchs & Fuchs, 2006; Horner, Sugai, & Lewis, 2015; McIntosh, Filter, Bennett, Ryan, & Sugai, 2010; Sandomierski, Kincaid, & Algozzine, 2007; Sugai & Simonsen, 2012), including articles in this special issue, there is little need to reiterate the details here. However, for those school counselor educators and practitioners who are less conversant with MTSS’s theoretical grounding, research evidence and operational characteristics supporting implementation, these topics are overviewed.
MTSS programs by definition are comprehensive and schoolwide in design, accentuating the importance of graduated levels of student support. In other words, the amount of instructional and behavioral support gradually increases as the student’s assessed needs become more serious. Although the most prominent and well-researched MTSS approaches, PBIS and RTI, are considered disparate frameworks to address student deficits (Schulte, 2016), the extent of their overlap in theoretical principles, foci, processes and practices allows for an abbreviated synthesis (R. Freeman et al., 2015; Sandomierski et al., 2007; Stoiber & Gettinger, 2016).
Initially, RTI and PBIS programming and services emerged from special education literature and best practices. Over time, these evidence-based approaches extended their reach and now serve the entire student population. Specifically, PBIS aims to increase students’ prosocial behaviors and decrease their problem behaviors as well as promote positive and safe school climates, benefitting all learners (Bradley et al., 2007; Carter & Van Norman, 2010; Klingner & Edwards, 2006). Although RTI programs also address students’ behavioral issues, they largely focus on improving the academic development and performance of all children and youth through high-quality instruction (Turse & Albrecht, 2015; Warren & Robinson, 2015). RTI staff are particularly concerned with those students who are academically underperforming (Greenwood et al., 2011; Johnsen, Parker, & Farah, 2015; Ockerman et al., 2015; Sprague et al., 2013). Curiously, the potential roles and functions of school counselors within these programs were not delineated until many years after they were first introduced (Warren & Robinson, 2015). Even at this juncture, often-cited MTSS publications neglect discussing school counselors’ contributions to full and effective implementation (Carter & Van Norman, 2010). Instead they frequently refer to behavior specialists as key members of the MTSS team (Horner, Sugai, & Anderson, 2010).
MTSS Theory and Research
PBIS and RTI model authors and scholars consistently implicate a range of conceptual orientations, including behaviorism, organizational behavior management, scientific problem-solving, systems thinking and implementation science (Eber, Weist, & Barrett, n.d.; Forman & Crystal, 2015; Horner et al., 2010; Kozleski & Huber, 2010; Sugai & Simonsen, 2012; Sugai et al., 2000; Turnbull et al., 2002). It appears, however, that behavioral principles and systems theory are most often credited as MTSS cornerstones (Reschly & Cooloong-Chaffin, 2016). Since PBIS and RTI are essentially special education frameworks, it is not surprising that behaviorist constructs and applications (e.g., reinforcement, applied experimental behavior analysis, behavior management and planning, progress monitoring) are regularly cited (Stoiber & Gettinger, 2016). Furthermore, MTSS frameworks are in concept and practice system-wide structures (i.e., student-centered services, processes and procedures that are instituted across a school or district), and as such, holistic terminology consistent with Bronfenbrenner’s bioecological systems theory and other related systems orientations (e.g., Bertalanffy’s general systems theory and Henggeler and colleagues’ multi-systemic treatment approach) is commonly cited (see Reschly & Cooloong-Chaffin, 2016, and Shepard et al., 2013, for examples of extensive discussions).
MTSS research largely demonstrates the efficacy of PBIS and RTI models. For instance, Horner et al. (2015) conducted an extensive analysis of numerous K–12 PBIS studies, concluding that this systems approach is evidence-based. Other related literature reviews indicated that PBIS frameworks are at least modestly serviceable in preschools (Carter & Van Norman, 2010), K–12 schools (Horner et al., 2010; Molloy, Moore, Trail, Van Epps, & Hopfer, 2013), and juvenile justice settings (Jolivette & Nelson, 2010; Sprague et al., 2013). Across most studies, PBIS programming yields weak to moderately positive outcomes for PK–12 students from diverse backgrounds (e.g., African American and Latino) and varying social and academic skill levels (Childs, Kincaid, George, & Gage, 2015; J. Freeman et al., 2015, 2016). Similarly, evaluations of RTI interventions are promising for underachieving learners (Bradley et al., 2007; Fuchs & Fuchs, 2006; Greenwood et al., 2011; Proctor, Graves, & Esch, 2012; Ryan et al., 2011). Students tend to especially benefit from Tier 2 and 3 interventions. In their entirety, PBIS and RTI models are modestly successful frameworks to identify students at risk for school-related problems and ameliorate social-behavioral and academic deficiencies. It should be noted, however, that the long-term impact of MTSS on students’ social-emotional outcomes remains equivocal (Saeki et al., 2011). As mentioned previously, there is a paucity of evidence demonstrating that school counselors indirectly or directly contribute to positive MTSS outcomes. As with any relatively new educational innovation, research is needed to further clarify the specific impacts of MTSS on student, family, classroom and school outcome variables. The next section summarizes the ways MTSS frameworks are viewed and instituted in school settings.
Operational Features
For school counselors to be effective MTSS leaders and educational partners, they must understand the conceptual underpinnings and operational components and functions of PBIS and RTI frameworks. Given the introductory nature of this article, we limit our discussion to essential characteristics of these frameworks. Extensive practical explanations of MTSS models abound in the education (R. Freeman et al. 2015; Preston, Wood, & Stecker, 2016; Turse & Albrecht 2015) and school counseling literature (Goodman-Scott et al., 2016; Ockerman et al., 2012, 2015). To reiterate, MTSS frameworks are designed to be systems or ecological approaches to assisting students with their educational development and improving academic and behavioral outcomes. As described below, they attempt to serve all students through graduated layers of more intensive interventions. School counselors deliver, for example, evidence-based services to students, ranging from classroom and large group interventions to those provided to individual students in the counseling office (Forman & Crystal, 2015). By utilizing systematic problem-solving strategies and behavioral analysis tools to guide effective practice (Sandomierski et al., 2007), students who are most at risk for school failure and behavioral challenges are provided with more individualized interventions (Horner et al., 2015).
Practically speaking, MTSS processes and procedures vary from school to school, district to district. To understand how these frameworks are operationalized, there are numerous online school-based case studies to review. For instance, at the PBIS.org Web site, Ross (n.d.), the principal at McNabb Elementary (KY), overviewed the ways a PBIS framework was effectively implemented at his school. Most importantly, the reach of PBIS programming was expanded to all students, requiring a higher level of educator collaboration and “buy in.” Other pivotal changes were made, including (a) faculty and staff visits to students’ homes (i.e., making closer “positive connections”); (b) the implementation of summer programs for student behavioral and academic skill enrichment; (c) additional school community engagement activities (e.g., movie nights, Black History Month Extravaganza); and, (d) further PBIS training to improve school discipline and classroom management strategies. Other MTSS schools stress the importance of carefully identifying students in need of supplemental services and interventions using research-based assessment procedures (e.g., functional behavioral analysis or functional behavioral assessment [FBA]). Most schools emphasize these key elements to successful schoolwide PBIS implementation: (a) data-based decision making, (b) a clear and measurable set of behavioral expectations for students, (c) ongoing instruction on behavioral expectations, and (d) consistent reinforcement of appropriate behavior (PBIS.org, 2016).
Furthermore, MTSS frameworks, such as PBIS and RTI, have two main functions. First, they offer an array of activities and services (prevention- and intervention-oriented) that are systematically introduced to students based on an established level of need. Second, educators carefully consider the learning milieu, particularly as it may influence the development and improvement of student behavior (social and emotional learning [SEL] and academic performances). MTSS staff must be well educated on the signs of student distress, including those indicators that suggest students are at risk for school-related difficulties (e.g., below grade level academic achievement, social and emotional challenges, mental health disorders, long-term school failure). Moreover, educators should be provided appropriate training on various assessment tools to determine which set of students require more intensive care.
Within a triadic support system, all students (Tier 1: primary or universal prevention) are at least monitored and assisted by classroom staff. Teachers are encouraged to document student progress (or lack thereof) toward academic and behavioral goals. At the first level, school counselors partner with other building educators to conduct classroom activities and guidance to promote academic success, SEL (e.g., prosocial behaviors), and appropriate school behavior (Donohue, 2014). Counselors also may assist with setting behavioral expectations for students, suggest differentiated instruction for academic issues, collect data for program decision making, and conduct universal screening of students in need of additional behavior support (Horner et al., 2015). In short, the aim of Tier 1 is to (a) support all student learning and (b) proactively recognize individuals displaying the warning signs of learning or social and behavioral challenges.
Once the signals of educational or behavioral distress become more pronounced, relevant staff may initiate a formal MTSS process. For example, in many states and school districts, within the context of an MTSS, the struggling learner becomes a “focus of concern” and a multidisciplinary or school support team is convened (Kansas MTSS, 2011). Panel members are generally comprised of the school psychologist, administrator, counselor and relevant teachers. Counselors may be asked to collaborate with other educators to appraise the student’s learning environments. If potential hindrances are detected, these must be sufficiently attended to before further educational intervention is provided. Once the determination is made that the “targeted” learner received high-quality academic and behavioral instruction, and yet continues to exhibit deficiencies, the student is considered for Tier 2 services (Horner et al., 2015). School counselor tasks at this level may include providing evidence-based classroom interventions, short-term individual or group counseling, progress monitoring and regular school–home communication. Other sample interventions might involve the application of a behavior modification plan, the assignment of a peer mentor and tutoring system, and the utilization of “Check and Connect” (Maynard, Kjellstrand, & Thompson, 2013) or Student Success Skills (Lemberger, Selig, Bowers & Rogers, 2015) programs.
In most cases, identified students make at least modest progress at Tier 2 and do not require tertiary intervention. Even so, a small percentage of students receive Tier 3 services involving, for example, a comprehensive FBA, additional linking of academic and behavioral supports, and more specialized attention (Horner et al., 2015). School counselor support at this level commonly incorporates and extends beyond Tier 2 services. Ongoing consultation with and referrals to community-based professionals (e.g., learning experts, marriage and family counselors, child psychiatrists, and clinical psychologists) and out- or in-patient treatment facilities may be necessary.
In summary, the essential focus of collaborative MTSS programming is to improve student performance by first carefully assessing student strengths and weaknesses. Once these characteristics are identified, the MTSS team, with input from the school counseling staff, develops learning outcomes and, as required, may institute whole-school, classroom, or individual activities and services to best address lingering student deficiencies. As such, counselors should be significant partners with other appropriate staff to deliver the needed assistance and support (e.g., assign a peer mentor, provide individual or group counseling, institute a behavior management plan) to address students’ underdeveloped academic or social-emotional and behavioral skills. To close the MTSS loop, follow-up assessment of student progress toward designated learning and behavioral targets is regularly conducted by teachers with assistance from counselors and other related specialists. Based on the evaluation results, further interventions may be prescribed. School counselors therefore contribute essential MTSS services at each tier, promoting through their classroom work, group counseling and individualized services a higher level of student functioning. Regrettably, anecdotal evidence and survey research suggest that many are ill-equipped to conduct the requisite prevention and intervention activities (Ockerman et al., 2015). The following sections attempt, in part, to rectify this situation.
Alignment of MTSS With Professional School Counselor Standards and Practice
Before considering the implications for pre-service school counselor preparation, school counselors and university-level counselor educators may benefit from understanding the ways in which MTSS school counselor-related roles and functions are consistent with the preponderance of the ASCA (2012b) School Counselor Competencies and CACREP (2016) School Counseling Standards. Because there are so few publications documenting school counselor roles and functions within MTSS frameworks, a standards crosswalk, or matrix, was developed to fill this need (see Table 1). It should be noted that the ASCA competencies and CACREP standards are largely consistent with the National Board for Professional Teaching Standards’ (National Board; 2012) School Counseling Standards for School Counselors of Students Ages 3–18+. As such, the National Board standards were not included in the table.
Table 1
Crosswalk of Sample School Counselor MTSS Roles and Functions, ASCA (2012b) School Counselor Competencies, and CACREP (2016) School Counseling Standards
| MTSS School Counselor Roles and Functions* | ASCA School Counselor Competencies | CACREP Section 5: Entry-Level Specialty Areas – School Counseling (1. Foundations; 2. Contextual Dimensions; 3. Practice) |
|---|---|---|
| | **I. School Counseling Programs – B: Abilities & Skills** | |
| Shows strong school leadership | I-B-1c. Applies the school counseling themes of leadership, advocacy, collaboration and systemic change, which are critical to a successful school counseling program | 2.d. school counselor roles in school leadership and multidisciplinary teams |
| | I-B-2. Serves as a leader in the school and community to promote and support student success | |
| Collaborates and consults with relevant stakeholders | I-B-4. Collaborates with parents, teachers, administrators, community leaders and other stakeholders to promote and support student success | 3.l. techniques to foster collaboration and teamwork within schools |
| Collaborates as needed to provide integration of services | I-B-4b. Identifies and applies models of collaboration for effective use in a school counseling program and understands the similarities and differences between consultation, collaboration and counseling and coordination strategies | 1.d. models of school-based collaboration and consultation |
| | I-B-4d. Understands and knows how to apply a consensus-building process to foster agreement in a group | |
| Provides staff development related to positive discipline, behavior and mental health | I-B-4e. Understands how to facilitate group meetings to effectively and efficiently meet group goals | |
| Leads with systems change to provide safe schools | I-B-5. Acts as a systems change agent to create an environment promoting and supporting student success | 2.a. school counselor roles as leaders, advocates and systems change agents in PK–12 schools |
| Intervention planning for SEL and academic skill improvement; provides risk and threat assessments | I-B-5b. Develops a plan to deal with personal (emotional and cognitive) and institutional resistance impeding the change process | 2.g. characteristics, risk factors, and warning signs of students at risk for mental health and behavioral disorders; 2.h. common medications that affect learning, behavior and mood in children and adolescents; 2.i. signs and symptoms of substance abuse in children and adolescents as well as the signs and symptoms of living in a home where substance use occurs; 3.h. skills to critically examine the connections between social, familial, emotional and behavior problems and academic achievement |
| | **II. Foundations – B: Abilities and Skills** | |
| | II-B-4. Applies the ethical standards and principles of the school counseling profession and adheres to the legal aspects of the role of the school counselor | 2.n. legal and ethical considerations specific to school counseling |
| | II-B-4c. Understands and practices in accordance with school district policy and local, state and federal statutory requirements | 2.m. legislation and government policy relevant to school counseling |
| | **III. Management – B: Abilities and Skills** | |
| Effective collection, evaluation, interpretation and use of data to improve availability of services | III-B-3. Accesses or collects relevant data, including process, perception and outcome data, to monitor and improve student behavior and achievement | 1.e. assessments specific to PK–12 education |
| Assists with schoolwide data management for documentation and decision making | III-B-3a. Reviews and disaggregates student achievement, attendance and behavior data to identify and implement interventions as needed | |
| Collects needs assessment data to better inform culturally relevant practices | III-B-3b. Uses data to identify policies, practices and procedures leading to successes, systemic barriers and areas of weakness | |
| | III-B-3c. Uses student data to demonstrate a need for systemic change in areas such as course enrollment patterns; equity and access; and achievement, opportunity and/or information gaps | 3.k. strategies to promote equity in student achievement and college access |
| | III-B-3d. Understands and uses data to establish goals and activities to close the achievement, opportunity and/or information gap | |
| | III-B-3e. Knows how to use data to identify gaps between and among different groups of students | |
| Measures student progress of schoolwide interventions with pre/post testing | III-B-3f. Uses school data to identify and assist individual students who do not perform at grade level and do not have opportunities and resources to be successful in school | |
| Promotes early intervention; designs and implements interventions to meet the behavioral and mental health needs of students | III-B-6a. Uses appropriate academic and behavioral data to develop school counseling core curriculum, small-group and closing-the-gap action plans and determines appropriate students for the target group or interventions | 3.c. core curriculum design, lesson plan development, classroom management strategies and differentiated instructional strategies |
| | III-B-6c. Creates lesson plans related to the school counseling core curriculum identifying what will be delivered, to whom it will be delivered, how it will be delivered and how student attainment of competencies will be evaluated | |
| Provides academic interventions directly to students | III-B-6d. Determines the intended impact on academics, attendance and behavior | 3.d. interventions to promote academic development |
| | III-B-6g. Identifies data collection strategies to gather process, perception and outcome data | |
| Coordinates efforts and ensures proper communication between MTSS staff, students and family members | III-B-6h. Shares results of action plans with staff, parents and community | |
| | III-B-7b. Coordinates activities that establish, maintain and enhance the school counseling program as well as other educational programs | |
| | **IV. Delivery – B: Abilities and Skills** | |
| Provides specialized instructional support | IV-B-1d. Develops materials and instructional strategies to meet student needs and school goals | 3.c. core curriculum design, lesson plan development, classroom management strategies and differentiated instructional strategies |
| | IV-B-1g. Understands multicultural and pluralistic trends when developing and choosing school counseling core curriculum | |
| | IV-B-1h. Understands and is able to build effective, high-quality peer helper programs | 3.m. strategies for implementing and coordinating peer intervention programs |
| Engages in case management to assist with social-emotional and academic concerns | IV-B-2b. Develops strategies to implement individual student planning, such as strategies for appraisal, advisement, goal-setting, decision making, social skills, transition or post-secondary planning | 3.g. strategies to facilitate school and postsecondary transitions |
| Understands social skills development | IV-B-2g. Understands methods for helping students monitor and direct their own learning and personal/social and career development | 3.f. techniques of personal/social counseling in school settings |
| Provides interventions at three levels | IV-B-3. Provides responsive services | |
| | IV-B-3c. Demonstrates an ability to provide counseling for students during times of transition, separation, heightened stress and critical change | |
| Coordinates with community service providers and integrates intensive interventions into the schooling process | IV-B-4a. Understands how to make referrals to appropriate professionals when necessary | 2.k. community resources and referral sources |
| Trains/presents information to school staff on data collection and analysis | IV-B-5a. Shares strategies that support student achievement with parents, teachers, other educators and community organizations | 2.b. school counselor roles in consultation with families, PK–12 and postsecondary school personnel, and community agencies |
| Implements appropriate interventions at each tier | IV-B-5b. Applies appropriate counseling approaches to promoting change among consultees within a consultation approach | |
| | **V. Accountability – B: Abilities and Skills** | |
| Collects, analyzes, and interprets school-level data to improve availability and effectiveness of services and interventions; uses progress monitoring data to inform counseling interventions | V-B-1g. Analyzes and interprets process, perception and outcome data | 3.n. use of accountability data to inform decision making; 3.o. use of data to advocate for programs and students |
| Understands history, rationale, and benefits of MTSS | | |
Note. *Primary sources: ASCA (2012b, 2014); CACREP (2016); Cowan, Vaillancourt, Rossen, & Pollitt (2013); Ockerman et al. (2015).
The MTSS School Counselor Roles and Functions column was generated from several sources, including a recent study examining school counselors’ RTI perspectives (Ockerman et al., 2015), ASCA’s (2014) RTI position statement, and a lengthy school psychology publication that specifically addresses school counselor roles in creating safe MTSS schools (Cowan, Vaillancourt, Rossen, & Pollitt, 2013). Essentially, the crosswalk reveals that K–12 school counselor MTSS roles and functions correspond substantially with the ASCA (2012b) School Counselor Competencies and CACREP (2016) Standards. Similarly, MTSS school counselor tasks fit well within the broad and longstanding role categories traditionally associated with counseling services: (a) coordination of CSCP services, interventions and activities; (b) collaboration with school staff and other stakeholders; (c) provision of responsive services (e.g., individual and group counseling, classroom interventions, peer helper and support services, crisis intervention); (d) consultation with school constituencies and external resource personnel; and (e) classroom lessons (i.e., MTSS Tier 1 services; Burnham & Jackson, 2000; Goodman-Scott et al., 2016; Gysbers & Henderson, 2012; Schmidt, 2014; Sink, 2005). Since the ASCA (2012a) National Model is also a systemic and structural model aimed at whole-school prevention and intervention of student issues, school counselor MTSS roles (direct and indirect services) align reasonably well with the model’s components (e.g., foundation, management, delivery and accountability; Goodman-Scott et al., 2016). In short, incorporating MTSS into the pre-service training of school counselors is professionally defensible as well as best practice.
Implications for School Counselor Preparation
PBIS and RTI frameworks are now firmly established in a majority of U.S. schools. As documented above, research, particularly within the context of special education, largely demonstrates their positive impact on student academic achievement and SEL skill development, as well as on school climate (Horner et al., 2010, 2015; McDaniel, Albritton, & Roach, 2013). However, school counselors in the field report a lack of MTSS knowledge, and their roles and functions, at least within RTI schools, are inconsistently and ambiguously defined (Ockerman et al., 2015). In some circumstances, school counselors’ MTSS duties may not fully complement their CSCP responsibilities (Goodman-Scott et al., 2016). Given these realities, many school counselor preparation programs need to be revised to address these limitations. To accomplish this end, the following literature-based action steps are offered. First, counselor educators should conduct a program audit, looking for MTSS curricular and instructional gaps in their school counseling preparation courses. Curriculum mapping (Jacobs, 1997) is a useful tool for recognizing program content deficiencies (Howard, 2007). Essentially, the process involves
the identification of the content and skills taught in each course at each level. A calendar-based chart, or “map,” is created for each course so that it is easy to see not only what is taught in a course, but when it is taught. Examination of these maps can reveal both gaps in what is taught and repetition among courses, but its value lies in identifying areas for integration and concepts for spiraling. (Howard, 2007, p. 7)
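The mapping-and-gap-check process just described can be sketched in a few lines of code. This is a toy illustration only: the course names, week numbers and MTSS topic labels are hypothetical and are not drawn from the article.

```python
# Illustrative sketch of a curriculum "map" (Jacobs, 1997): each course lists
# what it teaches and when. All names below are hypothetical examples.
curriculum_map = {
    "Comprehensive School Counseling": {3: "ASCA National Model", 9: "data-based decision making"},
    "At-Risk Children and Adolescents": {2: "early warning signs", 11: "Tier 2 interventions"},
}

# Core MTSS topics the program intends to cover (again, a hypothetical list).
required_topics = {
    "ASCA National Model",
    "data-based decision making",
    "Tier 2 interventions",
    "progress monitoring",
    "functional behavior analysis",
}

def find_gaps(cmap, required):
    """Return the required topics that no course currently teaches."""
    covered = {topic for weeks in cmap.values() for topic in weeks.values()}
    return sorted(required - covered)

print(find_gaps(curriculum_map, required_topics))
# -> ['functional behavior analysis', 'progress monitoring']
```

Because each course also records *when* a topic appears, the same structure can expose the repetition and spiraling opportunities Howard (2007) mentions, not just outright gaps.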
Second, the various options for program revision should be weighed. The two most obvious alternatives are to either add a separate school counseling-based MTSS course or to augment existing courses and their content. Classes already focusing on topics associated with MTSS theory, research and practice (e.g., special education, at-risk children and adolescents, comprehensive school counseling, strengths-based counseling and advocacy) are perhaps the easiest to modify. Certainly, accreditation standards and requirements, funding implications, and logistical concerns must be considered.
Third, specific MTSS content and related skills should be reviewed and syllabi revised accordingly. To inform decision making and planning, Table 2 provides sample core MTSS content areas associated with school counselor roles and functions. Curriculum changes might involve strengthening these four broad areas: (a) assessment, data usage and research, (b) general knowledge and practices, (c) specific interventions, and (d) systems work. To alleviate potential redundancies in pre-service education, it is imperative that any proposed modifications be aligned with current CSCP training (e.g., ASCA’s [2012a] National Model; see Goodman-Scott et al., 2016 for details). Consult the crosswalk provided in Table 1 to ensure that any course changes are consonant with ASCA’s (2012b) School Counselor Competencies and CACREP (2016) standards.
Table 2
Core MTSS Content Areas Aligned With School Counselor Roles and Functions
**Assessment, Data Usage and Research**
- Academic and SEL skill assessment and progress monitoring
- Applied experimental analysis of behavior/functional behavior analysis (FBA)
- Behavioral consultation assessment
- Evidence-based (data-based) decision making and intervention planning (academic and social-behavioral issues)
- Research methods (e.g., survey, pre/posttest comparison, single subject designs)
- Student and classroom assessment/testing
- Use of student assessment and schoolwide data to improve MTSS services and interventions

**General Knowledge and Practices**
- Best practices in support of academic and social-behavioral development
- Integration with comprehensive school counseling programs (e.g., ASCA National Model)
- Ethical and legal issues
- Educational, developmental and psychological theories (e.g., behaviorism, social learning theory, ecological systems theory, cognitive, psychosocial, identity)
- Effective communication
- Students at risk and resiliency issues (i.e., knowledge of early warning signs of school and social-behavioral problems)
- Leadership and advocacy
- Mental health issues and associated community services
- Models of consultation
- Multicultural/diversity (student, family, school, community) and social justice issues
- Referral
- Special education (e.g., relevant policies, identification procedures, categories of disability)

**Specific Interventions**
- Check and Connect (Check In, Check Out)
- Individualized positive behavior support (e.g., behavior change plans, individualized education plans)
- Peer mentoring/tutoring
- Schoolwide classroom guidance (academic and SEL skill related)
- Short-term goal-oriented individual and group counseling

**Systems Work**
- Collaboration and coordination of services with counseling staff, MTSS constituents, external resources and families
- Consultation with caregivers, educational staff and external resources
- Staff coaching/liaison work (e.g., conducting workshops and training events to improve conceptual knowledge and understanding as well as skill development)
- MTSS (PBIS & RTI) structure and components and associated practices
- Resource providers (in-school and out-of-school options)
- Policy development addressing improved school environments and barriers to learning for all students
- Systems/interdisciplinary collaboration and leadership within the context of comprehensive school counseling programs
Note. Primary sources: Cowan et al. (2013); Forman & Crystal (2015); R. Freeman et al. (2015); Gibbons & Coulter (2016); Goodman-Scott et al. (2016); Horner et al. (2015); Ockerman et al. (2015); Reschly & Coolong-Chaffin (2016).
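One of the research methods listed under Assessment, Data Usage and Research, pre/posttest comparison, can be illustrated with a minimal sketch. The screener scores, scale and group below are invented for the example; real progress-monitoring decisions would of course rest on validated instruments and larger samples.

```python
# Illustrative pre/post comparison for a hypothetical Tier 2 group,
# using only the standard library. Scores are on an invented 0-20 scale.
from statistics import mean, stdev

pre  = [8, 10, 7, 12, 9, 11]   # screener scores before the intervention
post = [12, 13, 10, 15, 11, 14]  # scores after the intervention

gains = [b - a for a, b in zip(pre, post)]
mean_gain = mean(gains)             # average improvement per student
effect = mean_gain / stdev(gains)   # rough standardized gain (Cohen's d_z)

print(f"mean gain = {mean_gain:.2f}, standardized gain = {effect:.2f}")
```

A counselor educator could use an exercise like this one to connect the "use of accountability data to inform decision making" standard (CACREP 3.n.) to concrete numbers students will actually collect during practicum.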
Finally, course syllabi need to be updated to integrate desired curricular changes, and appropriate instructional techniques instituted. It is recommended that counselor educators design the MTSS course using a spiral curriculum (Bruner, 1960; Howard, 2007). This theory- and research-based strategy arranges course content so that knowledge and skills build upon one another while gradually increasing in complexity and depth. Research-informed pedagogy suggests that MTSS course content be taught using a variety of methods, including direct instruction for learning foundational materials and student-centered approaches, such as case studies and problem-based learning (PBL), for the application component (Dumbrigue, Moxley, & Najor-Durack, 2013; Ramsden, 2003; Savery, 2006). Specifically, given that scientific (systematic problem-solving) and data-driven decision making are indispensable educator practices within MTSS frameworks, these skills should be nurtured through “hands on” and highly engaging didactic methods rather than relying on conventional college-level teaching strategies (e.g., recitation, questioning and lecture; Stanford University Center for Teaching and Learning, 2001). Specific activities could be readily implemented during practicum and internship. PBL invites students to tackle complex and authentic (real world) issues that promote understanding of content knowledge as well as interpretation, analytical reasoning, interpersonal communication and self-assessment skills (Amador, Miles, & Peters, 2006; Loyens, Jones, Mikkers, & van Gog, 2015). Problems can take the form of genuine case studies (e.g., a sixth-grader at risk for severe depression), encouraging pre-service counselors to reflect on issues they will face in MTSS schools.
Succinctly stated, when developing a new course or refining existing courses to include MTSS elements, counselor educators are encouraged to use research-based methods of curriculum design and student-centered pedagogy.
Conclusion
School counselor roles and functions must be responsive to societal changes and educational reforms. These shifts require university-level counselor preparation programs to be adaptable and open to new practices. K–12 schools around the nation are committed to instituting MTSS (PBIS and RTI) to better educate all students as well as to reduce the number of learners at risk for academic and social and emotional problems. School counselors largely indicate that they require further training on these MTSS frameworks and best practices (Goodman-Scott et al., 2016; Ockerman et al., 2015). It is therefore incumbent upon counselor education programs to revise their curriculum and instruction to meet this growing need. This article provides a clear rationale for instituting pre-service program changes and summarizes MTSS’s theoretical and research foundation. Literature-based recommendations for pre-service course and curricular modifications have been offered. Preparation programs are encouraged to align their MTSS curriculum and content with ASCA’s (2012b) and CACREP’s (2016) school counseling standards, as well as with the role requirements of comprehensive school counseling programs. Subsequent research is needed to determine whether this added level of pre-service education support actually impacts school counselor MTSS competency perceptions and, more importantly, whether schoolchildren and youth are positively impacted by better trained professional school counselors.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Amador, J. A., Miles, L., & Peters, C. B. (2006). The practice of problem-based learning: A guide to implementing PBL in the college classroom. Boston, MA: Anker Publishing.
American School Counselor Association. (2012a). The ASCA national model: A framework for school counseling programs (3rd ed.). Alexandria, VA: Author.
American School Counselor Association. (2012b). ASCA school counselor competencies. Retrieved from
https://www.schoolcounselor.org/asca/media/asca/home/SCCompetencies.pdf
American School Counselor Association. (2014). The school counselor and multitiered system of supports. American School Counselor Association Position Statement. Retrieved from http://schoolcounselor.org/asca/
media/asca/PositionStatements/PS_MultitieredSupportSystem.pdf
Bradley, R., Danielson, L., & Doolittle, J. (2007). Responsiveness to intervention: 1997 to 2007. Teaching Exceptional Children, 39(5), 8–12. doi:10.1177/004005990703900502
Bruner, J. (1960). The process of education. Cambridge, MA: Harvard University Press.
Burnham, J. J., & Jackson, C. M. (2000). School counselor roles: Discrepancies between actual practice and existing models. Professional School Counseling, 4, 41–49.
Carter, D. R., & Van Norman, R. K. (2010). Class-wide positive behavior support in preschool: Improving teacher implementation through consultation. Early Childhood Education Journal, 38, 279–288.
Childs, K. E., Kincaid, D., George, H. P., & Gage, N. A. (2015). The relationship between school-wide implementation of positive behavior intervention and supports and student discipline outcomes. Journal of Positive Behavior Interventions, 18(2), 89–99. doi:10.1177/1098300715590398
Council for Accreditation of Counseling and Related Educational Programs. (2016). 2016 CACREP standards. Retrieved from http://www.cacrep.org/for-programs/2016-cacrep-standards
Cowan, K. C., Vaillancourt, K., Rossen, E., & Pollitt, K. (2013). A framework for safe and successful schools [Brief]. Bethesda, MD: National Association of School Psychologists. Retrieved from https://www.nasponline.org/Documents/Research%20and%20Policy/Advocacy%20Resources/Framework_for_Safe_and_Successful_School_Environments.pdf
Donohue, M. D. (2014). Implementing positive behavioral interventions and supports: School counselors’ perceptions of student outcomes, school climate and professional effectiveness. Retrieved from http://works.bepress.com/margaret_donohue/1
Dumbrigue, C., Moxley, D., & Najor-Durack, A. (2013). Keeping students in higher education: Successful practices and strategies for retention. New York, NY: Routledge.
Eber, L., Weist, M., & Barrett, S. (n.d.). An introduction to the interconnected systems framework. In S. Barrett, L. Eber, & M. Weist (Eds.), Advancing education effectiveness: Interconnecting school mental health and school-wide positive behavior support (pp. 3–17). [Monograph]. Retrieved from https://www.pbis.org/common/cms/files/Current%20Topics/Final-Monograph.pdf
Forman, S. G., & Crystal, C. D. (2015). Systems consultation for multitiered systems of supports (MTSS): Implementation issues. Journal of Educational and Psychological Consultation, 25, 276–285. doi:10.1080/10474412.2014.963226
Freeman, J., Simonsen, B., McCoach, D. B., Sugai, G. M., Lombardi, A., & Horner, R. (2015). An analysis of the relationship between implementation of school-wide positive behavior interventions and support and high school dropout rates. High School Journal, 98, 290–315.
Freeman, J., Simonsen, B., McCoach, D. B., Sugai, G. M., Lombardi, A., & Horner, R. (2016). Relationship between school-wide positive behavior interventions and supports and academic, attendance, and behavior outcomes in high schools. Journal of Positive Behavior Interventions, 18, 41–51.
Freeman, R., Miller, D., & Newcomer, L. (2015). Integration of academic and behavioral MTSS at the district level using implementation science. Learning Disabilities: A Contemporary Journal, 13, 59–72.
Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41, 93–99. doi:10.1598/RRQ.41.1.4
Gibbons, K., & Coulter, W. (2016). Making response to intervention stick: Sustaining implementation past your retirement. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of multi-tiered systems of support (2nd ed.; pp. 641–660). New York, NY: Springer.
Goodman-Scott, E. (2013). Maximizing school counselors’ efforts by implementing school-wide positive behavioral interventions and supports: A case study from the field. Professional School Counseling, 17, 111–119.
Goodman-Scott, E. (2015). School counselors’ perceptions of their academic preparedness and job activities. Counselor Education and Supervision, 54, 57–67.
Goodman-Scott, E., Betters-Bubon, J., & Donohue, P. (2016). Aligning comprehensive school counseling programs and positive behavioral interventions and supports to maximize school counselors’ efforts. Professional School Counseling, 19, 57–67.
Greenwood, C. R., Bradfield, T., Kaminski, R. A., Linas, M., Carta, J. J., & Nylander, D. (2011). The response to intervention (RTI) approach in early childhood. Focus on Exceptional Children, 43(9), 1–22.
Gysbers, N. C., & Henderson, P. (2012). Developing & managing your school guidance & counseling programs (5th ed.). Alexandria, VA: American Counseling Association.
Horner, R. H., Sugai, G. M., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1–14.
Horner, R. H., Sugai, G. M., & Lewis, T. (2015). Is school-wide positive behavior support an evidence-based practice? Retrieved from http://www.pbis.org/research
Howard, J. (2007). Curriculum development. Retrieved from http://www.pdx.edu/sites/www.pdx.edu.cae/files/media_assets/Howard.pdf
Jacobs, H. H. (1997). Mapping the big picture: Integrating curriculum and assessment K-12. Alexandria, VA: Association for Supervision and Curriculum Development.
Johnsen, S. K., Parker, S. L., & Farah, Y. N. (2015). Providing services for students with gifts and talents within a Response-to-Intervention framework. Teaching Exceptional Children, 47, 226–233.
Jolivette, K., & Nelson, C. M. (2010). Adapting positive behavioral interventions and supports for secure juvenile justice settings: Improving facility-wide behavior. Behavioral Disorders, 36, 28–42.
Kansas MTSS. (2011). Kansas multi-tier system of supports student improvement teams and the multi-tier system of supports. Retrieved from http://www.kansasmtss.org/pdf/briefs/SIT_and_MTSS.pdf
Klingner, J. K., & Edwards, P. A. (2006). Cultural considerations with response to intervention models. Reading Research Quarterly, 41, 108–117.
Kozleski, E. B., & Huber, J. J. (2010). Systemic change for RTI: Key shifts for practice. Theory Into Practice, 49, 258–264. doi:10.1080/00405841.2010.510696
Lemberger, M. E., Selig, J. P., Bowers, H., & Rogers, J. E. (2015). Effects of the Student Success Skills Program on executive functioning skills, feelings of connectedness, and academic achievement in a predominantly Hispanic, low-income middle school district. Journal of Counseling & Development, 93, 25–37. doi:10.1002/j.1556-6676.2015.00178.x
Loyens, S. M. M., Jones, S. H., Mikkers, J., & van Gog, T. (2015). Problem-based learning as a facilitator of conceptual change. Learning and Instruction, 38, 34–42.
Martens, K., & Andreen, K. (2013). School counselors’ involvement with a school-wide positive behavior support system: Addressing student behavior issues in a proactive and positive manner. Professional School Counseling, 16, 313–322. doi:10.5330/PSC.n.2013-16.313
Martin, I., & Carey, J. C. (2014). Key findings and international implications of policy research on school counseling models in the United States. Journal of Asia Pacific Counseling, 4, 87–102.
Maynard, B. R., Kjellstrand, E. K., & Thompson, A. M. (2013). Effects of Check and Connect on attendance, behavior, and academics: A randomized effectiveness trial. Research on Social Work Practice, 24, 296–309. doi:10.1177/1049731513497804
McDaniel, S., Albritton, K., & Roach, A. (2013). Highlighting the need for further response to intervention research in general education. Research in Higher Education Journal, 20, 1–14. Retrieved from http://jupapadoc.startlogic.com/manuscripts/131467.pdf
McIntosh, K., Filter, K. J., Bennett, J. L., Ryan, C., & Sugai, G. (2010). Principles of sustainable prevention: Designing scale-up of school-wide positive behavior support to promote durable systems. Psychology in the Schools, 47, 5–21. doi:10.1002/pits.20448
Molloy, L. E., Moore, J. E., Trail, J., Van Epps, J. J., & Hopfer, S. (2013). Understanding real-world implementation quality and “active ingredients” of PBIS. Prevention Science, 14, 593–605.
National Board for Professional Teaching Standards. (2012). School counseling standards for school counselors of students ages 3–18+. Retrieved from http://boardcertifiedteachers.org/sites/default/files/ECYA-SC.pdf
Ockerman, M. S., Mason, E. C. M., & Hollenbeck, A. F. (2012). Integrating RTI with school counseling programs: Being a proactive professional school counselor. Journal of School Counseling, 10(15), 1–37. Retrieved from http://jsc.montana.edu/articles/v10n15.pdf
Ockerman, M. S., Patrikakou, E., & Feiker Hollenbeck, A. (2015). Preparation of school counselors and response to intervention: A profession at the crossroads. The Journal of Counselor Preparation and Supervision, 7, 161–184. doi:10.7729/73.1106
Owen, J. (2012). The educational efficiency of employing a three-tier model of academic supports: Providing early, effective assistance to students who struggle. The International Journal of Knowledge, Culture and Change Management, 11(6), 95–106.
PBIS.org. (2016). Tier 1 case examples. Retrieved from https://www.pbis.org/school/primary-level/case-examples
Preston, A. I., Wood, C. L., & Stecker, P. M. (2016). Response to intervention: Where it came from and where it’s going. Preventing School Failure: Alternative Education for Children and Youth, 60, 173–182.
Proctor, S. L., Graves, S. L., Jr., & Esch, R. C. (2012). Assessing African American students for specific learning disabilities: The promises and perils of Response to Intervention. Journal of Negro Education, 81, 268–282.
Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). New York, NY: Routledge.
Reschly, A. L., & Coolong-Chaffin, M. (2016). Contextual influences and response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of multi-tiered systems of support (2nd ed.; pp. 441–453). New York, NY: Springer.
Ross, G. (n.d.). The community is McNabb Elementary. Retrieved from http://www.pbis.org/common/cms/files/pbisresources/201_08_03_McNabbPBIS.pdf
Ryan, T., Kaffenberger, C. J., & Carroll, A. G. (2011). Response to intervention: An opportunity for school counselor leadership. Professional School Counseling, 14, 211–221.
Saeki, E., Jimerson, S. R., Earhart, J., Hart, S. R., Renshaw, T., Singh, R. D., & Stewart, K. (2011). Response to intervention (RTI) in the social, emotional, and behavioral domains: Current challenges and emerging possibilities. Contemporary School Psychology, 15, 43–52.
Sandomierski, T., Kincaid, D., & Algozzine, B. (2007). Response to Intervention and Positive Behavior Support: Brothers from different mothers or sisters with different misters? Retrieved from http://www.pbis.org/common/cms/files/Newsletter/Volume4%20Issue2.pdf
Santos de Barona, M., & Barona, A. (2006). School counselors and school psychologists: Collaborating to ensure minority students receive appropriate consideration for special educational programs. Professional School Counseling, 10, 3–13.
Savery, J. R. (2006). Overview of problem-based learning: Definitions and distinctions. Interdisciplinary Journal of Problem-Based Learning, 1. doi:10.7771/1541-5015.1002
Schmidt, J. J. (2014). Counseling in schools: Comprehensive programs of responsive services for all students (6th ed.). Boston, MA: Pearson Higher Education.
Schulte, A. C. (2016). Prevention and response to intervention: Past, present, and future. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of multi-tiered systems of support (2nd ed.; pp. 59–71). New York, NY: Springer.
Shepard, J. M., Shahidullah, J. D., & Carlson, J. S. (2013). Counseling students in levels 2 and 3: A PBIS/RTI guide. Thousand Oaks, CA: Corwin/Sage.
Sink, C. A. (2005). Contemporary school counseling: Theory, research, and practice. Boston, MA: Houghton-Mifflin/Cengage.
Sink, C. A., Cooney, M., & Adkins, C. (in press). Conducting large-scale evaluation studies to identify characteristics of effective comprehensive school counseling programs. In J. C. Carey, B. Harris, S. M. Lee, & J. Mushaandja (Eds.), International handbook for policy research on school-based counseling. New York, NY: Springer.
Sprague, J. R., Scheuermann, B., Wang, E. W., Nelson, C. M., Jolivette, K., & Vincent, C. (2013). Adopting and adapting PBIS for secure juvenile justice settings: Lessons learned. Education and Treatment of Children, 36, 121–134.
Stanford University Center for Teaching and Learning. (2001). Problem-based learning. Speaking of Teaching, 11, 1–7.
Stoiber, K. C., & Gettinger, M. (2016). Multi-tiered systems of support and evidence-based practices. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of multi-tiered systems of support (2nd ed.; pp. 121–141). New York, NY: Springer.
Sugai, G., Horner, R. H., Dunlap, G., Hieneman, M., Lewis, T. J., Nelson, C. M., . . . Turnbull, R. H. (2000). Applying positive behavior support and functional behavioral assessments in schools. Journal of Positive Behavior Interventions, 2(3), 131–143. Retrieved from http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1031&context=gse_fac
Sugai, G., & Simonsen, B. (2012). Positive behavioral interventions and supports: History, defining features, and misconceptions. Center for PBIS & Center for Positive Behavioral Interventions and Supports, 1–8. Retrieved from http://idahotc.com/Portals/6/Docs/2015/Tier_1/articles/PBIS_history.features.misconceptions.pdf
Turnbull, A., Bohanon, H., Griggs, P., Wickham, D., Sailor, W., Freeman, R., . . . Warren, J. (2002). A blueprint for schoolwide positive behavior support: Implementation of three components. Exceptional Children, 68, 377–402. Retrieved from http://ecommons.luc.edu/cgi/viewcontent.cgi?article=1023&context=education_facpubs
Turse, K. A., & Albrecht, S. F. (2015). The ABCs of RTI: An introduction to the building blocks of Response to Intervention. Preventing School Failure: Alternative Education for Children and Youth, 59(2), 83–89.
Warren, J. M., & Robinson, G. (2015). Addressing barriers to effective RTI through school counselor consultation: A social justice approach. Electronic Journal for Inclusive Education, 3(4), 1–27. Retrieved from http://libres.uncg.edu/ir/uncp/f/Addressing%20Barriers%20to%20Effective%20RTI%20through%20School%20Counselor%20Consultation.pdf
Wilkerson, K. A., Pérusse, R., & Hughes, A. (2013). Comprehensive school counseling programs and student achievement outcomes: A comparative analysis of RAMP versus non-RAMP schools. Professional School Counseling, 16, 172–184.
Zambrano, E., Castro-Villarreal, F., & Sullivan, J. (2012). School counselors and school psychologists: Partners in collaboration for student success within RTI and CDCGP frameworks. Journal of School Counseling, 10(24). Retrieved from http://jsc.montana.edu/articles/v10n24.pdf
Christopher A. Sink, NCC, is a Professor at Old Dominion University. Correspondence can be addressed to Christopher Sink, Darden College of Education, 5115 Hampton Blvd, Norfolk, VA 23529, csink@odu.edu.
Mar 24, 2016 | Article, Volume 6 - Issue 1
Chad M. Yates, Courtney M. Holmes, Jane C. Coe Smith, Tiffany Nielson
Implementing continuous feedback loops between clients and counselors has been found to have a significant impact on the effectiveness of counseling (Shimokawa, Lambert, & Smart, 2010). Feedback-informed treatment (FIT) systems are beneficial to counselors and clients because they provide clinicians with a wide array of client information, such as which clients are plateauing in treatment, deteriorating or at risk for dropping out (Lambert, 2010; Lambert, Hansen, & Finch, 2001). Access to this type of information is imperative because counselors have been shown to have poor predictive validity in determining whether clients are deteriorating during the counseling process (Hannan et al., 2005). Furthermore, recent efforts by researchers show that FIT systems based inside university counseling centers have beneficial training features that positively impact the professional development of counseling students (Reese, Norsworthy, & Rowlands, 2009; Yates, 2012). To date, however, few resources exist on how to infuse FIT systems into counselor education curricula and training programs.
This article addresses the current lack of information regarding the implementation of a FIT system within counselor education curricula by discussing: (1) an overview and implementation of a FIT system; (2) a comprehensive review of the psychometric properties of three main FIT systems; (3) benefits that the use of FIT systems hold for counselors-in-training; and (4) how the infusion of FIT systems within a counseling curriculum can help assess student learning outcomes.
Overview and Implementation of a FIT System
FIT systems are continual assessment procedures that include weekly feedback about a client’s current symptomology and perceptions of the therapeutic process in relation to previous counseling session scores. These systems also can include other information such as self-reported suicidal ideation, reported substance use, or other specific responses (e.g., current rating of depressive symptomology). FIT systems compare clients’ current session scores to previous session scores and provide a recovery trajectory, often graphed, that can help counselors track the progress made through the course of treatment (Lambert, 2010). Some examples of a FIT system include the Outcome Questionnaire (OQ-45.2; Lambert et al., 1996), Session Rating Scale (SRS; Miller, Duncan, & Johnson, 2000), Outcome Rating Scale (ORS; Miller & Duncan, 2000), and the Counseling Center Assessment of Psychological Symptoms (CCAPS; Locke et al., 2011), all of which are described in this article.
Variety exists regarding how FIT systems are used within the counseling field. These variations include the selected measure or test, frequency of measurement, type of feedback given to counselors and whether or not feedback is shared with clients on a routine basis. Although some deviations exist, all feedback systems contain consistent procedures that are commonly employed during practice (Lambert, Hansen, & Harmon, 2010). The first procedure in a FIT system is the routine measurement of a client's symptomology or distress during each session. This once-per-session frequency is important because it allows counselors to receive direct, continuous feedback on how the client is progressing or regressing throughout treatment. Research has demonstrated that counselors who receive regular client feedback have clients who stay in treatment longer (Shimokawa et al., 2010); thus, the feedback loop provided by a FIT system is crucial in supporting clients through the therapeutic process.
The second procedure of a FIT system is presenting the results of the client's symptomology or distress level in a concise and usable way. Counselors who treat several clients benefit from accessible and comprehensive feedback forms. This ease of access is important because counselors may be more likely to buy in to the use of feedback systems if they can use them in a time-effective manner.
The last procedure of a FIT system is the adjustment of counseling approaches based upon the results of the feedback. Although research in this area is limited, some studies have observed that feedback systems do alter the progression of treatment. Lambert (2010) suggested that receiving feedback on what is working is apt to positively influence a counselor to continue these behaviors. Yates (2012) found that continuous feedback sets benchmarks of performance for both the client and the counselor, which slowly alters treatment approaches. If the goal of counseling is to decrease symptomology or increase functioning, frequently observing objective progress toward these goals using a FIT system can help increase the potential for clients to achieve these goals through targeted intervention.
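The three procedures above (routine per-session measurement, concise display of results, and treatment adjustment) can be expressed as a simple tracking loop. The following is a minimal sketch, not any published FIT product; the class name and the deterioration threshold are invented for illustration.

```python
# Illustrative sketch of a FIT-style feedback loop. The class name and the
# deterioration threshold are hypothetical, not part of any actual FIT system.

class SessionTracker:
    """Tracks one client's per-session totals; higher score = more distress."""

    def __init__(self, deterioration_threshold=5):
        self.scores = []                        # one total score per session
        self.threshold = deterioration_threshold

    def record_session(self, total_score):
        """Procedure 1: routine measurement at every session."""
        self.scores.append(total_score)

    def trajectory(self):
        """Procedure 2: concise display, here as session-to-session change."""
        return [b - a for a, b in zip(self.scores, self.scores[1:])]

    def needs_adjustment(self):
        """Procedure 3: flag a possible need to adjust the treatment approach
        when the latest score rises by at least the (hypothetical) threshold."""
        if len(self.scores) < 2:
            return False
        return self.scores[-1] - self.scores[-2] >= self.threshold

tracker = SessionTracker()
for score in [70, 66, 74]:          # distress drops, then spikes
    tracker.record_session(score)
print(tracker.trajectory())         # [-4, 8]
print(tracker.needs_adjustment())   # True
```

A real system would plot this trajectory against an expected recovery curve, as the graphed trajectories described above do.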
Description of Three FIT Systems
Several well-validated, reliable, repeated feedback instruments exist. These instruments vary by length and scope of assessment, but all are engineered to deliver routine feedback to counselors regarding client progress. Below is a review of three of the most common FIT systems utilized in clinical practice.
The OQ Measures System
The OQ Measures System uses the Outcome Questionnaire 45.2 (OQ-45.2; Lambert et al., 1996), a popular symptomology measure that gauges a client's current distress levels over three domains: symptomatic distress, interpersonal relations and social roles. Hatfield and Ogles (2004) listed the OQ 45.2 as the third most frequently used self-report outcome measure for adults in the United States. The OQ 45.2 has 45 items rated on a 5-point Likert scale. Scores range between 0 and 180; higher scores suggest higher rates of disturbance. The OQ 45.2 takes approximately 5–6 minutes to complete, and the results are analyzed using the OQ Analyst software provided by the test developers. The OQ 45.2 can be delivered in paper-and-pencil form or through computer-assisted administration via laptop, kiosk, or personal digital assistant (PDA). Electronic administration of the OQ 45.2 allows for seamless administration, scoring and feedback to both counselor and client.
Internal consistency for the OQ 45.2 is α = 0.93 and test-retest reliability is r = 0.84. The OQ 45.2 demonstrated convergent validity with the General Severity Index (GSI) of the Symptom Checklist 90-Revised (SCL-90-R; Derogatis, 1983; r = .78, n = 115). The Outcome Questionnaire System has five additional outcome measures: (1) the Outcome Questionnaire 30 (OQ-30); (2) the Severe Outcome Questionnaire (SOQ), which captures outcome data for more severe presenting concerns, such as bipolar disorder and schizophrenia; (3) the Youth Outcome Questionnaire (YOQ), which assesses outcomes in children between 13 and 18 years of age; (4) the Youth Outcome Questionnaire 30, which is a brief version of the full YOQ; and (5) the Outcome Questionnaire 10 (OQ-10), which is used as a brief screening instrument for psychological symptoms (Lambert et al., 2010).
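The 0–180 score range cited above follows directly from the item arithmetic: 45 items, each scored 0–4 on the 5-point scale. A minimal sketch of that sum (the helper name and the response vectors are illustrative, not part of the OQ Analyst software):

```python
# Total-score arithmetic for a 45-item measure such as the OQ-45.2:
# each item is scored 0-4 on a 5-point Likert scale, so totals fall
# in the 0-180 range. The responses below are invented.

def oq_total(item_responses):
    """Sum 45 Likert responses (0-4 each); higher totals = more disturbance."""
    assert len(item_responses) == 45, "the OQ-45.2 has 45 items"
    assert all(0 <= r <= 4 for r in item_responses), "items are scored 0-4"
    return sum(item_responses)

print(oq_total([0] * 45))   # 0,   the minimum possible score
print(oq_total([4] * 45))   # 180, the maximum possible score
print(oq_total([2] * 45))   # 90,  a mid-range illustrative protocol
```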
The Partners for Change Outcome Management System (PCOMS)
The Partners for Change Outcome Management System (PCOMS) uses two instruments: the Outcome Rating Scale (ORS; Miller & Duncan, 2000), which measures the client's session outcome, and the Session Rating Scale (SRS; Miller et al., 2000), which measures the client's perception of the therapeutic alliance. The ORS and SRS were designed to be brief in response to the heavy time demands placed upon counselors. Administration of the ORS includes handing the client a copy of the ORS on a sheet of letter-sized paper; the client then draws a hash mark on four distinct 10-centimeter lines that indicate how he or she felt over the last week on the following scales: individually (personal well-being), interpersonally (family and close relationships), socially (work, school and friendships), and overall (general sense of well-being).
The administration of the SRS includes four similar 10-centimeter lines that evaluate the relationship between the client and counselor. The four lines represent relationship, goals and topics, approach or methods, and overall (the sense that the session went all right for me today; Miller et al., 2000). Scoring of both instruments involves measuring the location of the client's hash mark and assigning a numerical value based on its position along the 10-centimeter line. Scores increase from left to right: the further right the hash mark is placed, the higher the response. A total score is computed by adding the subscales together, and total scores are graphed along a line plot. Miller and Duncan (2000) used the reliable change index (RCI) formula to establish a clinical cut-off score of 25 and a reliable change index of 5 points for the ORS. The SRS has a cut-off score of 36; total scores below 36 indicate possible ruptures in the working alliance.
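The scoring procedure just described (measure each hash mark in centimeters, sum the four subscales, and compare the total against the published cut-offs) can be sketched as follows. The cut-off values come from the sources cited above; the hash-mark positions are invented for illustration.

```python
# Sketch of ORS/SRS scoring: four 10-cm lines measured from the left in cm,
# summed to a 0-40 total, then compared against the published cut-offs
# (ORS clinical cut-off = 25; SRS cut-off = 36). Positions below are invented.

ORS_CUTOFF = 25   # totals below this fall in the clinical range
SRS_CUTOFF = 36   # totals below this suggest a rupture in the alliance

def total_score(marks_cm):
    """Sum four hash-mark positions measured left to right in cm (0-10 each)."""
    assert len(marks_cm) == 4 and all(0 <= m <= 10 for m in marks_cm)
    return sum(marks_cm)

ors = total_score([4.5, 5.0, 6.0, 5.5])   # 21.0, below the clinical cut-off
srs = total_score([9.0, 8.5, 9.5, 9.0])   # 36.0, at the alliance cut-off
print(ors < ORS_CUTOFF)                   # True: clinical-range distress
print(srs < SRS_CUTOFF)                   # False: no rupture indicated
```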
The ORS demonstrated strong internal reliability estimates (α = 0.87–0.96), a test-retest reliability of r = 0.60, and moderate convergent validity with measures like the OQ 45.2 (r = 0.59), which it was created to resemble (Miller & Duncan, 2000; Miller, Duncan, Brown, Sparks, & Claud, 2003). The SRS had an internal reliability estimate of α = 0.88, a test-retest reliability of r = 0.74, and showed convergent validity when correlated with similar measures of the working alliance such as the Helping Alliance Questionnaire–II (HAQ–II; Duncan et al., 2003; Luborsky et al., 1996). The developers of the ORS and SRS have also created Web-based administration features that allow clients to complete both instruments online using a pointer instead of a pencil or pen. The Web-based administration also calculates the instrument totals and graphs them.
The Counseling Center Assessment of Psychological Symptoms (CCAPS)
The CCAPS was designed as a semi-brief continuous measure that assesses symptomology unique to college-aged adults (Locke et al., 2011). When developed, the CCAPS was designed to be effective in assessing college students' concerns across a diverse range of college campuses. The CCAPS has two separate versions, the CCAPS-62 and a shorter version, the CCAPS-34. The CCAPS-62 has 62 test items across eight subscales that measure: depression, generalized anxiety, social anxiety, academic distress, eating concerns, family distress, hostility and substance abuse. The CCAPS-34 has 34 test items across seven of the scales found on the CCAPS-62, excluding family distress. Additionally, the substance use scale on the CCAPS-62 is renamed the Alcohol Use Scale on the CCAPS-34 (Locke et al., 2011). Clients respond on a 5-point Likert scale with responses that range from not at all like me to extremely like me. On both measures clients are instructed to answer each question based upon their functioning over the last 2 weeks. The CCAPS measures include a total score scale titled the Distress Index that measures the amount of general distress experienced over the previous 2 weeks (Center for Collegiate Mental Health, 2012). The measures were designed so that repeated administration would allow counselors to compare each session's scores to previous scores, and to a large norm group (N = 59,606) of clients completing the CCAPS at university counseling centers across the United States (Center for Collegiate Mental Health, 2012).
The CCAPS norming works by comparing clients' scores to a percentile score of other clients who have taken the measure. For instance, a client's score of 80 on the depressive symptoms scale indicates that he or she falls within the 80th percentile of the norm population's depressive symptoms score range. Because the CCAPS measures utilize such a large norm base, the developers have integrated the instruments into the Titanium Schedule™, an Electronic Medical Records (EMR) system. The developers also offer the instruments for use in an Excel scoring format, along with other counseling scheduling software programs. The developers of the CCAPS use RCI formulas to provide upward and downward arrows next to the reported score on each scale. A downward arrow indicates that the client's current score is significantly lower than previous sessions' scores, suggesting progress during counseling; an upward arrow suggests a worsening of symptomology. Cut-off scores vary across scales and can be referenced in the CCAPS 2012 Technical Manual (Center for Collegiate Mental Health, 2012).
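The norming and arrow logic described above can be sketched in a few lines. Everything in this sketch is illustrative: the norm sample is invented, and the single reliable-change cut-off stands in for the scale-specific cut-offs documented in the CCAPS 2012 Technical Manual.

```python
from bisect import bisect_left

# Illustrative sketch of CCAPS-style percentile norming and RCI arrows.
# The norm scores and the reliable-change cut-off are invented; real
# cut-offs vary by scale (see the CCAPS 2012 Technical Manual).

norm_scores = sorted([0.4, 0.7, 0.9, 1.1, 1.3, 1.6, 1.8, 2.1, 2.5, 3.0])

def percentile(score):
    """Percent of the (hypothetical) norm group scoring below this client."""
    return 100 * bisect_left(norm_scores, score) / len(norm_scores)

def change_arrow(previous, current, rci_cutoff=0.7):
    """Downward arrow = significant decrease (progress); upward = worsening."""
    change = current - previous
    if change <= -rci_cutoff:
        return "down"      # significant improvement
    if change >= rci_cutoff:
        return "up"        # significant worsening
    return "none"          # change within measurement error

print(percentile(2.1))          # 70.0: above 70% of the norm group
print(change_arrow(2.1, 1.2))   # down
print(change_arrow(1.2, 1.3))   # none
```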
Test-retest estimates at 2 weeks for the CCAPS-62 and CCAPS-34 scales range from r = 0.75 to 0.91 (Center for Collegiate Mental Health, 2012). The CCAPS-34 also demonstrated good internal consistency, ranging from α = 0.76 to 0.89 (Locke et al., 2012). Both measures demonstrated adequate convergent validity compared to similar measures. A full illustration of the measures' convergent validity can be found in the CCAPS 2012 Technical Manual (Center for Collegiate Mental Health, 2012).
Benefits for Counselors-in-Training
The benefits of FIT systems are multifaceted and can positively impact the growth and development of student counselors (Reese, Norsworthy, et al., 2009; Schmidt, 2014; Yates, 2012). Within counselor training laboratories, feedback systems have shown promise in facilitating the growth and development of beginning counselors (Reese, Usher, et al., 2009), and the incorporation of FIT systems into supervision and training experiences has been widely supported (Schmidt, 2014; Worthen & Lambert, 2007; Yates, 2012).
One such benefit is that counseling students' self-efficacy improved when they saw evidence of their clients' improvement (Reese, Usher, et al., 2009). A FIT system allows for the documentation of a client's progress, and when counseling students observed their clients making such progress, their self-efficacy regarding their skill and ability as counselors improved. Additionally, the FIT system allowed the counselor trainees to observe their effectiveness during session, and more importantly, helped them alter their interventions when clients deteriorated or plateaued during treatment. Counselor education practicum students who implemented a FIT system throughout client treatment reported that having weekly observations of their clients' progress helped them to isolate effective and non-effective techniques they had used during session (Yates, 2012). Additionally, practicum counseling students have indicated that several components of FIT feedback forms were useful, including the visual orientation (e.g., graphs) to clients' shifts in symptomology. This visual attention to client change allowed counselors-in-training to be more alert to how clients were actually faring in between sessions and how they could tailor their approach, particularly regarding crisis situations (Yates, 2012).
Another benefit discovered from the above study was that counseling students felt as if consistent use of a FIT system lowered their anxiety and relieved some uncertainty regarding their work with clients (Yates, 2012). It is developmentally appropriate for beginning counselors to struggle with low tolerance for ambiguity and the need for a highly structured learning environment when they begin their experiential practicums and internships (Bernard & Goodyear, 2013). The FIT system allows for a structured format to use within the counseling session that helps to ease new counselors’ anxiety and discomfort with ambiguity.
Additionally, by bringing the weekly feedback into counseling sessions, practicum students were able to clarify instances when the feedback was discrepant from how the client presented during session (Yates, 2012). This discrepancy between what the client reported on the measure and how they presented in session was often fertile ground for discussion. Counseling students believed bringing these discrepancies to a client’s attention deepened the therapeutic alliance because the counselor was taking time to fully understand the client (Yates, 2012).
FIT systems also add several benefits to the clinical supervision of counseling students. One such benefit is that clinical supervisors found weekly objective reports helpful in providing evidence of a client's progress during session that was not based solely upon their supervisees' self-report. This is crucial because relying on self-report as the sole method of supervision can be an insufficient way to gain information about the complexities of the therapeutic process (Bernard & Goodyear, 2013). Supervisors and practicum students both reported that the FIT system frequently brought to their attention potential concerns with clients that they had missed (Yates, 2012). A final benefit is that supervisees who utilized a FIT system during supervision reported significantly greater satisfaction with supervision and stronger supervisory alliances than students who did not utilize a FIT system (Grossl, Reese, Norsworthy, & Hopkins, 2014; Reese, Usher, et al., 2009).
Benefits for Clients
Several benefits exist for counseling clients when FIT systems are utilized in the therapeutic process. The sharing of objective progress information with clients has been found to be perceived as helpful and a generally positive experience by clients (Martin, Hess, Ain, Nelson, & Locke, 2012). Surveying clients using a FIT system, Martin et al. (2012) found that 74.5% of clients found it "convenient" to complete the instrument during each session. Approximately 46% of the clients endorsed that they had a "somewhat positive" experience using the feedback system, while 20% of clients reported a "very positive" experience. Hawkins, Lambert, Vermeersch, Slade, and Tuttle (2004) found that providing feedback to both clients and counselors significantly increased the clients' therapeutic improvement in the counseling process when compared to counselors who received feedback independently. A meta-analysis of several research studies, including Hawkins et al. (2004), found effect sizes of clinical efficacy related to providing per-session feedback ranged from 0.34 to 0.92 (Shimokawa et al., 2010). These investigations found more substantial improvement in clients whose counselors received consistent client feedback when compared with counselors who received no client feedback regarding the therapeutic process and symptomology. These data also showed that consistently providing feedback to clients helped prevent premature treatment termination (Lambert, 2010).
Utilization of FIT Systems for Counseling Curriculum and Student Learning Outcome Assessment
The formal assessment of graduate counseling student learning has increased over the past decade. The most recent update of the national standards from the Council for Accreditation of Counseling and Related Educational Programs (CACREP) included the requirement for all accredited programs to systematically track students at multiple points with multiple measures of student learning (CACREP, 2015, Section 4, A, B, C, D, E). Specifically, “counselor education programs conduct formative and summative evaluations of the student’s counseling performance and ability to integrate and apply knowledge throughout the practicum and internship” (CACREP, 2015, Section 4.E). The use of continuous client feedback within counselor education is one way to address such assessment requirements (Schmidt, 2014).
Counseling master’s programs impact students on both personal and professional levels (Warden & Benshoff, 2012), and part of this impact stems from ongoing and meaningful evaluation of student development. The development of counselors-in-training during experiential courses entails assessment of a myriad of counseling competencies (e.g., counseling microskills, case conceptualization, understanding of theory, ethical decision-making and ability to form a therapeutic relationship with clients; Haberstroh, Duffey, Marble, & Ivers, 2014). As per CACREP standards, counseling students will receive feedback during and after their practicum and internship experiences. This feedback typically comes from both the supervising counselor on site, as well as the academic department supervisor.
Additionally, “supervisors need to help their supervisees develop the ability to make effective decisions regarding the most appropriate clinical treatment” (Owen, Tao, & Rodolfa, 2005, p. 68). One suggested avenue for developing such skills is client feedback using FIT systems. The benefit of direct client feedback on the counseling process has been well documented (Minami et al., 2009), and this process can also be useful to student practice and training. Counseling students can greatly benefit from the use of client feedback throughout their training programs (Reese, Usher, et al., 2009). In this way, counselors-in-training learn to acknowledge client feedback as an important part of the counseling process, allowing them to adjust their practice to help each client on an individual basis. Allowing for a multi-layered feedback model wherein the counselor-in-training can receive feedback from the client, site supervisor and academic department supervisor has the potential to maximize student learning and growth.
Providing students feedback for growth through formal supervision is one of the hallmarks of counseling programs (Bernard & Goodyear, 2013). However, a more recent focus throughout higher education is the necessity of assessment of student learning outcomes (CACREP, 2015). This assessment can include “systematic evaluation of students’ academic, clinical, and interpersonal progress as guideposts for program improvement” (Haberstroh et al., 2014, p. 28). As such, evaluating student work within the experiential courses (e.g., practicum and internship) is becoming increasingly important.
FIT systems provide specific and detailed feedback regarding clients' experiences within therapy. Having access to documented client outcomes and progress throughout the counseling relationship can provide an additional layer of information regarding student growth and skill development. For instance, if a student consistently has clients who drop out or show no improvement over time, those outcomes could represent a problem or unaddressed issue for the counselor-in-training. Conversely, if a student has clients who report positive outcomes over time, those data could indicate clinical understanding and positive skill development.
Student learning outcomes can be assessed in a myriad of ways (e.g., FIT systems, supervisor evaluations, student self-assessment and exams; Haberstroh et al., 2014). Incorporating multiple layers of feedback for counseling students allows for maximization of learning through practicum and internships and offers a concrete way to document and measure student outcomes.
An Example: Case Study
Students grow and develop through a wide variety of methods, including feedback from professors, supervisors and clients (Bernard & Goodyear, 2013). Implementing a FIT system into experiential classes in counseling programs allows for the incorporation of structured, consistent and reliable feedback. We use a case example here to illustrate the benefits of such implementation. Within the case study, each CACREP Student Learning Outcome that is met through the implementation of the FIT system is documented.
A counselor educator is the instructor of an internship class where students have a variety of internship placements. This instructor decides to have students implement a FIT system that will allow them to track client progress and the strength of the working alliance. The OQ 45.2 and the SRS are chosen because they allow students to track client outcomes and the counseling relationship and are easy to administer, score and interpret. In the beginning of the semester, the instructor provides a syllabus to the students listing the following expectations: (1) students will have their clients fill out the OQ 45.2 and the SRS during every session with each client; (2) students will learn to discuss and process the results from the OQ 45.2 and SRS in each session with the client; and (3) students will bring all compiled information from the measures to weekly supervision. By incorporating two FIT systems and the subsequent requirements, the course is meeting over 10 CACREP (2015) learning outcome assessment components within Sections 2 and 3, Professional Counseling Identity (Counseling and Helping Relationships, Assessment and Testing), and Professional Practice.
A student, Sara, begins seeing a client at an outpatient mental health clinic who has been diagnosed with major depressive disorder; the client's symptoms include suicidal ideation, anhedonia and extreme hopelessness. Sara's initial response includes anxiety because she has never worked with someone who has active suicidal ideation or such an extreme presentation of depressed affect. Sara's supervisor spends time discussing how she will use the FIT systems in her work with the client and reminds her about the necessities of safety assessment.
In her initial sessions with her client, Sara incorporates the OQ 45.2 and the SRS into her sessions as discussed with her supervisor (CACREP Section 2.8.E; 2.8.K). However, after a few sessions, she does not yet feel confident in her work with this client. Sara feels constantly overwhelmed by the depth of her client's depression and is worried about addressing the suicidal ideation. Her instructor is able to use the weekly OQ 45.2 and SRS forms as a consistent baseline and guide for her work with this client and to help Sara develop a treatment plan that is specifically tailored for her client based upon the client's symptomology (CACREP Section 2.5.H, 2.8.L). Using the visual outputs and compiled graphs of weekly data, Sara is able to see small changes that may or may not be taking place for the client regarding his depressive symptoms and overall feelings and experiences in his life. Sara's instructor guides her to discuss these changes with the client and explore in more detail the client's experiences within these symptoms (CACREP Section 2.5.G). By using this data with the client, Sara will be better able to help the client develop appropriate and measurable goals and outcomes for the therapeutic process (CACREP Section 2.5.I). Additionally, as a new counselor, such an assessment tool provides Sara with structure and guidance as to the important topics to explore with clients throughout sessions. For example, by using some of the specific content on the OQ 45.2 (e.g., I have thoughts of ending my life, I feel no interest in things, I feel annoyed by people who criticize my drinking, and I feel worthless), she can train herself to assess for suicidal ideation and overall diagnostic criteria (CACREP Section 2.7.C).
Additionally, Sara is receiving feedback from the client by using the SRS measure within session. In using this additional FIT measure, Sara can begin to gauge her personal approach to counseling with this client and receive essential feedback that will help her grow as a counselor (CACREP, Section 2.5.F). This avenue provides an active dialogue between client and counselor about the work they are doing together and whether they are working on the pieces that are important to the client. Her instructor is able to provide both formative and summative feedback on her overall process with the client, using his outcomes as a guide to her effectiveness as a clinician (CACREP, Section 3.C). Implementing a FIT system gives the process of feedback provision concrete markers and structure, ultimately allowing a student counselor to grow in his or her ability to become self-reflective about his or her own practice.
Implications for Counselor Education
The main implications of the integration of FIT systems into counselor education are threefold: (1) developmentally appropriate interventions to support supervisee/trainee clinical growth; (2) intentional measurement of CACREP Student Learning Outcomes; and (3) specific attention to client care and therapeutic outcomes. There are a variety of FIT systems being utilized, and while they vary in scope, length, and targets of assessment, each has a brief administration time and can be repeated frequently for current client status and treatment outcome measurement. With intentionality and dedication, counselor education programs can work to implement the utilization of these types of assessment throughout counselor trainee coursework (Schmidt, 2014).
FIT systems offer clear benefits for training competent emerging counselors. Evaluating a beginning counselor’s clinical understanding and skills is a key component of assessing overall learning outcomes. When counselors-in-training receive frequent feedback on their clients’ current functioning or session outcomes, they are given the opportunity to bring concrete information to supervision, decide on treatment modifications as indicated, and openly discuss the report with clients as part of treatment. Gathering data on a client’s experience in treatment brings valuable information to the training process. Indications of challenges or strengths with regard to facilitating a therapeutic relationship can be addressed and positive change supported through supervision and skill development. Additionally, by learning the process of ongoing assessment and therapeutic process management, counselor trainees are meeting many of the CACREP Student Learning Outcomes. The integration of FIT systems into client care supports a wide variety of clinical skill sets, such as understanding clinical assessment, managing a therapeutic relationship, and planning or altering treatment based on client needs.
Finally, therapy clients also benefit from the use of FIT. Clinicians who receive weekly feedback on per-session client progress consistently show improved effectiveness and have clients who prematurely terminate counseling less often (Lambert, 2010; Shimokawa et al., 2010). In addition to clients and counselors, supervisors also have been shown to benefit from FIT systems. One of the most important responsibilities of a clinical supervisor is to manage and maintain a high level of client care (Bernard & Goodyear, 2013). Incorporating a structured, validated assessment, such as a FIT system, allows for intentional oversight of the client–counselor relationship and the clinical process taking place between supervisees and their clients. Overall, the integration of FIT systems into counselor education would provide programs with myriad benefits, including the ability to meet student, client and educator needs simultaneously.
Conclusion
FIT systems provide initial and ongoing data on a client’s psychological and behavioral functioning across a variety of concerns. They have been developed and used as continual assessment procedures that provide frequent, ongoing self-reports from clients. FIT systems have been used effectively to provide vital mental health information within a counseling session. Their unique features include the potential for recurrent, routine measurement of a client’s symptomatology; easily accessible and usable data for counselor and client; and assistance in setting benchmarks and altering treatment strategies to improve a client’s functioning. With intentionality, counselor education programs can use FIT systems to meet multiple needs across their curricula, including more advanced supervision practices, measurement of CACREP Student Learning Outcomes, and better overall client care.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Bernard, J. M., & Goodyear, R. K. (2013). Fundamentals of clinical supervision (5th ed.). Boston, MA: Merrill.
Center for Collegiate Mental Health. (2012). CCAPS 2012 technical manual. University Park: Pennsylvania State University.
Council for Accreditation of Counseling and Related Educational Programs (CACREP). (2015). 2016 accreditation standards. Retrieved from http://www.cacrep.org/for-programs/2016-cacrep-standards
Derogatis, L. R. (1983). The SCL-90: Administration, scoring, and procedures for the SCL-90. Baltimore, MD: Clinical Psychometric Research.
Duncan, B. L., Miller, S. D., Sparks, J. A., Claud, D. A., Reynolds, L. R., Brown, J., & Johnson, L. D. (2003). The Session Rating Scale: Preliminary psychometric properties of a “working” alliance measure. Journal of Brief Therapy, 3, 3–12.
Grossl, A. B., Reese, R. J., Norsworthy, L. A., & Hopkins, N. B. (2014). Client feedback data in supervision: Effects on supervision and outcome. Training and Education in Professional Psychology, 8, 182–188.
Haberstroh, S., Duffey, T., Marble, E., & Ivers, N. N. (2014). Assessing student-learning outcomes within a counselor education program: Philosophy, policy, and praxis. Counseling Outcome Research and Evaluation, 5, 28–38. doi:10.1177/2150137814527756
Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart, D. W., Shimokawa, K., & Sutton, S. W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology, 61, 155–163.
Hatfield, D., & Ogles, B. M. (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice, 35, 485–491. doi:10.1037/0735-7028.35.5.485
Hawkins, E. J., Lambert, M. J., Vermeersch, D. A., Slade, K. L., & Tuttle, K. C. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14, 308–327. doi:10.1093/ptr/kph027
Lambert, M. J. (2010). Prevention of treatment failure: The use of measuring, monitoring, & feedback in clinical practice. Washington, DC: American Psychological Association.
Lambert, M. J., Hansen, N. B., & Finch, A. E. (2001). Patient-focused research: Using patient outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology, 69, 159–172.
Lambert, M. J., Hansen, N. B., & Harmon, S. C. (2010). Outcome Questionnaire system (The OQ system): Development and practical applications in healthcare settings. In M. Barkham, G. Hardy, & J. Mellor-Clark (Eds.), Developing and delivering practice-based evidence: A guide for the psychological therapies (pp. 141–154). New York, NY: Wiley-Blackwell.
Lambert, M. J., Hansen, N. B., Umphress, V., Lunnen, K., Okiishi, J., Burlingame, G. M., & Reisinger, C. (1996). Administration and scoring manual for the OQ 45.2. Stevenson, MD: American Professional Credentialing Services.
Locke, B. D., Buzolitz, J. S., Lei, P. W., Boswell, J. F., McAleavey, A. A., Sevig, T. D., Dowis, J. D., & Hayes, J. A. (2011). Development of the Counseling Center Assessment of Psychological Symptoms-62 (CCAPS-62). Journal of Counseling Psychology, 58, 97–109.
Locke, B. D., McAleavey, A. A., Zhao, Y., Lei, P., Hayes, J. A., Castonguay, L. G., Li, H., Tate, R., & Lin, Y. (2012). Development and initial validation of the Counseling Center Assessment of Psychological Symptoms-34 (CCAPS-34). Measurement and Evaluation in Counseling and Development, 45, 151–169. doi:10.1177/0748175611432642
Luborsky, L., Barber, J. P., Siqueland, L., Johnson, S., Najavits, L. M., Frank, A., & Daley, D. (1996). The Helping Alliance Questionnaire (HAQ–II): Psychometric properties. The Journal of Psychotherapy Practice and Research, 5, 260–271.
Martin, J. L., Hess, T. R., Ain, S. C., Nelson, D. L., & Locke, B. D. (2012). Collecting multidimensional client data using repeated measures: Experiences of clients and counselors using the CCAPS-34. Journal of College Counseling, 15, 247–261. doi:10.1002/j.2161-1882.2012.00019.x
Miller, S., & Duncan, B. (2000). The outcome rating scale. Chicago, IL: International Center for Clinical Excellence.
Miller, S., Duncan, B., & Johnson, L. (2000). The session rating scale. Chicago, IL: International Center for Clinical Excellence.
Miller, S. D., Duncan, B. L., Brown, J., Sparks, J. A., & Claud, D. A. (2003). The Outcome Rating Scale: A preliminary study of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy, 2, 91–100.
Minami, T., Davies, D. R., Tierney, S. C., Bettmann, J. E., McAward, S. M., Averill, L. A., & Wampold, B. E. (2009). Preliminary evidence on the effectiveness of psychological treatments delivered at a university counseling center. Journal of Counseling Psychology, 56, 309–320.
Owen, J., Tao, K. W., & Rodolfa, E. R. (2005). Supervising counseling center trainees in the era of evidence-based practice. Journal of College Student Psychotherapy, 20, 66–77.
Reese, R. J., Norsworthy, L. A., & Rowlands, S. R. (2009). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training, 46, 418–431. doi:10.1037/a0017901
Reese, R. J., Usher, E. L., Bowman, D. C., Norsworthy, L. A., Halstead, J. L., Rowlands, S. R., & Chisolm, R. R. (2009). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3, 157–168. doi:10.1037/a0015673
Schmidt, C. D. (2014). Integrating continuous client feedback into counselor education. The Journal of Counselor Preparation and Supervision, 6, 60–71. doi:10.7729/62.1094
Shimokawa, K., Lambert, M. J., & Smart, D. W. (2010). Enhancing treatment outcome of patients at risk of treatment failure: Meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology, 78, 298–311. doi:10.1037/a0019247
Warden, S. P., & Benshoff, J. M. (2012). Testing the engagement theory of program quality in CACREP-accredited counselor education programs. Counselor Education and Supervision, 51, 127–140. doi:10.1002/j.1556-6978.2012.00009.x
Worthen, V. E., & Lambert, M. J. (2007). Outcome oriented supervision: Advantages of adding systematic client tracking to supportive consultations. Counselling & Psychotherapy Research, 7, 48–53. doi:10.1080/14733140601140873
Yates, C. M. (2012). The use of per session clinical assessment with clients in a mental health delivery system: An investigation into how clinical mental health counseling practicum students and practicum instructors use routine client progress feedback (Unpublished doctoral dissertation). Kent State University, Kent, Ohio.
Chad M. Yates is an Assistant Professor at Idaho State University. Courtney M. Holmes, NCC, is an Assistant Professor at Virginia Commonwealth University. Jane C. Coe Smith is an Assistant Professor at Idaho State University. Tiffany Nielson is an Assistant Professor at the University of Illinois at Springfield. Correspondence can be addressed to Chad M. Yates, 921 South 8th Ave, Stop 8120, Pocatello, Idaho, 83201, yatechad@isu.edu.
Jun 26, 2015 | Author Videos, Volume 5 - Issue 3
Tyler Wilkinson, Rob Reinhardt
The use of technology in counseling is expanding. Ethical use of technology in counseling practice is now a stand-alone section in the 2014 American Counseling Association Code of Ethics. The Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act provide a framework for best practices that counselor educators can utilize when incorporating the use of technology into counselor education programs. This article discusses recommended guidelines, standards, and regulations of HIPAA and HITECH that can provide a framework through which counselor educators can work to design policies and procedures to guide the ethical use of technology in programs that prepare and train future counselors.
Keywords: counselor education, technology, best practice, HIPAA, HITECH
The enactment of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) brought forth a variety of standards addressing the privacy, security and transaction of individual protected health information (PHI; Wheeler & Bertram, 2012). According to the language of HIPAA (2013, §160.103), PHI is defined as “individually identifiable health information” (p. 983) that is transmitted by or maintained in electronic media or any other medium, with the exception of educational or employment records. “Individually identifiable health information” is specified as follows:
Information, including demographic data, that relates to:
- the individual’s past, present or future physical or mental health or condition,
- the provision of health care to the individual, or
- the past, present, or future payment for the provision of health care to the individual, and that identifies the individual or for which there is a reasonable basis to believe the information can be used to identify the individual. Individually identifiable health information includes many common identifiers. (U.S. Department of Health and Human Services [HHS], n.d.-b, p. 4)
The HIPAA standards identify 18 different elements that are considered to be part of one’s PHI. These include basic demographic data such as names, street addresses, elements of dates (e.g., birth dates, admission dates, discharge dates) and phone numbers. It also includes information such as vehicle identifiers, Internet protocol address numbers, biometric identifiers and photographic images (HIPAA, 2013, § 164.514, b.2.i).
According to language in HIPAA, the applicability of its standards, requirements and implementation only apply to “covered entities,” which are “(1) a health plan (2) a health care clearinghouse (3) a health care provider who transmits any health information in electronic form in connection with [HIPAA standards and policies]” (HIPAA, 2013, § 160.102). Covered entities have an array of required and suggested privacy and security measures that they must take into consideration in order to protect individuals’ PHI; failure to protect individuals’ information could result in serious fines. For example, one recent ruling found a university medical training clinic to be in violation of HIPAA statutes when network firewall protection had been disabled. The oversight resulted in a $400,000 penalty (Yu, 2013). Moreover, the recent implementation of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 increased the fines resulting from failure to comply with HIPAA, including fines for individuals claiming they “did not know” that can range from $100–$50,000 (Modifications to the HIPAA Privacy, 2013, p. 5583). The final omnibus ruling of HIPAA–HITECH, enforcing these violations, went into effect on March 26, 2013 (Modifications to the HIPAA Privacy, 2013; Ostrowski, 2014). Enforcement of the changes from the HITECH Act on HIPAA standards began on September 23, 2013, for covered entities (Modifications to the HIPAA Privacy, 2013).
Academic departments and universities must understand the importance of HIPAA and HITECH regulations in order to determine whether the department or university is considered a covered entity. Risk analysis and management need to be employed to avoid violations leading to penalties and fines (HIPAA, 2013, §164.308). Some counselor education programs that have students at medically related practicum or internship sites also may be considered business associates (see HIPAA, 2013, § 160.103) and would need to comply with HIPAA regulations (see HIPAA, 2013, § 160.105). The authors recommend that all counselor education programs confer with appropriate legal sources to understand any risks or liabilities related to HIPAA regulations and relationships with practicum and internship sites. Many states also have their own unique privacy laws that must be considered in addition to those described in HIPAA regulations. This article assumes that a counselor education department is not considered a covered entity under the regulations set forth by HIPAA. However, as an increasing number of counselor education programs incorporate the use of digital video or digital audio recordings, a need for a set of policies and procedures to guide the appropriate use of digital media is evident.
The authors believe that the regulations set forth by HIPAA and HITECH create a series of guidelines that could dictate best practices for counselor educators when considering how to utilize technology in the collection, storage and transmission of any individual’s electronic PHI (Wheeler & Bertram, 2012) within counselor education programs. HIPAA regulations (2013, §160.103) describe electronic protected health information (ePHI) as any information classified as PHI, as described above, either “maintained by” or “transmitted in” (p. 983) electronic media. For example, audio recordings used in practicum and internship courses are often collected electronically by digital recorders. If the recordings remain on the device, this protected information is being maintained in an electronic format. If the data is shared through e-mail or uploaded to a computer, then it is being transmitted in electronic format. As it relates to counselor training, the PHI that is collected could be real or fictitious (i.e., from someone role playing in the program). Though fictitious information is not necessarily protected, encouraging students to engage in implementing a set of policies and procedures guided by regulations of HIPAA and HITECH creates an experiential milieu whereby students become aware of and learn the importance of security and privacy when handling digital ePHI. The authors will discuss throughout this article how specific regulations from HIPAA and HITECH can be utilized to create a set of policies and procedures that guide the ways in which members of counselor education programs can handle any ePHI they encounter during their training. These direct experiences will give faculty and students greater familiarity with current HIPAA and HITECH regulations, thus making them better prepared to work ethically and legally in modern mental health culture.
This article is not meant to cover HIPAA and HITECH regulations in a comprehensive manner. Overviews of these standards have been written concerning the regulations of HIPAA and HITECH regarding the work of mental health practitioners (see Letzring & Snow, 2011). The degree to which the myriad regulations of HIPAA will be implemented in various counselor education programs will need to be decided by the members of individual programs and by necessary stakeholders. The authors hope to introduce a dialogue regarding the thoughtful use of technology in counselor education programs guided by the parameters set forth by HIPAA.
According to the Substance Abuse and Mental Health Services Administration (SAMHSA; 2013), the trend in mental health care treatment spending is in the direction of public (i.e., Medicare and Medicaid) and private insurance growth as a means of payment. Spending for all mental health and substance abuse services totaled $172 billion in 2009; moreover, this spending accounted for 7.4% of all health care spending that year. Additionally, it is projected that spending on all mental health and substance abuse services could reach $238 billion by 2020 (SAMHSA, 2014). However, the rate at which individuals pay out-of-pocket for mental health and substance abuse services is expected to decrease steadily (SAMHSA, 2014). Historical trends show out-of-pocket spending decreased from 18% of all spending in 1986 to 11% in 2009 (SAMHSA, 2013, 2014). It is projected that out-of-pocket spending for mental health treatment will level off to account for approximately 10% of all spending while Medicaid, Medicare, and private insurance will account for approximately 70% of spending (SAMHSA, 2014). The trend toward greater insurance use will increase the number of professional counselors who will be seen as or will be working within organizations that are considered HIPAA-covered entities. Implementing policies and procedures in counseling departments that incorporate some of the HIPAA regulations is a useful way to prepare future professionals for the working environment they will enter (SAMHSA, 2013).
The implementation of the HITECH Act (2009) as a supplement to HIPAA emphasized the need to make sure future counselors understand the importance of the increasing role of technology in the practice of counseling (Lawley, 2012). The HITECH Act established an expectation that professionals in health care must be familiar with technology, specifically as it relates to policies guiding the storage and transmission of ePHI. The objectives of HITECH include “the electronic exchange and use of health information and the enterprise integration of such information” and “the utilization of an electronic health record for each person in the United States by 2014” (HITECH, 2009, §3001.c.A, emphasis added). Additionally, HITECH strengthened the enforcement of penalties for those who violate HIPAA (Modifications to the HIPAA Privacy, 2013). A multi-tiered system of violations allows for civil money penalties to range from $100–$50,000 per violation (Modifications to the HIPAA Privacy, 2013). The American Counseling Association’s (ACA) 2014 Code of Ethics acknowledged the increasing use of technology by professional counselors by introducing a new section (Section H) addressing the ethical responsibility of counselors to understand proper laws, statutes, and uses of technology and digital media. Ethical counselors are expected to understand the laws and statutes (H.1.b), the uniqueness of confidentiality (H.2.b), and the proper use of security (H.2.d) regarding the use of technology and digital media in their counseling practice.
The mental health care system exists inside the broader health care system. As such, graduates of counseling programs must be familiar with HIPAA regulations and the various modes of technology to implement these regulations (ACA, 2014; Lawley, 2012). Students will be expected to understand what security and privacy standards are required of them once they begin working as counseling professionals (ACA, 2014). For example, the movement toward increased use of ePHI across health care will place increasing demands on students to understand how to appropriately keep electronic data private and secure. Counselor educators need to be mindful of how the use of technology in the practice of counseling is being taught and implemented with counseling students. Counselor educators should thoughtfully consider how students will learn the ways in which technology can be used professionally while maintaining ethical and legal integrity (Association for Counselor Education and Supervision [ACES] Technology Interest Network, 2007; Wheeler & Bertram, 2012). Having standards to guide the use of ePHI throughout counselor education programs is a way in which students can become knowledgeable and skilled regarding the laws and ethics surrounding digital media. Policies and procedures should include information guiding the ways in which students collect, store and transmit digital media (e.g., audio recordings or videotapes) while a member of the counseling program. By requiring students to utilize the ePHI (real or fictitious) they collect in accordance with policies and procedures informed by HIPAA and HITECH, students crystallize their understanding of these complicated laws.
HIPAA Compliance and Technology
Complying with HIPAA Privacy and Security Rules requires individuals to be mindful of policies and procedures, known as “administrative safeguards” (HIPAA, 2013, §164.308, p. 1029), and work to implement safeguards consistently. The HHS has made clear that it does not provide any type of credential to certify that an individual, business, software or device is HIPAA compliant (HHS, n.d.-a; Reinhardt, 2013). Complying with HIPAA rules requires organizations and individuals to address many different processes, of which the choice of hardware or software is only one aspect (Christiansen, 2000). Being HIPAA compliant is less about a certification or a credential on a device and more about having a set of policies and procedures in place that ensure the integrity, availability and confidentiality of clients’ ePHI (Christiansen, 2000; HHS, n.d.-b). Hardware and software technology companies that claim a product or an educational resource is HIPAA compliant are likely doing so for marketing purposes. Claims of this type are mostly meaningless (HHS, n.d.-a) and would not provide protection in the case of a breach (HITECH, 2009). Being HIPAA compliant is an “organizational obligation not a technical specification” (Christiansen, 2000, p. 7). The distinction is important for educators to understand as they seek to implement technology in counselor education programs. When establishing a set of policies and procedures within a counseling department, the recommendations set forth in describing the security and privacy of PHI in Part 164 of HIPAA (2013) can be an appropriate framework for establishing best practices for counselors and counselor educators. The general requirements in complying with HIPAA security standards are to ensure the confidentiality, integrity and availability of individuals’ ePHI while protecting against any reasonably anticipated threats to the security and privacy of said ePHI (HIPAA, 2013, §164.306.a).
The key phrase to consider is that covered entities are asked to protect against any “reasonably anticipated” (HIPAA, 2013, §164.306.a, p. 1028) threat. Educators must spend time considering reasonable, foreseeable risks. A primary responsibility is to create administrative safeguards that address any reasonable, foreseeable risks that the individual, department or covered entity identifies.
Before looking at key aspects of HIPAA Privacy and Security guidelines, key definitions should be understood:
- Administrative safeguards include policies and procedures used to manage the development, selection, implementation and security in protecting individuals’ ePHI (HIPAA, 2013, § 164.304).
- Authentication includes “the corroboration that a person is the one claimed” (HIPAA, 2013, § 164.304, p. 1027).
- Confidentiality defines “the property that data or information is not made available or disclosed to unauthorized persons or processes” (HIPAA, 2013, § 164.304, p. 1027).
- Encryption is “the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without the use of a confidential process or key” (HIPAA, 2013, § 164.304, p. 1027).
- Security incident is described as “the attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operation in an information system” (HIPAA, 2013, § 164.304, p. 1027).
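The HIPAA definition of encryption above can be made concrete with a toy sketch. The example below is a one-time-pad XOR transformation written for illustration only; the message and key are hypothetical, and real ePHI would require vetted, standards-based algorithms (e.g., AES) rather than hand-rolled code:

```python
import secrets

# Toy illustration of the HIPAA definition of encryption: an algorithmic
# process that transforms data so there is a low probability of assigning
# meaning to it without a confidential key. FOR ILLUSTRATION ONLY --
# never protect real ePHI with hand-rolled code.

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each byte of the plaintext with the corresponding key byte."""
    assert len(key) == len(plaintext), "one-time pad requires equal lengths"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"Client session note"           # hypothetical ePHI
key = secrets.token_bytes(len(message))    # the confidential key

ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message  # recoverable only with the key
```

Without the key, the ciphertext is statistically indistinguishable from random bytes, which is the property the regulatory definition describes.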
HIPAA (2013) standards are categorized as either required or addressable, as indicated in Section 164.306.d.1. The remainder of this article highlights the standards that the authors believe shape a set of best practices for counselor educators implementing technology in their counselor education programs. The degree to which a counseling program decides to implement the standards that are considered required or addressable will be determined by its status as a covered entity, state laws, the needs of the counseling program and the financial feasibility of implementing these standards.
Safeguards
HIPAA requires that all covered entities maintain policies and procedures that (1) ensure the confidentiality, integrity and availability of all electronic PHI, (2) protect against any reasonably (emphasis added) anticipated threats or hazards to the security or integrity of ePHI, (3) protect against any reasonably anticipated uses or disclosures of ePHI, and (4) ensure compliance by the workforce. The following sections will discuss ways in which HIPAA Privacy and Security rules can be utilized as best practices in counselor education programs so that foreseeable risks, threats and vulnerabilities may be minimized. Please note that this interpretation of safeguards is intended for the consideration of counselor education programs that are not covered entities, but may use HIPAA Privacy and Security rules to establish a set of policies and procedures as a means of best practice. (For a sample guide for counselor educators to use in developing policies and procedures, please contact the first author.)
Administrative Safeguards
Administrative actions and oversight make up an important component of the language within HIPAA (2013). Administrative safeguards consist of the policies and procedures designed to “manage the selection, development, [and] implementation” (§ 164.304, p. 1027) of the security and privacy of one’s ePHI. This section describes HIPAA standards to consider when establishing administrative safeguards.
Assigned responsibility. A faculty or staff member within the counselor education program should be identified as responsible for the development, oversight and implementation of the policies and procedures for the department. The faculty member needs to be familiar with the privacy and security policies of HIPAA in order to implement the policies and procedures and to facilitate student training in ways that address the specific needs of the program. Developing a relationship with a staff member in the university information technology department may result in collaborative efforts regarding specific procedures for the use of technology within the university.
Risk analysis. Before counselor educators can design a set of policies and procedures to guide appropriate technology use, the foreseeable risks must be analyzed. An accurate and thorough assessment is needed to identify potential risks to the protection and security of ePHI (HIPAA, 2013, §164.308) that is collected, stored and transmitted in the counseling program. Analyzing potential risk is essential to the minimization of potential disasters in the future (Dooling, 2013). HHS (2007) makes clear that it is important to spend time considering reasonably anticipated threats and vulnerabilities and then to implement policies and procedures to address the assessed risks. HIPAA security standards do not state that covered entities should protect against all possibly conceived threats, but those that can be “reasonably anticipated” based upon the technologies employed, work environments and employees of the covered entity. The National Institute of Standards and Technology (NIST; 2012) defines a threat “as any circumstance or event . . . with the potential to adversely impact organization operations . . . through an information system via unauthorized access, destruction, disclosure, or modification of information” (p. B-13). A risk is a measure of the probability of a threat triggering a vulnerability in the procedures that an organization uses to ensure the privacy and security of ePHI (NIST, 2012). Vulnerabilities are technical and non-technical weaknesses, which include limitations in utilized technology or ineffective policies within the organization (HHS, 2007). In counselor education programs, risk analysis may include looking at the threats and vulnerabilities associated with counseling students traveling between their residence, campus, and practicum or internship sites while carrying ePHI. Moreover, the analysis must include assessing the potential risks associated with the transmission and storage of protected information using technological media (e.g., e-mail, personal computers, cloud-based storage, external storage devices).
Risk management. Risk management is the ongoing process of implementing measures to reduce the threats that were determined as a part of the risk analysis (HHS, 2007). Once a counseling program has assessed and identified potential risks associated with the collection, transmission and storage of any identifiable information, it must begin to manage these risks. HHS has provided an example list of steps to assist organizations in conducting risk analysis and risk management (see Table 1). Members of counselor education programs can begin to incorporate programmatic policies and procedures that address how media containing ePHI should be handled by members of the program. The previously mentioned document (available from the first author) provides sample policies and procedures developed to serve as a guide for counseling programs. Many counselor education programs utilize student handbooks that detail policies related to the academic and professional expectations of students enrolled in their program. Incorporating an additional set of policies to address the treatment of ePHI is a seamless way to begin managing the risks of technology use in mental health. By implementing policies and procedures across the curriculum, students become increasingly knowledgeable and skilled at handling ePHI in an ethical manner.
Table 1
Example Risk Analysis and Risk Management Steps

Risk Analysis
1. Identify the scope of the analysis.
2. Gather data.
3. Identify and document potential threats and vulnerabilities.
4. Assess current security measures.
5. Determine likelihood of threat occurring.
6. Determine potential impact of threat occurrence.
7. Determine level of risk.
8. Identify security measures and finalize documentation.

Risk Management
1. Develop and implement a risk management plan.
2. Implement security measures.
3. Evaluate and maintain security measures.

Note. Adapted from "Basics of Risk Analysis and Risk Management," by the U.S. Department of Health and Human Services, 2007, HIPAA Security Series, 2(6), p. 5.
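Steps 5 through 7 of the risk analysis (likelihood, impact, level of risk) can be sketched as a simple risk register. The threats, vulnerabilities and 1–3 rating scale below are hypothetical illustrations for a counseling program, not HHS requirements:

```python
# Hypothetical risk register: level of risk = likelihood x impact,
# with each factor rated 1 (low) to 3 (high).
risks = [
    {"threat": "laptop with session recordings stolen in transit",
     "vulnerability": "unencrypted local drive", "likelihood": 2, "impact": 3},
    {"threat": "ePHI e-mailed to an unintended recipient",
     "vulnerability": "no policy on e-mail transmission", "likelihood": 2, "impact": 2},
    {"threat": "cloud storage account compromised",
     "vulnerability": "weak passwords, no second factor", "likelihood": 1, "impact": 3},
]

def risk_level(risk):
    """Step 7: combine likelihood (step 5) and impact (step 6)."""
    return risk["likelihood"] * risk["impact"]

# Address the highest-scoring risks first when developing the
# risk management plan.
for r in sorted(risks, key=risk_level, reverse=True):
    print(f"{risk_level(r)}: {r['threat']} ({r['vulnerability']})")
```

A register like this also produces the documentation that step 8 calls for, and it can be revisited as part of ongoing risk management.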
Sanction policy. It must be communicated to all members of counselor education programs that failure to comply with the policies will result in sanctions. HIPAA (2013, §164.308) requires organizations to enforce sanctions against individual members for failing to comply with their organization’s policies and procedures. A counselor education program should have clearly documented policies and procedures for students and staff involved with the facilitation of ePHI. The language of HIPAA does not specify what these sanctions should entail; however, language needs to exist that addresses individuals’ failure to comply. For counseling students, a potential option is a tiered sanction policy similar to the structure established by the HITECH Act (Modifications to the HIPAA Privacy, 2013) and § 1176 of the Social Security Act (2013), in which categories of violations ranging from “did not know” (p. 5583) to uncorrected–willful neglect result in increasingly severe fines (Modifications to the HIPAA Privacy, 2013). Because students are still learning, sanctions could likewise be graduated by degree of culpability. For counselor education programs, this language also could easily be tied to the student remediation processes that many counseling programs utilize.
Information review. Ongoing review of the activity of students, faculty and staff that involves the creation, storage and transmission of ePHI is a required safeguard under HIPAA standards (2013, §164.308). As an educational unit, it is understandable that individuals might make mistakes in implementing HIPAA safeguards, which makes a regular review of system activity and of the records of the individuals whose ePHI is being collected important. Organizations are required to have policies in place for recording system activity, including access logs and incident reports (§164.308). Additionally, protections must be in place to ensure that only those individuals who should have access to any ePHI are able to access this protected information. In the case of the sanctioned university medical training clinic cited earlier, the breaches might have been avoided with an ongoing review of the system’s firewall settings (Yu, 2013). Monitoring and developing policies regarding information review may require building relationships with the appropriate information technology personnel at the organization.
Response, recovery and reporting plan. HIPAA regulations require that a covered entity have a plan in place should ePHI be breached or disclosed to an unauthorized party (HIPAA, 2013, § 164.308). When developing departmental policies and procedures, it is important to have such a plan in place. Whether the breach or disclosure is intentional or unintentional, each individual whose information has potentially been compromised needs to be notified. Moreover, in cases where more than 500 individuals’ PHI have been breached, the entity may need to report this information to local media or to HHS (HIPAA, 2013, §164.406–164.408). It should be noted that covered entities could be exempted from breach notification through employing security techniques such as encryption (Breach Notification, 2009; HIPAA, 2013, §164.314). The regulations of HIPAA require that a plan be in place to address emergencies (HIPAA, 2013, §164.308). In the case of theft, emergency or disaster, counseling departments need a data backup and recovery plan in place to retrieve ePHI.
Physical Safeguards
Establishing policies and procedures that protect against unauthorized physical access and damage from natural or environmental hazards is critical to maintaining the security and privacy of PHI (HIPAA, 2013, §164.310).
Access control. When using technology to store and transmit ePHI, the recommendation is that policies address ways in which physical access to protected information will be limited. For example, many counseling departments now incorporate the use of digitally recorded data from counseling sessions (e.g., audio or video). Policies need to clearly address how to best limit physical access to these recordings. Students need to understand what it means to keep data physically secure. The HITECH Act (Modifications to the HIPAA Privacy, 2013) includes the category “did not know” as a punishable violation. Students need to understand the consequences of failing to implement such physical safeguards. For example, keeping devices stored under lock and key when not in use is just one important step in moving toward a set of best practices. Many universities already require students to utilize login information with a username and passcode in order to access computers affiliated with their respective university. Consideration may need to be given regarding policies and procedures for accessing ePHI off campus, where the technical security may be less controlled.
Disposal and re-use. HIPAA requires covered entities to implement policies that address the disposal and re-use of ePHI on electronic media. A detailed discussion of the various types of disposal, also known as media sanitization, and re-use is beyond the scope of this article (see Kissel, Regenscheid, Scholl, & Stine, 2014). Counselor education programs must recognize the importance of properly removing protected information from media devices after it is no longer required. Media sanitization is a critical element in assuring confidentiality of information (Kissel et al., 2014). For example, in counseling internship courses, students may be asked to delete recorded sessions during the last day of classes so that the instructor can have evidence of the appropriate disposal of this information. NIST identifies four different types of media sanitization: disposal, clearing, purging and destroying (Kissel et al., 2014). The decision as to which type of media sanitization is appropriate requires a cost/benefit analysis, as well as an understanding of the available means to conduct each type of sanitization. (The authors recommend counseling departments consult with an individual from the university information technology department).
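As a rough illustration of NIST’s “clearing” category, a file’s contents can be overwritten before deletion. This sketch is simplified: on solid-state drives and journaling file systems, overwriting in place is not dependable, which is one reason to consult information technology staff about dedicated sanitization tools:

```python
import os

def clear_file(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes, then delete it.

    A simplified form of NIST 'clearing' (Kissel et al., 2014);
    not reliable on SSDs or journaled file systems, where copies
    of the data may persist outside the file's visible blocks.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random data
            f.flush()
            os.fsync(f.fileno())       # force the overwrite to disk
    os.remove(path)
```

In an internship course, for example, a routine like this could stand in for simply dragging a recording to the trash, which leaves the data recoverable.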
Technical Safeguards
The language in HIPAA is clear regarding the implementation of technical safeguards, requiring that access to electronic media devices containing PHI be granted only to those who need such access to perform their duties.
Unique user identification. If a device allows for unique user identification, one should be assigned to minimize the unintended access of ePHI. HIPAA standards (2013, §164.514) state that an assigned code should not be “derived from or related to information about the individual” (p. 1064).
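One way to satisfy that standard is to draw the identifier from a cryptographically secure random source, so that it has no relationship to the individual’s name, birth date or other personal information. The 16-character length here is an arbitrary illustrative choice:

```python
import secrets

def assign_user_id() -> str:
    """Return a random identifier that is not derived from or related
    to any information about the individual (cf. HIPAA §164.514)."""
    return secrets.token_hex(8)  # 16 hex characters, 64 random bits
```

The program would then keep the mapping between identifiers and individuals in a separately secured location.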
Emergency access. Covered entities are required to have procedures in place that allow ePHI to be accessed in the event of an emergency (HIPAA, 2013, §164.310). The procedures can be addressed within counselor education programs so as to ensure that the student and the supervisor have access to the ePHI at the designated storage location.
Encryption. Encryption is a digital means of increasing the security of electronic data. Using an algorithmic process, the data is scrambled so that the probability of interpretation is minimal without the use of a confidential key to decode the information. Though the language of HIPAA categorizes encryption as addressable rather than required, the implementation of encryption policies is a best practice to help ensure the protection of ePHI. The language of HIPAA makes it clear that an “addressable” item must be implemented if it is “reasonable and appropriate” (HIPAA, 2013, §164.306, p. 1028) to do so. Huggins (2013) has recommended that ePHI be stored on drives that allow for “full disk encryption” at a minimum strength of 128 bits. With the availability of many different types of software packages that can encrypt at a recommended strength, implementing encryption standards in a counseling department is affordable and reasonable. Most modern computer operating systems have options to encrypt various drives built into the functionality of the system. Full disk encryption is recommended because of its higher level of security and also because it can provide exemption from the Breach Notification Rule mentioned earlier (Breach Notification, 2009). In case of a breach, the burden is on the covered entity to prove that the ePHI was not accessed; otherwise, Breach Notification Rules must be followed. The assumption is that if a disk is fully encrypted, even if accessed by an unauthorized person, it is highly unlikely that an unauthorized party will obtain access to the ePHI (Breach Notification, 2009). The authors strongly encourage the use of encrypted devices as a standard policy for the collection and storage of ePHI (see Scarfone, Souppaya, & Sexton, 2007). The policy creates greater protection against the accidental disclosure of an individual’s ePHI. 
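The scrambling idea can be illustrated with a toy stream cipher that XORs data against a hash-derived keystream; without the key, the ciphertext is uninterpretable. This sketch is for intuition only and provides no authentication or disk-level protection; actual ePHI should be protected with vetted tools such as the 128-bit full disk encryption Huggins (2013) recommends:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key + nonce into a pseudorandom keystream (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh random nonce per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

Only a holder of the confidential key can reverse the XOR and recover the plaintext, which is the property that allows encrypted media to qualify for the safe harbor discussed above.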
Additionally, organizations that use commercial cloud storage service providers should investigate whether these providers are willing to sign a Business Associate Agreement, in which the provider agrees to adhere to regulations of HIPAA (2013, §160.103). If not, the storage of ePHI may not be in alignment with HIPAA standards.
Disk encryption works well for the storage and collection of protected information while at rest (Scarfone et al., 2007); however, counselor education programs also should consider assessing the risk associated with the transmission of ePHI (HIPAA, 2013, §164.312). Protected information often remains encrypted while at rest, yet becomes unencrypted while in transmission. Programs need to “guard against unauthorized access to electronic PHI that is being transmitted over an electronic communication network” (HIPAA, 2013, §164.312, p. 1032). Commonly used e-mail systems, for example, often do not transmit information in an encrypted state. Assessment of the risks in sending protected information by an unsecured means should be conducted.
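When ePHI must leave the local machine, transport-layer encryption can be enforced in client code. A minimal sketch of building a strict TLS context in Python, which could then be passed to, for example, `smtplib.SMTP.starttls()`:

```python
import ssl

# Build a client-side TLS context that verifies the server's identity
# and refuses protocol versions older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# These defaults are what make the transmission trustworthy:
assert context.check_hostname                    # server identity is verified
assert context.verify_mode == ssl.CERT_REQUIRED  # a valid certificate is required
```

Note that transport encryption protects data only while in transit; once an e-mail rests on the recipient’s server, it is again only as protected as that server.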
Discussion
The language of HIPAA allows each covered entity some leeway in how it wants to implement policies. However, HIPAA standards (2013, §164.316) are very clear that a covered entity should “implement reasonable and appropriate policies” (p. 1033) that include administrative, physical and technical safeguards that reasonably and appropriately protect the confidentiality, integrity and availability of electronic PHI that it creates, receives, maintains or transmits. The implementation of HITECH (2009) and the meaningful use policies of the Affordable Care Act (Medicare and Medicaid Programs, 2014) emphasized the movement of the broader health care system toward increasing use of health care technology such as Electronic Health Records. Students graduating from counseling programs find themselves working in myriad settings, many of which are considered covered entities as defined in the HIPAA standards (2013, §160.103). It is imperative for counselor educators to recognize the trend toward increased technology use in the health care market and to consider ways that technology can be infused into counselor education so that students are entering the workforce with greater technological competence. Specifically, counselor educators have an imperative to teach the ethical and legal technological mandates that exist as they relate to regulations of HIPAA (2013) and HITECH (2009) so as to create competent counselors. As the health care industry continues to incorporate more technology, counselor educators must stay informed regarding ways in which graduates will utilize this technology in their professional careers.
Recommendations for Counselor Educators
ACES (2007) published a document that recommends guidelines for infusing technology into counselor education curriculum, research and evaluation. This document provides a basic framework for guiding the broad use of technology in counseling programs, with technology presented as a useful enhancement or supplement to practice. The shift in the broader health care culture, however, has moved technology from a supplementary role into one that is primary to the ongoing success of a practitioner. The authors believe that counselor educators can utilize HIPAA and HITECH regulations to continue to infuse technology into counselor education programs, and recommend the following:
- Counselor educators need to increase the importance placed on technology in counselor education programs. The movement of technology into increasingly primary roles in health care is indicative of the need for it to become a primary focus during the education and training of counselors. Counselors and counselor educators must stay abreast of the trends and developments regarding health care law and technology. The implementation of Section H, “Distance Counseling, Technology, and Social Media,” in the 2014 ACA Code of Ethics also is indicative of this need. The counseling profession needs to increase the research, education and training available to counselors and counselor educators.
- Counselor educators need to have policies and procedures in place guiding the use of technology in their departments. The overview of HIPAA regulations will help provide guidelines for developing a set of policies and procedures. All policies and procedures must be in writing and accessible to students, faculty and staff who have access to any ePHI. Many counseling programs maintain a student handbook in which a set of standards that dictate the use of technology could easily be incorporated. Departmental policies should be in place that dictate the consequences should an individual fail to adhere to the stated policies and procedures.
- Counselor educators should be actively seeking ways in which technology and HIPAA can be incorporated to best prepare students for their future work environment. The regulations and language of HIPAA and HITECH should be addressed in course activities. Are counseling students getting opportunities to become familiar with Electronic Health Records? Are students having opportunities to write and store notes electronically? Have students addressed the ethical and legal concerns related to the use of technology in practice? Do students understand what it means to maintain encrypted files or how to appropriately de-identify ePHI? Do students understand how to submit health insurance claims electronically? Students who can answer questions like these will be prepared to work in the current mental health environment as competent professionals.
The use of technology in counseling is moving from a secondary to a primary place in counselor education. The expectation that students can pick this information up after graduation in the form of a workshop is no longer acceptable. The shifts in the language of HIPAA and HITECH have moved the broader health care field in an electronic, digital direction. Familiarity with technology is becoming a core competency of counselor education programs and faculty. HIPAA and HITECH provide a framework by which counselor educators can continue to infuse technology into the classroom and clinical experiences.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
American Counseling Association. (2014). ACA code of ethics. Alexandria, VA: Author.
Association for Counselor Education and Supervision Technology Interest Network. (2007). Technical competencies for counselor education: Recommended guidelines for program development. Retrieved from http://www.acesonline.net/sites/default/files/2007_aces_technology_competencies.pdf
Breach Notification for Unsecured Protected Health Information, 74 Fed. Reg. 162 (August 24, 2009) (to be codified at 45 CFR §§ 160 & 164).
Christiansen, J. (2000). Can you really get “HIPAA Compliant” software and devices? IT Health Care Strategist, 2(12), 1, 7–8.
Dooling, J. A. (2013). It is always time to prepare for disaster. Journal of Health Care Compliance, 15(6), 55–56.
Health Information Technology for Economic and Clinical Health (HITECH) Act, Title XIII § 13001 of Division A of the American Recovery and Reinvestment Act of 2009 (AARA), Pub. L. No. 111-5 (2009).
Health Insurance Portability and Accountability Act (HIPAA), 45 CFR §§ 160, 162, & 164 (2013). Retrieved from http://www.gpo.gov/fdsys/pkg/CFR-2013-title45-vol1/pdf/CFR-2013-title45-vol1-chapA-subchapC.pdf
Huggins, R. (2013, April 5). HIPAA “safe harbor” for your computer (the ultimate in HIPAA compliance): The compleat [sic] guide [Blog post]. Retrieved from http://www.personcenteredtech.com/2013/04/hipaa-safe-harbor-for-your-computer-the-ultimate-in-hipaa-compliance-the-compleat-guide/
Kissel, R., Regenscheid, A., Scholl, M., & Stine, K. (2014). Guidelines for media sanitization (NIST Special Publication No. 800-88, Rev. 1). Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf
Lawley, J. S. (2012). HIPAA, HITECH and the practicing counselor: Electronic records and practice guidelines. The Professional Counselor, 2, 192–200. doi:10.15241/jsl.2.3.192
Letzring, T. D., & Snow, M. S. (2011). Mental health practitioners and HIPAA. International Journal of Play Therapy, 20, 153–164. doi:10.1037/a0023717
Medicare and Medicaid Programs; Modifications to the Medicare and Medicaid Electronic Health Record (EHR) Incentive Program for 2014 and Other Changes to the EHR Incentive Program; and Health Information Technology: Revisions to the Certified EHR Technology Definition and EHR Certification Changes Related to Standards Final Rule, 79 Fed. Reg., 179 (September 4, 2014) (to be codified at 45 CFR pt. 170).
Modifications to the HIPAA Privacy, Security, Enforcement, and Breach Notification Rules Under the Health Information Technology for Economic and Clinical Health Act and the Genetic Information Nondiscrimination Act; Other Modifications to the HIPAA Rules; Final Rule, 78 Fed. Reg., 5566 (January 25, 2013) (to be codified at 45 CFR pts. 160 and 164).
National Institute of Standards and Technology. (2012). Guide for conducting risk assessments (NIST Special Publication No. 800-30, Rev. 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf
Ostrowski, J. (2014). HIPAA compliance: What you need to know about the new HIPAA-HITECH rules. Retrieved from http://www.nbcc.org/assets/HIPAA_Compliance.pdf
Reinhardt, R. (2013, October 3). Your software and devices are not HIPAA compliant [Blog post]. Retrieved from http://www.tameyourpractice.com/blog/your-software-and-devices-are-not-hipaa-compliant
Scarfone, K., Souppaya, M., & Sexton, M. (2007). Guide to storage encryption technologies for end user devices: Recommendations of the National Institute of Standards and Technology (NIST Special Publication No. 800-111). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-111/SP800-111.pdf
Social Security Act, 42 U.S.C. § 1176 (a)(1). (2013). Retrieved from http://www.ssa.gov/OP_Home/ssact/title11/1176.htm
Substance Abuse and Mental Health Services Administration. (2013). National expenditures for mental health services & substance abuse treatment, 1986–2009 (HHS Publication No. SMA-13-4740). Retrieved from http://store.samhsa.gov/shin/content//SMA13-4740/SMA13-4740.pdf
Substance Abuse and Mental Health Services Administration. (2014). Projections of national expenditures for treatment of mental and substance use disorders, 2010–2020 (HHS Publication No. SMA-14-4883). Retrieved from http://store.samhsa.gov/shin/content//SMA14-4883/SMA14-4883.pdf
U.S. Department of Health and Human Services. (n.d.-a). Be aware of misleading marketing claims. Retrieved from http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/misleadingmarketing.html
U.S. Department of Health and Human Services. (n.d.-b). Summary of the HIPAA privacy rule. Retrieved from http://www.hhs.gov/ocr/privacy/hipaa/understanding/summary/privacysummary.pdf
U.S. Department of Health and Human Services (HHS). (2007). Basics of risk analysis and risk management. Retrieved from http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/riskassessment.pdf
Wheeler, A. M. N., & Bertram, B. (2012). The counselor and the law: A guide to legal and ethical practice (6th ed.). Alexandria, VA: American Counseling Association.
Yu, E. H. (2013). HIPAA privacy and security: Analysis of recent enforcement actions. Journal of Health Care Compliance, 15(5), 59–61.
Tyler Wilkinson, NCC, is an Assistant Professor at Mercer University. Rob Reinhardt, NCC, is in private practice in Fuquay-Varina, NC. Correspondence may be addressed to Tyler Wilkinson, 3001 Mercer University Drive, AACC 475, Atlanta, GA 30341, Wilkinson_rt@mercer.edu.
Feb 6, 2015 | Volume 5 - Issue 1
Cheryl Neale-McFall, Christine A. Ward
The relationship between doctoral students and their chairpersons has been linked to students’ successful completion of their dissertations and programs of study. When students fail to complete their degrees, attrition rates rise and both programs and students suffer. The current study used a survey, developed by the first author from previous literature and themes drawn from a qualitative pilot study of recent counseling doctoral graduates, regarding the selection of a dissertation chairperson. The purpose of this study was to examine factors used by students to select their chairperson and behaviors exhibited by chairpersons as predictors of overall student satisfaction with their dissertation chairperson. One hundred thirty-three counselor education doctoral students participated in this study. Results suggest that specific selection criteria and chairperson behavior components significantly predict counseling doctoral students’ overall satisfaction with their dissertation chairpersons.
Keywords: counselor education, chairperson, attrition, dissertation, student satisfaction
The process of successfully completing a doctoral program depends upon a variety of factors, and degree completion hinges in large part on the dissertation process. Students, faculty, departments and the university as a whole are affected when doctoral students fail to complete their degrees (Council of Graduate Schools, n.d.-b; Garcia, Malott, & Brethower, 1988; Gardner, 2009; Goulden, 1991; Kritsonis & Marshall, 2008; Lenz, 1997; Lovitts, 2001). In the United States, doctoral attrition rates have been measured at 57% across disciplines (Council of Graduate Schools, n.d.-a). More recently, data have shown that attrition rates are declining in most doctoral programs; however, rates in the humanities have not followed this decline (Jaschik, 2007). Many students fall short of completing the dissertation or take much longer than expected to complete the dissertation due to a lack of supervision or mentorship (Garcia et al., 1988). In a meta-synthesis of 118 studies on doctoral attrition, the most frequent finding was that degree completion is related to the amount and quality of contact between doctoral students and their chairperson (Bair & Haworth, 2004).
Mentoring Relationships
Mentoring relationships are essential to doctoral education and contribute to timely dissertation completion (Council of Graduate Schools, n.d.-b; Garcia et al., 1988; Lovitts, 2001). Casto, Caldwell, and Salazar (2005) examined the importance of mentoring relationships between counselor education students and faculty members. They discussed the benefits of having a counselor education mentor to assist with co-teaching, carrying out research activities, and enhancing professional competence and identity development. Kolbert, Morgan, and Brendel (2002) also noted that counselor education doctoral students benefit from faculty mentors who guide students through interactive tasks such as supervision, research, co-teaching, administration, advising and helping new graduates find employment. Although the types of interactions between doctoral students and their faculty chairperson have been documented, the relative influences of these interactions on the overall student–chairperson relationship remain unclear.
Selection and Behaviors
Chairperson behaviors and the criteria used by doctoral students to select their chairperson influence student relationship satisfaction and degree completion (Goulden, 1991; Lovitts, 2001). Lovitts (2001) found that the amount of time faculty spent interacting with students, the location of interactions (formal vs. informal settings), and the quantity of work and social interactions with students all influenced doctoral students’ satisfaction with their chairperson. In addition, participants in the study who failed to complete their doctoral degree were six times more likely to have been assigned a chairperson rather than to have chosen one. Furthermore, students who completed their degrees reported feeling much more satisfied with their advisors than students who did not.
Wallace (2000) researched meaningful student–chairperson relationships and the process by which students are assigned or select a chairperson, and found that previous interactions, personality matching and similar research interests were the three most common factors of meaningful relationships in the dyads. Smart and Conant (1990) conducted a qualitative study examining faculty members’ perceptions of key factors that doctoral students should consider when selecting a chairperson. The top suggestions were for someone with similar research interests, someone with a thriving reputation for publishing and someone well educated in methodology (Smart & Conant, 1990). Although this combination can equal success for some doctoral students, researchers also have identified other variables that contribute to a successful student–chairperson relationship. For example, Bloom, Propst Cuevas, Hall, and Evans (2007) accumulated letters of nomination for outstanding advisors. Five overarching behaviors of outstanding advisors included the following: demonstrating genuine care for students, being accessible, acting as a role model in professional and personal matters, individually tailoring guidance, and proactively integrating students into the profession (Bloom et al., 2007). Emerging themes centered on the importance of support and nurturing rather than on the research background or reputation of the chairperson.
Zhao, Golde, and McCormick (2007) set out to examine how selection of a chairperson and chairpersons’ behaviors affect doctoral student satisfaction, noting that the process by which students and chairpersons come together is relatively unexplored. Data for the study were gathered from a national survey of advanced doctoral students across 11 disciplines at 27 leading doctorate-producing universities with over 4,000 student participants. The four broad discipline areas included humanities, social sciences, physical sciences and biological sciences. Results revealed differences among disciplines for selection, behaviors and satisfaction. For the humanities and social sciences, categories under which counselor education falls, academic advising contributed most to student satisfaction. Cheap labor, which was more of a factor in physical and biological sciences, was least important for humanities and social science students. Further, humanities students noted that intellectual compatibility and advisor reputation were most influential in selecting a chairperson, while potential pragmatic benefit resulting from working with the chairperson was rated unfavorably. Results suggest that overall satisfaction with the advising relationship, especially in the humanities, is positively correlated with advisor choice and advisor behaviors (Zhao et al., 2007).
Research indicates that the relationship between the doctoral student and the chairperson is a key element in determining the student’s success in completing his or her degree (Bloom et al., 2007). Much of the previous research in the area of assessing behaviors has been conducted in a qualitative manner in order to give voice to the participants. All of these studies have been informative across disciplines; however, researchers have acknowledged that “a limited amount of research focusing on counselor education doctoral students has been conducted” (Protivnak & Foss, 2009, p. 240).
Purpose of the Study
The purpose of this study was to determine which variables are most influential in predicting counseling doctoral students’ and recent graduates’ overall satisfaction with their dissertation chairperson. Throughout the literature, terms such as advisor, chair and chairperson have been utilized; for the purpose of this study, the term chairperson is used. The research questions for this study included the following: (a) What selection criteria, if any, predict doctoral students’ and recent graduates’ overall satisfaction with their chairperson? and (b) What chairperson behaviors, if any, predict doctoral students’ and recent graduates’ overall satisfaction with their chairperson?
Method
Participants and Procedures
Counselor education doctoral students who had successfully proposed their dissertation and counselor education graduates who had defended their dissertation within 24 months of the date of the study were invited to participate. A survey instrument, designed by the first author using previous literature and a qualitative grounded theory pilot study, was posted on SurveyMonkey. E-mails were distributed to CACREP-accredited department chairs and an invitation to participate was posted on CESNET, the counselor education listserv. The number of potential participants who fit the above criteria is unknown. An a priori power analysis was conducted to determine the number of participants needed; assuming a medium effect size of .05 at power = .80, a minimum of 91 participants was required (Cohen, 1992). After an 8-week period, 133 participants completed the survey, with 122 protocols valid and used for analysis.
Participant characteristics. Demographic information from the 122 participants was summarized and examined. Ages ranged from 26–63 years, with a mean age of 37. Ninety-one participants identified as female, 29 as male and one as transgender, and one declined to answer. The majority of participants identified as White (72%) or African American (18%), with a small percentage identifying as Asian American (1.6%), Hispanic (2.5%), Native American (1.6%), and biracial (1.6%). Of the 122 participants, 42% were counselor education graduates and 58% were counselor education doctoral candidates. Lastly, 107 (88%) participants indicated that they had selected their chairperson and 15 (12%) indicated that their chairperson had been assigned to them.
Instrumentation
The survey instrument, developed in a qualitative pilot study, consisted of four sections: demographic items, participant selection criteria (e.g., is doing research similar to my dissertation topic), chairperson behaviors (e.g., provided effective feedback on my dissertation work) and participants’ overall satisfaction with their dissertation chairperson (e.g., overall, how satisfied were you with your dissertation chairperson?). An informed consent agreement appeared at the beginning of the survey and participants were required to confirm their consent in order to proceed to the overall survey.
Item generation. Survey items were developed based on the aforementioned qualitative pilot study. Grounded theory axial coding was used to derive key themes, which were combined with prominent themes from the existing literature (Bair & Haworth, 2004; Gardner, 2009; Goulden, 1991; Kritsonis & Marshall, 2008; Lovitts, 2001; Zhao et al., 2007) to develop survey items for the major constructs. These constructs were as follows: selection criteria used by doctoral students when choosing a dissertation chairperson (selection criteria); behaviors exhibited by the chairperson throughout the dissertation process (behaviors); and doctoral students’ satisfaction with their dissertation chairperson (satisfaction). Multiple survey questions were developed for each prominent theme to ensure comprehensive coverage of each construct (DeVellis, 2003).
Content validity. To ensure the appropriateness of the items for the study, the initial list of items was sent to a panel of counselor educators who had recently (within the last 5 years) completed their doctoral dissertation in a CACREP-accredited counseling program. Changes included adding one demographic question, rewording two selection items and removing one chairperson behavior item deemed redundant. The final instrument consisted of 62 items.
Data Analysis
Data screening. Surveys were assessed to identify incomplete responses. Eleven cases were removed, leaving a total of 122 valid surveys (N = 122). Every variable had less than 5% missing values; therefore, listwise deletion was used. Linearity and normality were examined, and no variables violated assumptions.
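The screening rule described above (retain variables with less than 5% missing data, then delete cases listwise) can be sketched as follows; `screen_surveys` and its threshold argument are hypothetical names for illustration, not the authors' code:

```python
import numpy as np
import pandas as pd

def screen_surveys(df, max_missing_frac=0.05):
    """Drop any variable whose missing-data rate exceeds the threshold,
    then apply listwise deletion (drop every case with a missing value)."""
    frac_missing = df.isna().mean()                      # per-variable missing rate
    keep = frac_missing[frac_missing <= max_missing_frac].index
    return df[keep].dropna()
```

For example, a 100-case file with 2% missing values on one variable keeps that variable and loses only the two incomplete cases.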
A principal component analysis (PCA) was performed in order to appropriately group individual survey items into subscales for each of the constructs. Scree plots, eigenvalues and communalities were examined to determine the appropriate factor structure for the instrument’s subscales. The final PCA for selection criteria revealed four components, with an alpha reliability of .79 and 53% of variance accounted for within the four components (success/reputation, research/methodology, collaborative style, obligation/cultural). Component titles were chosen based on the questions that loaded into each component (see Appendix A for selection criteria components, items and loadings within each component). The final PCA for chairperson behaviors revealed five components, with an alpha reliability of .94 and 67% of variance accounted for within the five components (work style, personal connection, academic assistance, mentoring abilities and professional development; see Appendix B for chairperson behavior components, items and loadings within each component).
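The retention and reliability statistics reported here follow standard formulas. A minimal sketch, assuming a cases-by-items matrix and the Kaiser eigenvalue-greater-than-one criterion (the authors also consulted scree plots and communalities, which this sketch omits):

```python
import numpy as np

def kaiser_components(X):
    """Number of components with eigenvalue > 1 (Kaiser criterion),
    from the correlation matrix of an item matrix X (cases x items)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int((eigvals > 1.0).sum())

def cronbach_alpha(X):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score)."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

On simulated items driven by a single latent factor, the Kaiser rule recovers one component and alpha is high, mirroring the internally consistent subscales reported above.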
Separate multiple regression analyses were conducted to investigate which selection criteria and which chairperson behaviors were most influential in predicting participants’ overall satisfaction with their chairperson, using the components identified in the PCAs as predictor variables. In regard to selection criteria, 15 participants stated that they had been assigned to a chairperson and were therefore eliminated from this portion of the analysis, leaving 107 eligible participants. Prior to each regression, Mahalanobis distances were computed for the grouped quantitative variables to screen for multivariate outliers. Within selection criteria, three cases exceeded the chi-square critical value, and for satisfaction items, one case exceeded the critical value, leaving a valid pool of 103 participants. Within chairperson behaviors, seven cases exceeded the chi-square critical value, and for satisfaction items, one case exceeded the critical value, leaving a valid pool of 114 participants.
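The multivariate-outlier screen described above is conventionally computed by comparing squared Mahalanobis distances against a chi-square critical value with degrees of freedom equal to the number of variables. A sketch under that convention (the study does not report its alpha level for the critical value; .001 is a common choice and an assumption here):

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.001):
    """Flag cases whose squared Mahalanobis distance from the sample
    centroid exceeds the chi-square critical value at the given alpha."""
    mean = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mean
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances
    crit = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > crit
```

Flagged cases would then be removed before fitting the regression, as was done for the 103- and 114-participant pools above.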
Results
Analyses focused on selection criteria and chairperson behaviors as predictors of counselor education doctoral students’ satisfaction with their dissertation chairperson. Regression results for selection criteria indicated that the overall model significantly predicted overall satisfaction, R² = .251, R²adj = .219, F(4,98) = 7.87, p ≤ .001. This model accounted for 25.1% of the variance in overall satisfaction. Review of the regression coefficients indicated that only one component, collaborative style, significantly contributed to the final model (β = .445, t(101) = 4.58, p ≤ .001; see Table 1).
Table 1

Rank Order for Selection Criteria

| Component             | Rank | b     | SE   | β     | Partial r | t     | p     |
|-----------------------|------|-------|------|-------|-----------|-------|-------|
| Collaborative style   | 1    | .376  | .082 | .445  | 0.43      | 4.56  | .000* |
| Success/reputation    | 2    | .058  | .077 | .084  | 0.08      | 0.75  | .457  |
| Research/methodology  | 3    | .046  | .078 | .060  | 0.06      | 0.58  | .560  |
| Obligation/cultural   | 4    | -.027 | .095 | -.026 | -0.03     | -0.28 | .779  |

* p ≤ .001
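The columns reported in tables of this kind (b, SE, β, t) come from an ordinary least squares fit, with β obtained by rescaling each slope by the predictor and outcome standard deviations. A self-contained sketch with illustrative data, not the study's:

```python
import numpy as np

def ols_summary(X, y):
    """Unstandardized slopes, their standard errors, standardized betas
    and t statistics for an OLS regression of y on the columns of X."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])            # add intercept column
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ coef
    mse = resid @ resid / (n - p - 1)                # residual mean square
    se = np.sqrt(np.diag(mse * np.linalg.inv(Xd.T @ Xd)))
    b = coef[1:]                                     # drop the intercept
    beta = b * X.std(axis=0, ddof=1) / y.std(ddof=1) # standardized slopes
    t = b / se[1:]
    return b, se[1:], beta, t
```

On simulated data where only the first predictor matters, the first β and t dominate, which is the pattern Table 1 shows for collaborative style.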
Regression results for chairperson behaviors indicated that the overall model significantly predicted overall satisfaction, R² = .720, R²adj = .707, F(5, 107) = 55.10, p ≤ .001. This model accounted for 72% of the variance in overall satisfaction. Review of the regression coefficients indicated that two components, work style (β = .390, t(111) = 4.96, p ≤ .001) and personal connection (β = .456, t(111) = 6.19, p ≤ .001), significantly contributed to the final model (see Table 2).
Table 2

Rank Order for Chairperson Behaviors

| Component                | Rank | b    | SE   | β    | Partial r | t    | p     |
|--------------------------|------|------|------|------|-----------|------|-------|
| Personal connection      | 1    | .498 | .080 | .456 | 0.51      | 6.19 | .000* |
| Work style               | 2    | .327 | .075 | .390 | 0.43      | 4.96 | .000* |
| Mentoring abilities      | 3    | .089 | .082 | .089 | 0.11      | 1.10 | .276  |
| Academic assistance      | 4    | .029 | .093 | .020 | 0.03      | 0.31 | .757  |
| Professional development | 5    | .010 | .053 | .012 | 0.02      | 0.18 | .856  |

* p ≤ .001
Because both regression models in research questions one and two were significant, a third regression was conducted to assess the selection criteria components and the behavior components together in predicting overall satisfaction with the participants’ chairperson. The intent of this analysis was to reveal possible interplay between the two separate constructs when predicting overall satisfaction. For this analysis, stepwise regression was used based on the previous regression results, with components entered in descending order of their beta values: personal connection, collaborative style, work style, mentoring abilities, success/reputation, research/methodology, obligation/cultural, academic assistance and professional development. Results indicate that two behavior components, work style and personal connection, and one selection component, success/reputation, together accounted for 72.7% of the variance in the dependent variable, overall satisfaction, and contributed significantly to the model (see Table 3).
Table 3

Chairperson Behaviors and Selection Criteria Model Summary

| Model   | R    | R²   | R²adj | ∆R²  | Fchg   | p    | df1 | df2 |
|---------|------|------|-------|------|--------|------|-----|-----|
| Model 1 | .770 | .593 | .589  | .593 | 138.52 | .000 | 1   | 95  |
| Model 2 | .846 | .715 | .709  | .122 | 40.14  | .000 | 1   | 94  |
| Model 3 | .853 | .727 | .719  | .012 | 4.23   | .043 | 1   | 93  |

Note. Model 1 = work style; Model 2 = work style and personal connection; Model 3 = work style, personal connection and success/reputation.
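Table 3's ∆R² and Fchg columns follow the standard hierarchical-regression F-change test: the R² gain from adding predictors is compared against the residual variance of the fuller model. A sketch with made-up data (the specific model steps here are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import f as f_dist

def r_squared(X, y):
    """R-squared for an OLS regression of y on the columns of X."""
    Xd = np.column_stack([np.ones(len(y)), X])
    resid = y - Xd @ np.linalg.lstsq(Xd, y, rcond=None)[0]
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def f_change(X_reduced, X_full, y):
    """Delta R-squared, F-change and p-value for adding predictors
    (the columns in X_full beyond those in X_reduced)."""
    n = len(y)
    r2_r, r2_f = r_squared(X_reduced, y), r_squared(X_full, y)
    df1 = X_full.shape[1] - X_reduced.shape[1]   # predictors added
    df2 = n - X_full.shape[1] - 1
    F = ((r2_f - r2_r) / df1) / ((1 - r2_f) / df2)
    p = 1 - f_dist.cdf(F, df1, df2)
    return r2_f - r2_r, F, p
```

A significant F-change, as in Model 3's .043, indicates that the added component explains variance beyond the components already entered.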
Discussion
The present study was conducted in order to better understand which variables best predict satisfaction in the relationship between counseling doctoral students and their dissertation chairperson. Specifically, the study was designed to address gaps in the literature regarding selection criteria and chairperson behaviors as predictors of satisfaction among counselor education doctoral students.
The authors sought to understand the extent to which selection criteria predict doctoral students’ overall satisfaction with their chairperson. Results from the regression analysis suggest that collaborative style significantly contributes to overall satisfaction with one’s dissertation chairperson. There are four items within the component of collaborative style, which include the following: work ethic, personality match, previous work with faculty member and faculty member willing to serve as chairperson. Results suggest that doctoral students’ perception of their ability to collaborate with their chairperson is most influential in predicting overall satisfaction in the relationship between the two. The items within this component seem to share a sense of alignment between the student and professor that focuses more on internal compatibilities, such as similar work ethic and similar personality styles, as opposed to external similarities and benefits, such as a focus on similar research interests or receiving a beneficial recommendation letter. Although there is limited research on how and why doctoral students select their dissertation chairperson, the findings from the present study support those of Wallace (2000), who found that both previous interactions and personality match are among the top themes for why doctoral students select their dissertation chairperson.
The second research question explored which chairperson behaviors best predict overall satisfaction with one’s chairperson. Results from the regression suggest that two components, work style and personal connection, significantly predict overall satisfaction, and the model containing the two components accounted for 72% of the variance in overall satisfaction. Work style includes items such as the following: spoke in “we” vs. “you” statements, provided appropriate structure, held me accountable and on track, provided effective feedback, and discussed expectations prior to the working relationship. Items within the personal connection component included the following: personable and comfortable to be around, used humor in our interactions, advocated for me with others, was patient with my progress, and was invested in me as a professional. The chairperson behavior components that were found to significantly contribute to students’ overall satisfaction with their chairperson seem to center on personal, mentoring and validating behaviors shown by chairpersons as perceived by students. The other components, which involve more external assistance (such as building professional relationships, assisting with career possibilities, and providing articles and tips for conducting research), did not significantly predict overall satisfaction. Current findings support previous research indicating that students feel more comfortable and more satisfied when expectations are shared and discussed up front (Friedman, 1987; Golde, 2005; Goulden, 1991). In addition, the current findings uphold previous research showing that students are more satisfied with their chairperson when the chairperson displays genuine care and regard for the student (Bloom et al., 2007).
However, results from the present study conflict with Zhao et al.’s (2007) findings, which showed that humanities and social science students identified academic advising as the most important factor in a satisfactory advising relationship. Although the current study’s work style component includes some items that reflect academic advising functions, most academic advising roles fall under the present study’s professional development and academic assistance components. Neither of these two components significantly predicted overall satisfaction in the present study.
As a follow-up to research questions one and two, a subsequent multiple regression analysis was conducted. The predictor variables included the four selection criteria components and the five chairperson behavior components. Results from the regression model suggest that three components, work style (behavior component), personal connection (behavior component) and success/reputation (selection component), together explained 72.7% of the variance in overall satisfaction. The same two chairperson behavior components (work style and personal connection) emerged in both the combined regression and the individual regression (research question two), but their beta weights were reversed, indicating that when selection criteria and behaviors are combined, work style contributes more to overall satisfaction than personal connection. For the selection criteria components, success/reputation was not significant in the individual regression analysis (research question one), but was significant in the combined regression analysis. This finding could be due to the fact that the items within the success/reputation component are more closely related to external behaviors, which seem to match more consistently with chairperson behaviors such as providing effective feedback and providing a good amount of structure. Interestingly, when the selection criteria components were entered without the chairperson behavior components, only collaborative style predicted overall satisfaction; however, success/reputation predicted overall satisfaction when combined with chairperson behaviors.
Previous research (Smart & Conant, 1990; Zhao et al., 2007) indicated that several of the selection items included in the success/reputation component are valuable factors to consider when selecting a chairperson; however, in the findings of the current study, these selection criteria only seem to play a significant role when combined with chairperson behavior components. Further, although the success and reputation of one’s chairperson may be an important factor for selecting a chairperson, it does not appear that the chairperson’s success and reputation contributes to a satisfactory relationship between student and chairperson.
Limitations
One of the primary limitations of this study is the use of a researcher-developed survey instrument as the sole measure of selection criteria, chairperson behaviors and overall satisfaction. Because the purpose of the study was not to establish the psychometric properties of the survey, it is difficult to gauge the reliability and validity of the survey with any certainty. Although both the selection criteria construct and the chairperson behavior construct revealed high alpha reliabilities (.79 and .94, respectively), additional research would have to be conducted in order to establish the overall psychometric properties of the survey.
Another limitation was the inclusivity of the sample. Initially, participants were to be recruited using emails sent by CACREP-accredited department chairs to eligible past and present doctoral students; however, due to a lack of responses, the survey request was opened up to CESNET, a counselor educator listserv. Within both forms of participant recruiting, it is unknown how many eligible participants received the request for participation; therefore, the rate of return is unknown. Additionally, since the demographic composition of the counselor education doctoral student population is unknown, it is unclear whether the sample of participants who chose to complete the survey is representative of the broader population. Thus, results from this analysis may not be generalizable to the overall population of counselor education doctoral students.
Recommendations for Future Research
Because the results from this study represent only the perspective of the doctoral student, future studies might include the voice of the dissertation chairperson, allowing researchers to gain a deeper and broader understanding of what constitutes a satisfactory relationship between chairperson and doctoral student. A larger, more thorough qualitative study, which might include focus groups and counselor education doctoral students who did not complete their program, also could add value to this topic. To construct a more robust survey, future researchers may want to allow participants an opportunity to share their own influential selection criteria or helpful chairperson behaviors, which may have been inadvertently excluded from the current list. Lastly, researchers might establish formal psychometric properties for the survey instrument.
Implications
Previous literature states that the relationship between a doctoral student and the dissertation chairperson is essential in determining the student’s successful completion and defense of his or her dissertation (Gardner, 2009; Lovitts, 2001). Findings from the current study reveal how counselor education doctoral students’ selection of their chairperson and the behaviors that the chairperson exhibits are influential in predicting students’ overall satisfaction with the student–chairperson relationship. Specifically, students who select their chairperson based on the chairperson’s work style and the students’ perceptions of their own abilities to collaborate with the chairperson appear to be more satisfied with their relationship with their chairperson than students who select their chairperson based on having a personal relationship. This knowledge can inform doctoral students and faculty members about the criteria and behaviors that contribute to good advising relationships and positive dissertation outcomes. Understanding the most influential selection criteria (similar work ethic, personality match, previous relationship) and chairperson behaviors (patience, investment in the relationship and the student, advocacy for the student, timely and effective feedback) can result in greater satisfaction in the student–chairperson relationship. This information has the potential to influence both students and faculty when making decisions about selection or behaviors that may lead to a favorable dissertation outcome.
Additionally, results from this study and future studies may provide information to programs on how to decrease doctoral student attrition. Being aware of potential behaviors displayed by faculty members in a myriad of roles throughout the program, such as chairperson, advisor, supervisor or professor, could assist in increasing doctoral students’ overall satisfaction. By utilizing the current study’s findings and understanding which selection criteria and chairperson behaviors are most likely to influence overall satisfaction, counselor educators can enhance their advising behaviors to best meet the needs of students, thereby increasing the likelihood that students will successfully defend their dissertations and graduate from the counselor education doctoral program.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Bair, C. R., & Haworth, J. G. (2004). Doctoral student attrition and persistence: A meta-synthesis of research. In J. C. Smart (Ed.), Higher education: Handbook of theory and research: Vol. 19 (pp. 481–534). Dordrecht, The Netherlands: Kluwer Academic.
Bloom, J. L., Propst Cuevas, A. E., Hall, J. W., & Evans, C. V. (2007). Graduate students’ perceptions of outstanding graduate advisor characteristics. NACADA Journal, 27(2), 28–35. doi:10.12930/0271-9517-27.2.28
Casto, C., Caldwell, C., & Salazar, C. F. (2005). Creating mentoring relationships between female faculty and students in counselor education: Guidelines for potential mentees and mentors. Journal of Counseling & Development, 83, 331–336. doi:10.1002/j.1556-6678.2005.tb00351.x
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. doi:10.1037//0033-2909.112.1.155
Council of Graduate Schools. (n.d.-a). Ph.D. completion and attrition: Analysis of baseline demographic data from the Ph.D. completion project—Executive summary. Retrieved from http://www.phdcompletion.org/information/executive_summary_demographics_book_ii.pdf
Council of Graduate Schools. (n.d.-b). Ph.D. completion project: Policies and practices to promote student success. Retrieved from http://www.phdcompletion.org/information/executive_summary_student_success_book_iv.pdf
DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Thousand Oaks, CA: Sage.
Friedman, N. (1987). Mentors and supervisors (IIE Research Report No.14). New York, NY: Institute of International Education.
Garcia, M. E., Malott, R. W., & Brethower, D. (1988). A system of thesis and dissertation supervision: Helping graduate students succeed. Teaching of Psychology, 15, 186–191. doi:10.1207/s15328023top1504_2
Gardner, S. K. (2009). Student and faculty attributions of attrition in high and low-completing doctoral programs in the United States. Higher Education, 58, 97–112. doi:10.1007/s10734-008-9184-7
Golde, C. M. (2005). The role of the department and discipline in doctoral student attrition: Lessons from four departments. The Journal of Higher Education, 76, 669–700. doi:10.1353/jhe.2005.0039
Goulden, N. R. (1991, April). Report on the perceptions of communication and relationship during the dissertation process by speech communication doctoral advisors and advisees. Association for Communication Administration Bulletin, 76, 39–48.
Jaschik, S. (2007, December 7). Hope on Ph.D. attrition rates – except in humanities. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2007/12/07/doctoral
Kolbert, J. B., Morgan, B., & Brendel, J. M. (2002). Faculty and student perceptions of dual relationships within counselor education: A qualitative analysis. Counselor Education and Supervision, 41, 193–206. doi:10.1002/j.1556-6978.2002.tb01283.x
Kritsonis, W. A., & Marshall, R. L. (2008). Doctoral dissertation advising: Keys to improvement of completion rates. National FORUM of Educational Administration and Supervision Journal, 25(3), 74–82.
Lenz, K. S. (1997). Nontraditional-aged women and the dissertation: A case study approach. New Directions for Higher Education, 99, 65–74. doi:10.1002/he.9906
Lovitts, B. E. (2001). Leaving the ivory tower: The causes and consequences of departure from doctoral study. Lanham, MD: Rowman & Littlefield.
Protivnak, J. J., & Foss, L. L. (2009). An exploration of themes that influence the counselor education doctoral student experience. Counselor Education and Supervision, 48, 239–256. doi:10.1002/j.1556-6978.2009.tb00078.x
Smart, D. T., & Conant, J. S. (1990). Marketing dissertations: Profiling the successful thesis candidate. Journal of Marketing Education, 12(3), 2–8. doi:10.1177/027347539001200301
Wallace, D. D. (2000). Critical connections: Meaningful mentoring relationships between women doctoral students and their dissertation chairpersons (Doctoral dissertation). Available from ProQuest Dissertations and Theses Database. (UMI No. 9998714)
Zhao, C.-M., Golde, C. M., & McCormick, A. C. (2007). More than a signature: How advisor choice and advisor behaviour affect doctoral student satisfaction. Journal of Further and Higher Education, 31, 263–281. doi:10.1080/0309877070142498
Appendix A

Component Loadings for Selection Criteria Construct

| Items                                                   | S/R  | R/M  | CS   | O/C   |
|---------------------------------------------------------|------|------|------|-------|
| Has a good reputation as a researcher                   | .810 |      |      |       |
| Has a good reputation as a dissertation chairperson     | .801 |      |      |       |
| Recommended by other colleagues or peers                | .733 |      |      |       |
| Higher chance of publishing my dissertation study       | .606 |      |      |       |
| Has excellent writing skills                            | .586 |      |      |       |
| For a beneficial recommendation letter                  | .537 |      |      |       |
| Number of chairpersons’ previous publications           | .460 |      |      |       |
| Is doing research similar to my dissertation topic      |      | .727 |      |       |
| I was approached by the faculty member                  |      | .630 |      |       |
| Previously worked with this person on research projects |      | .518 |      | .505  |
| Has the ability to understand my methodology            |      | .490 |      |       |
| Ability to use already collected data                   |      | .473 |      |       |
| We share a similar work ethic                           |      |      | .743 |       |
| Matches my personality style                            |      |      | .733 |       |
| Previously worked with this person as a professor       |      |      | .598 |       |
| Willing to serve as my chair                            |      |      | .519 |       |
| Felt obligated to work with this person                 |      |      |      | -.684 |
| Previously worked with this person in my assistantship  |      |      |      | .572  |
| Is the same race/ethnicity                              |      |      |      | -.493 |

Note. S/R = success/reputation; R/M = research/methodology; CS = collaborative style; O/C = obligation/cultural.
Appendix B

Component Loadings for Behavior Construct

| Items                                                    | WS   | PC   | AA   | MA   | PD   |
|----------------------------------------------------------|------|------|------|------|------|
| Spoke in “we” versus “you” statements                    | .756 |      |      |      |      |
| Provided appropriate structure                           | .732 |      |      |      |      |
| Held me accountable and on track                         | .725 |      |      |      |      |
| Provided effective feedback on my dissertation work      | .698 |      |      |      |      |
| Discussed expectations prior to the working relationship | .685 |      |      |      |      |
| Personable and comfortable to be around                  |      | .872 |      |      |      |
| Used humor in our interactions                           |      | .678 |      |      |      |
| Advocated for me with others                             |      | .670 |      |      |      |
| Was patient with my progress                             |      | .634 |      |      |      |
| Invested in me as a professional                         |      | .609 |      |      |      |
| Unwilling to see others’ perspectives*                   |      |      | .711 |      |      |
| Did not involve me in methodological decisions*          |      |      | .698 |      |      |
| Did not allow for flexibility and individuality*         |      |      | .693 |      |      |
| Did not focus on my strengths*                           |      |      | .647 |      |      |
| Did my research for me*                                  |      |      | .582 |      |      |
| Was difficult to schedule appointments*                  |      |      |      | .643 |      |
| Provided helpful edits                                   | .518 |      |      | .606 |      |
| Was accountable and dependable                           | .516 |      |      | .582 |      |
| Was patient with me and the dissertation process         |      | .519 |      | .573 |      |
| Sent me helpful research articles                        |      |      |      | .521 |      |
| Helped me develop relationships in the field             |      |      |      |      | .829 |
| Assisted with career possibilities                       |      |      |      |      | .694 |
| Taught me about research practices                       |      |      |      |      | .620 |

Note. WS = work style; PC = personal connection; AA = academic assistance; MA = mentoring abilities; PD = professional development.
* reverse-coded items; all loadings below .5 were suppressed.
Cheryl Neale-McFall, NCC, is an Assistant Professor at West Chester University of Pennsylvania. Christine A. Ward is an independent scholar. Correspondence can be addressed to 1160 McDermott Drive, Suite 102, West Chester, PA 19383, cneale@wcupa.edu.