Maribeth F. Jorgensen, William E. Schweinle
The 68-item Research Identity Scale (RIS) was informed by qualitative exploration of research identity development in master’s-level counseling students and practitioners. Classical psychometric analyses revealed that the items had strong validity and reliability and reflected a single factor. A one-parameter Rasch analysis and item review were used to reduce the RIS to 21 items. The RIS offers counselor education programs the opportunity to promote and quantitatively assess research-related learning in counseling students.
Keywords: Research Identity Scale, research identity, research identity development, counselor education, counseling students
With increased accountability and training standards, professionals as well as professional training programs have to provide outcomes data (Gladding & Newsome, 2010). Traditionally, programs have assessed student learning through outcomes measures such as grade point averages, comprehensive exam scores, and state or national licensure exam scores. Because of the goals of various learning processes, it may be important to consider how to measure learning in different ways (e.g., change in behavior, attitude, identity) and specific to the various dimensions of professional counselor identity (e.g., researcher, advocate, supervisor, consultant). Previous research has focused on understanding how measures of research self-efficacy (Phillips & Russell, 1994) and research interest (Kahn & Scott, 1997) allow for an objective assessment of research-related learning in psychology and social work programs. The present research adds to previous literature by offering information about the development and applications of the Research Identity Scale (RIS), which may provide counseling programs with another approach to measure student learning.
Student Learning Outcomes
When deciding how to measure the outcomes of student learning, it is important that programs start with defining the student learning they want to take place (Warden & Benshoff, 2012). Student learning outcomes focus on intellectual and emotional growth in students as a result of what takes place during their training program (Hernon & Dugan, 2004). Student learning outcomes are often guided by the accreditation standards of a particular professional field. Within the field of counselor education, the Council for Accreditation of Counseling & Related Educational Programs (CACREP) is the accrediting agency. CACREP promotes quality training by defining learning standards and requiring programs to provide evidence of their effectiveness in meeting those standards. In relation to research, the 2016 CACREP standards require research to be a part of professional counselor identity development at both the entry level (e.g., master’s level) and doctoral level. The CACREP research standards emphasize the need for counselors-in-training to learn the following:
The importance of research in advancing the counseling profession, including how to critique research to inform counseling practice; identification of evidence-based counseling practices; needs assessments; development of outcome measures for counseling programs; evaluation of counseling interventions and programs; qualitative, quantitative, and mixed research methods; designs in research and program evaluation; statistical methods used in conducting research and program evaluation; analysis and use of data in counseling; ethically and culturally relevant strategies for conducting, interpreting, and reporting results of research and/or program evaluation. (CACREP, 2016, p. 14)
These CACREP standards not only suggest that counselor development needs to include curriculum that focuses on and integrates research, but also identify a possible need to have measurement tools that specifically assess research-related learning (growth).
Research Learning Outcomes Measures
The Self-Efficacy in Research Measure (SERM) was designed by Phillips and Russell (1994) to measure research self-efficacy, a construct similar to research identity. The SERM is a 33-item scale with four subscales: practical research skills, quantitative and computer skills, research design skills, and writing skills. The scale is internally consistent (α = .96), and scores correlate highly with related variables such as the research training environment and research productivity. The SERM has been adapted for assessment in psychology (Kahn & Scott, 1997) and social work programs (Holden, Barker, Meenaghan, & Rosenberg, 1999).
Similarly, the Research Self-Efficacy Scale (RSES) developed by Holden and colleagues (1999) uses aspects of the SERM (Phillips & Russell, 1994), but includes only nine items to measure changes in research self-efficacy as an outcome of research curriculum in a social work program. The scale has excellent internal consistency (α = .94) and differences between pre- and post-tests were shown to be statistically significant. Investigators have noticed the value of this scale and have applied it to measure the effectiveness of research courses in social work training programs (Unrau & Beck, 2004; Unrau & Grinnell, 2005).
Unrau and Beck (2004) reported that social work students gained confidence in research after completing courses on research methodology. Students gained most from activities outside their research courses, such as participating in research with faculty members. Following up, Unrau and Grinnell (2005) administered the scale prior to the start of the semester and at the end of the semester to measure change in social work students’ confidence in performing research tasks. Overall, social work students varied greatly in their confidence before taking research courses and made gains throughout the semester. Unrau and Grinnell stressed that their results demonstrate the need for pre- and post-tests to better gauge how curriculum shapes the way students experience research.
Previous literature supports the use of scales such as the SERM and RSES to measure the effectiveness of research-related curricula (Holden et al., 1999; Kahn & Scott, 1997; Unrau & Beck, 2004; Unrau & Grinnell, 2005). These findings also suggest the need to continue exploring the research dimension of professional identity. It seems particularly important to measure concepts such as research self-efficacy, research interest, and research productivity, all of which are a part of research identity (Jorgensen & Duncan, 2015a, 2015b).
Research Identity as a Learning Outcome
The concept of research identity (RI) has received minimal attention (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004). Reisetter and colleagues (2004) described RI as a mental and emotional connection with research. Jorgensen and Duncan (2015a) described RI as the magnitude and quality of relationship with research; the allocation of research within a broader professional identity; and a developmental process that occurs in stages. Scholars have focused on qualitatively exploring the construct of RI, which may give guidance around how to facilitate and examine RI at the program level (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004). Also, the 2016 CACREP standards include language (e.g., knowledge of evidence-based practices, analysis and use of data in counseling) that favors curriculum that would promote RI. Although previous researchers have given the field prior knowledge of RI (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004), there has been no focus on further exploring RI in a quantitative way and in the context of being a possible measure of student learning. The first author developed the RIS with the aim of assessing RI through a quantitative lens and augmenting traditional learning outcomes measures such as grades, grade point averages, and standardized test scores. There were three purposes for the current study: (a) to develop the RIS; (b) to examine the psychometric properties of the RIS from a classical testing approach; and (c) to refine the items through an analysis based on item response theory (Nunnally & Bernstein, 1994). Two research questions guided this study: (a) What are the psychometric properties of the RIS from a classical testing approach? and (b) What items remain after the application of an item response analysis?
Method

Participants

The participants consisted of a convenience sample of 170 undergraduate college students at a Pacific Northwest university. Sampling undergraduate students is a common practice when initially testing scale psychometric properties and employing item response analysis (Embretson & Reise, 2000; Heppner, Wampold, Owen, Thompson, & Wang, 2016). The mean age of the sample was 23.1 years (SD = 6.16), with 49 males (29%), 118 females (69%), and 3 (2%) who did not report gender. The racial identity composition of the participants was mostly homogeneous: 112 identified as White (not Hispanic); one identified as American Indian or Alaska Native; 10 identified as Asian; three identified as Black or African American; eight identified as multiracial; 21 identified as Hispanic; three identified as “other”; and seven preferred not to answer.
Instruments

There were three instruments used in this study: a demographic questionnaire, the RSES, and the RIS.
Demographics questionnaire. Participants were asked to complete a demographic sheet that included five questions about age, gender, major, race, and current level of education; these identifiers did not pose risk to confidentiality of the participants. All information was stored on the Qualtrics database, which was password protected and only accessible by the primary investigator.
The RSES. The RSES was developed by Holden et al. (1999) to measure the effectiveness of research education in social work training programs. The RSES has nine items that assess respondents’ level of confidence with various research activities. The items are answered on a 0–100 scale, with 0 indicating cannot do at all, 50 indicating moderately certain I can do, and 100 indicating certainly can do. The internal consistency of the scale is .94 at both pre- and post-measures. Holden and colleagues reported using an effect size estimate to assess construct validity but did not report the estimates, so caution is warranted when assuming this form of validity.
RIS. The initial phase of this research involved the first author developing the 68 items on the RIS (contact first author for access) based on data from her qualitative work about research identity (Jorgensen & Duncan, 2015a). The themes from her qualitative research informed the development of items on the scale (Jorgensen & Duncan, 2015a). Rowan and Wulff (2007) have suggested that using qualitative methods to inform scale development is appropriate, sufficient, and promotes high quality instrument construction.
The first step in developing the RIS items involved the first author analyzing the themes that surfaced during interviews with participants in her qualitative work. This process helped inform the items that could be used to quantitatively measure RI. For example, one theme was Internal Facilitators. Jorgensen and Duncan (2015a) reported that “participants explained the code of internal facilitators as self-motivation, time management, research self-efficacy, innate traits and thinking styles, interest, curiosity, enjoyment in the research process, willingness to take risks, being open-minded, and future goals” (p. 24). Examples of scale items operationalized from the theme Internal Facilitators include: 1) I am internally motivated to be involved with research on some level; 2) I am willing to take risks around research; 3) Research will help me meet future goals; and 4) I am a reflective thinker. The first author used the same process when operationalizing each of the qualitative themes into items on the RIS. There were eight themes of RI development (Jorgensen & Duncan, 2015a). Overall, the number of items per theme was proportionate to the strength of each theme, as determined by how often it was coded in the qualitative data. After the scale was developed, the second author reviewed the scale items and cross-checked them against the themes and subthemes from the qualitative studies to evaluate face validity (Nunnally & Bernstein, 1994).
The items on the RIS are short with easily understandable terms in order to avoid misunderstanding and reduce perceived cost of responding (Dillman, Smyth, & Christian, 2009). According to the Flesch Reading Ease calculator, the reading level of the scale is 7th grade (Readability Test Tool, n.d.). The format of answers to each item is forced choice. According to Dillman et al. (2009), a forced-choice format “lets the respondent focus memory and cognitive processing efforts on one option at a time” (p. 130). Individuals completing the scale are asked to read each question or phrase and respond either yes or no. To score the scale, a yes would be scored as one and a no would be scored as zero. Eighteen items are reverse-scored (item numbers 11, 23, 28, 32, 39, 41, 42, 43, 45, 48, 51, 53, 54, 58, 59, 60, 61, 62), meaning that with those 18 questions an answer of no would be scored as a one and an answer of yes would be scored as a zero. Using a classical scoring method (Heppner et al., 2016), scores for the RIS are determined by adding up the number of positive responses. Higher scores indicate a stronger RI overall.
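The classical scoring rule described above can be expressed in a few lines. The function name and data layout below are illustrative only; they are not part of the RIS itself.

```python
# Reverse-scored item numbers on the 68-item RIS, as listed above
REVERSE_ITEMS = {11, 23, 28, 32, 39, 41, 42, 43, 45, 48, 51, 53, 54, 58, 59, 60, 61, 62}

def score_ris(responses, reverse_items=REVERSE_ITEMS):
    """Total a set of yes/no responses under the classical scoring rule.

    responses: dict mapping item number -> "yes" or "no".
    A "yes" earns 1 point, except on reverse-scored items,
    where a "no" earns the point. Higher totals indicate a stronger RI.
    """
    total = 0
    for item, answer in responses.items():
        endorsed = (answer == "yes")
        if item in reverse_items:
            endorsed = not endorsed  # reverse-scored: "no" is the keyed response
        total += int(endorsed)
    return total
```

For example, a respondent answering yes to item 1 and no to reverse-scored item 11 would earn a point for each, for a total of 2.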
Procedures

Upon Institutional Review Board approval, the study instruments were uploaded onto the primary investigator’s Qualtrics account. At that time, information about the study was uploaded onto the university psychology department’s human subject research system (SONA Systems). Once registered on the SONA system, participants were linked to the instruments used for this study through Qualtrics. All participants were asked to read an informational page that briefly described the nature and purpose of the study, and were told that by continuing they were agreeing to participate in the study and could discontinue at any time. Participants consented by selecting “continue” and completed the questionnaire and instruments. After completion, participants were directed to a post-study information page on which they were thanked and provided contact information about the study and the opportunity to schedule a meeting to discuss research findings at the conclusion of the study. No identifying information was gathered from participants. All information was stored on the Qualtrics database.
Data Analysis

All analyses were conducted in SAS 9.4 (SAS Institute, 2012). The researchers first used classical methods (e.g., KR20 and principal factor analysis) to examine the psychometric properties of the RIS. Based on the results of the factor analysis, the researchers used results from a one-parameter Rasch analysis to reduce the number of items on the RIS.
Results

Homogeneity was explored by computing Kuder-Richardson 20 (KR20) coefficients. Across all 68 items, internal consistency was strong (.92). Convergent validity (a form of construct validity) was examined by correlating the RIS with the RSES. The overall correlation between the RIS and the RSES was .66 (p < .001).
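KR20 is the dichotomous-item analogue of Cronbach’s alpha. A minimal sketch of the computation, assuming a persons-by-items matrix of 0/1 scores (the variable names here are illustrative):

```python
def kr20(item_matrix):
    """Kuder-Richardson 20 for a persons x items matrix of 0/1 scores.

    KR20 = (k / (k - 1)) * (1 - sum(p_j * q_j) / var(total scores)),
    where p_j is the proportion endorsing item j and q_j = 1 - p_j.
    """
    n = len(item_matrix)      # number of respondents
    k = len(item_matrix[0])   # number of items
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n  # proportion scoring 1 on item j
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)
```

Because the RIS items are scored dichotomously, KR20 rather than the general alpha formula is the appropriate internal consistency index here; the two coincide for 0/1 data.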
Item Response Analysis
Item response theory brought about a new perspective on scale development (Embretson & Reise, 2000) in that it promoted scale refinement even at the initial stages of testing. Item response theory allows for shorter tests that can actually be more reliable when items are well-composed (Embretson & Reise, 2000). The RIS initially included 68 items. Through Rasch analyses, the scale was reduced to 21 items (items numbered 3, 4, 9, 10, 12, 13, 16, 18, 19, 24, 26, 34, 39, 41, 42, 43, 44, 46, 47, 49, 61).
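Under the one-parameter (Rasch) model used here, the probability that a respondent endorses an item depends only on the gap between the respondent’s trait level (theta) and the item’s location. A minimal sketch:

```python
import math

def rasch_prob(theta, b):
    """One-parameter (Rasch) model: probability of endorsing an item.

    theta: respondent's trait level; b: item location (difficulty).
    When theta == b, the endorsement probability is exactly 0.5.
    """
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

Items whose location estimates spread across the theta continuum provide information across a wide range of trait levels, which is why dispersion guided the item retention described below.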
The final 21 items were selected for their dispersion across the theta continuum in order to capture the construct across a wide range of trait levels. The polychoric correlation matrix for the 21 items was then subjected to a principal components analysis, yielding an initial eigenvalue of 11.72. The next eigenvalue was 1.97, marking a clear elbow in the scree plot. Further, Cronbach’s alpha for these 21 items was .90. Taken together, these results suggest that the 21-item RIS measures a single factor.
This conclusion was further tested by fitting the items to a two-parameter model in which all item slopes were constrained to a common value (1.95; AIC = 3183.1), consistent with a Rasch-type structure. Item location estimates are presented in Table 1. Bayesian a posteriori score estimates also were computed and correlated strongly with classical scores (i.e., tallies of the number of positive responses; r = .95, p < .0001).
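The constrained two-parameter model differs from the Rasch form only in that every item shares a common slope (discrimination); the value 1.95 below comes from the estimate reported above, while the function name is illustrative:

```python
import math

def two_pl_prob(theta, b, a=1.95):
    """Two-parameter logistic model with a common slope across items.

    a: shared discrimination (1.95, per the reported estimate);
    b: item location; theta: respondent's trait level.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

With a single shared slope, this model preserves the Rasch property that total score is a sufficient statistic for theta, which is consistent with the strong correlation reported between Bayesian and classical scores.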
Discussion

This scale represents a move from subjective to a more objective assessment of RI. In the future, the scale may be used with other student and non-student populations to better establish its psychometric properties, generalizability, and refinement. Although this study sampled undergraduate students, this scale may be well-suited to use with counseling graduate students and practitioners because items were developed based on a qualitative study with master’s-level counseling students and practicing counselors (Jorgensen & Duncan, 2015a).
Additionally, this scale offers another method for assessing student learning and changes that take place for both students and professionals. As indicated by Holden et al. (1999), it is important to assess learning in multiple ways. Traditional methods may have focused on measuring outcomes that reflect a performance-based, rather than a mastery-based, learning orientation. Performance-based learning has been defined as wanting to learn in order to receive external validation such as a grade (Bruning, Schraw, Norby, & Ronning, 2004). Mastery learning has been defined as wanting to learn for personal benefit and with the goal of applying information to reach a more developed personal and professional identity (Bruning et al., 2004).
Based on what is known about mastery learning (Bruning et al., 2004), students with this type of learning orientation experience identity changes that may be best captured through assessing changes in thoughts, attitudes, and beliefs. The RIS was designed to measure constructs that capture internal changes that may be reflective of a mastery learning orientation. A learner who is performance-oriented may earn an A in a research course but show a lower score on the RIS. The opposite also may be true in that a learner may earn a C in a research course but show higher scores on the RIS. Through the process of combining traditional assessment methods such as grades with the RIS, programs may get a more comprehensive understanding of the effectiveness and impact of their research-related curriculum.
Table 1. Item Location Estimates
Limitations and Areas for Future Research
The sample size and composition were sufficient for the purposes of initial development, classical testing, and item response analysis (Heppner et al., 2016); however, the authors still suggest caution when applying the results of this study to other populations. Participants’ endorsements may not reflect those of populations in other areas of the country or at different academic levels. Future research should sample other student and professional groups; this would help to further establish the psychometric properties, confirm the item response analysis conclusions, and make the RIS more appropriate for use in other fields. Additionally, future research may examine how scores on the RIS correlate with traditional measures of learning (e.g., grades in individual research courses, collapsed grades across all research courses, the research portion of counselor licensure exams).
As counselors-in-training and professional counselors are increasingly required to demonstrate that they use evidence-based practices and measure the effectiveness of their services, they may benefit from assessments of their RI (American Counseling Association, 2014; Gladding & Newsome, 2010). CACREP (2016) has responded to increased accountability by enhancing its research and evaluation standards for both master’s- and doctoral-level counseling students. The American Counseling Association is further supporting discussions about RI by publishing a recent blog post titled “Research Identity Crisis” (Hennigan Paone, 2017). In the post, Hennigan Paone described a hope for master’s-level clinicians to start acknowledging and appreciating that research helps them work with clients in ways that are informed by “science rather than intuition” (para. 5). As the call for counselors to become more connected to research grows stronger, it seems imperative that counseling programs assess their effectiveness in bridging the gap between research and practice. The RIS provides counseling programs an option to do exactly that by evaluating the way students are learning and growing in relation to research. Further, the use of this type of outcome measure could model good practice at the program level, encouraging counselors-in-training to develop the curiosity and motivation to infuse research practices (e.g., needs assessments, outcome measures, data analysis) into their clinical work.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References

American Counseling Association. (2014). 2014 ACA code of ethics. Alexandria, VA: Author.
Bruning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction (4th ed.). Upper Saddle River, NY: Pearson Merrill/Prentice Hall.
Council for Accreditation of Counseling & Related Educational Programs. (2016). 2016 CACREP standards. Retrieved from http://www.cacrep.org/wp-content/uploads/2017/07/2016-Standards-with-Glossary-7.2017.pdf
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum.
Gladding, S. T., & Newsome, D. W. (2010). Clinical mental health counseling in community and agency settings (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Hennigan Paone, C. (2017, December 15). Research identity crisis? [Blog post]. Retrieved from https://www.counseling.org/news/aca-blogs/aca-member-blogs/aca-member-blogs/2017/12/15/research-identity-crisis
Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., & Wang, K. T. (2016). Research design in counseling (4th ed.). Boston, MA: Cengage Learning.
Hernon, P., & Dugan, R. E. (2004). Four perspectives on assessment and evaluation. In P. Hernon & R. E. Dugan (Eds.), Outcome assessment in higher education: Views and perspectives (pp. 219–233). Westport, CT: Libraries Unlimited.
Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational outcomes assessment. Journal of Social Work Education, 35, 463–476.
Jorgensen, M. F., & Duncan, K. (2015a). A grounded theory of master’s-level counselor research identity. Counselor Education and Supervision, 54, 17–31. doi:10.1002/j.1556-6978.2015.00067
Jorgensen, M. F., & Duncan, K. (2015b). A phenomenological investigation of master’s-level counselor research identity development stages. The Professional Counselor, 5, 327–340. doi:10.15241/mfj.5.3.327
Kahn, J. H., & Scott, N. A. (1997). Predictors of research productivity and science-related career goals among counseling psychology doctoral students. The Counseling Psychologist, 25, 38–67. doi:10.1177/0011000097251005
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Phillips, J. C., & Russell, R. K. (1994). Research self-efficacy, the research training environment, and research productivity among graduate students in counseling psychology. The Counseling Psychologist, 22, 628–641. doi:10.1177/0011000094224008
Readability Test Tool. (n.d.). Retrieved from https://www.webpagefx.com/tools/read-able/
Reisetter, M., Korcuska, J. S., Yexley, M., Bonds, D., Nikels, H., & McHenry, W. (2004). Counselor educators and qualitative research: Affirming a research identity. Counselor Education and Supervision, 44, 2–16. doi:10.1002/j.1556-6978.2004.tb01856.x
Rowan, N., & Wulff, D. (2007). Using qualitative methods to inform scale development. The Qualitative Report, 12, 450–466.
SAS Institute. (2012). SAS (Version 9.4) [Statistical software]. Retrieved from https://www.sas.com/en_us/home.html
Unrau, Y. A., & Beck, A. R. (2004). Increasing research self-efficacy among students in professional academic programs. Innovative Higher Education, 28(3), 187–204.
Unrau, Y. A., & Grinnell, R. M., Jr. (2005). The impact of social work research courses on research self-efficacy for social work students. Social Work Education, 24, 639–651. doi:10.1080/02615470500185069
Warden, S., & Benshoff, J. M. (2012). Testing the engagement theory of program quality in CACREP-accredited counselor education programs. Counselor Education and Supervision, 51, 127–140.
Maribeth F. Jorgensen, NCC, is an assistant professor at the University of South Dakota. William E. Schweinle is an associate professor at the University of South Dakota. Correspondence can be addressed to Maribeth Jorgensen, 414 East Clark Street, Vermillion, SD 57069, firstname.lastname@example.org.
Eleni M. Honderich, Jessica Lloyd-Hazlett
A purposeful sample of 359 graduate counseling students completed a survey assessing factors influencing program enrollment decisions, with particular attention to students’ awareness of and importance ascribed to accreditation from the Council for Accreditation of Counseling and Related Educational Programs (CACREP) prior to and following enrollment. Results indicated that for half of the students, accreditation was the second most influential factor in their enrollment decisions; nearly half of participants were unaware of CACREP accreditation prior to enrollment. Accreditation was a top factor that students attending non-CACREP-accredited programs wished they had considered more in their enrollment decisions. Findings from the survey indicate that prospective counseling students often lack necessary information regarding accreditation that may influence enrollment decisions. Implications for counseling students and their graduate preparation programs, CACREP, and the broader counseling profession are discussed.
Keywords: CACREP, accreditation, counseling students, enrollment decisions, graduate preparation programs
The Council for Accreditation of Counseling and Related Educational Programs (CACREP) provides specialized accreditation for counselor education programs. Within higher education, accreditation is a “quality assurance and enhancement mechanism” premised on self-regulation through intensive self-study and external program review (Urofsky, 2013, p. 6). Accreditation has been reported to be particularly relevant to prospective counseling students, given increases in both the number of programs seeking CACREP accreditation (Ritchie & Bobby, 2011) and implications of program accreditation status for students’ postgraduation opportunities. Research to date has not surveyed counseling students about their knowledge of CACREP accreditation prior to or following enrollment in graduate-level counseling programs.
Graduate Program Enrollment Decisions
For prospective counseling students, selecting an appropriate counselor preparation program for graduate-level study is an exceedingly complex task. Prospective students must choose from a myriad of options across mental health fields, areas of specialization and program delivery formats (i.e., traditional, virtual and hybrid classrooms). Those prospective students who are unfamiliar with CACREP accreditation and potential implications of program accreditation status for postgraduation opportunities may not sufficiently consider accreditation a relevant criterion during selection of a graduate-level counselor education program.
To date, the majority of higher education enrollment research has focused on undergraduate students. Hossler and Gallagher (1987) outlined a three-stage college selection model that integrates econometric, sociologic and information-processing concerns of prospective enrollees. The first stage, predisposition, culminates with a decision to attend college or not. Past student achievement, ability and level of educational aspiration, along with parental income, education and encouragement, are important influences at this stage. The second stage, search, includes gathering information about prospective institutions, submitting applications and receiving admission decision(s). Finally, choice describes the selection of a college or university. Factors influencing enrollment decisions include a variety of personal and institutional characteristics including socioeconomic status, financial costs and aid, academic qualities, location, and recruitment correspondence (Hossler & Gallagher, 1987).
Academic reputation, job prospects for graduates, campus visits, campus size and financial aid offerings have been identified as critical factors influencing undergraduate student enrollment decisions (Hilston, 2006). Research also has underscored the weight of parental opinions in shaping undergraduate student enrollment decisions. More limited research has examined factors influencing graduate student enrollment decisions; such research appears necessary given differences in the contexts of individuals making undergraduate versus graduate-level enrollment decisions.
Within a non-field-specific survey of 2,834 admitted graduate students, Kallio (1995) found the following factors to be most influential in participants’ program selection and enrollment decisions: (a) residency status, (b) quality and other academic environment characteristics, (c) work-related concerns, (d) spouse considerations, (e) financial aid, and (f) campus social environment. A more recent examination of doctoral-level students within higher education administration programs (Poock & Love, 2001) indicated similar influential factors with location, flexibility of accommodations for work–school–life balance, reputation and friendliness of faculty of highest importance. Flexibility of program requirements and delivery format also were indicated. Ivy and Naude (2004) surveyed 507 MBA students and identified a seven-factor model of variables influencing graduate student enrollment decisions. The seven factors were the following: program, prominence, price, prospectus, people, promotion and premium. Students indicated elements of the program, including range of electives and choice of majors; prominence, including staff reputation and program ratings; and price, including tuition fees and payment flexibility, as the most salient factors.
Accreditation and Graduate Program Enrollment Decisions
In a review of the status of accreditation within higher education, Bardo (2009) delineated major trends with implications for both current and prospective students. First, across higher education fields, there is heightened emphasis on accountability through documented student learning outcomes that transcend individual course grades. Second, there are calls for greater transparency around accreditation procedures and statuses. Parallel attention also is given to ethical obligations of institutions and accrediting bodies to provide clearer information to students, not only about the requirements of enrollment in accredited institutions, but also about the significance of accreditation to postgraduation outcomes (Bardo, 2009).
Accreditation is a critical institutional factor that appears to have both a direct and an indirect impact on graduate program enrollment decisions. Most directly, accreditation may be a specific selection criterion used by prospective students when exploring programs for application or when making an enrollment decision among multiple offers. Indirectly, the accreditation status of an institution likely influences each of the seven p’s identified by Ivy and Naude (2004) as informing graduate student enrollment decisions. For example, accreditation may dictate minimum credit requirements, required coursework, program delivery methods and acceptable faculty-to-student ratios. Thus, the need emerges to examine factors informing counseling students’ decisions regarding enrollment in graduate-level programs, with specific attention to students’ levels of awareness and importance ascribed to CACREP accreditation. To contextualize the current study, a brief history of CACREP and perceived benefits and challenges of accreditation are provided.
CACREP held its first board meeting in 1981 and was founded in part as a response to the accreditation standards developed by bodies serving other helping professions, such as the American Psychological Association, the National Council for Accreditation of Teacher Education and the Council on Rehabilitation Education. In its history of over 30 years, a primary goal of CACREP has been to assist in the development and growth of the counseling profession by promoting and administering a quality assurance process for graduate programs in the field of counseling (Urofsky, Bobby, & Ritchie, 2013). Currently, just over 63% of programs falling under CACREP’s jurisdiction hold this accreditation; specifically, by the end of 2013, CACREP had accredited 634 programs at 279 institutions within the United States (CACREP, 2014). In the 2012–2013 school year alone, CACREP-accredited programs enrolled 39,502 students and graduated 11,099 students (CACREP, 2014).
As described by Urofsky and colleagues (2013), some revisions to the CACREP standards represent intentional efforts toward growth, self-sufficiency and effectiveness. Such modifications reflected in the 2009 CACREP standards include greater emphases on unified counselor professional identity through specifications for core faculty members and increased focus on documented student learning outcomes in response to larger trends of accountability in higher education. In contrast to these CACREP-directed modifications, Urofsky and colleagues (2013) highlighted that some historical revisions to CACREP standards have been influenced by the larger context of the counseling field. Pertinent contextual issues include licensure portability and recognition from larger federal agencies, including the U.S. Department of Veterans Affairs, Department of Defense and TRICARE, a government-funded health care program for military personnel. Following the passage of House Bill 232 (License as a Professional Counselor, 2014), Ohio became the first state to require graduation from a CACREP-accredited program (clinical mental health, rehabilitation or addictions counseling) for licensure beginning in 2018. More than 50% of states accept graduation from a CACREP-accredited program as one path for meeting licensure educational requirements (CACREP, 2013). Further, while not directly advocated for by CACREP, graduation from a CACREP-accredited program is required for counselors seeking employment consideration in the Department of Veterans Affairs and the Department of Defense, and for TRICARE reimbursement (TRICARE, 2014).
Perceived Benefits of CACREP Accreditation
Specific benefits of CACREP accreditation have been identified in the literature at both the individual student and institutional levels, which may inform prospective students’ decisions regarding enrollment in graduate-level counseling programs. Perceived benefits of CACREP accreditation identified by entry-level counseling students include increased internship and job opportunities, improved student quality, increased faculty professional involvement and publishing, and increased acceptance into doctoral-level programs in counselor education and supervision (Mascari & Webber, 2013). Doctoral students are assured training that will qualify them to serve as identified core faculty members in CACREP-accredited counseling programs (CACREP, 2009).
Counseling students’ graduate program enrollment decisions also might be influenced by differential benefits afforded to graduates of CACREP-accredited programs who are pursuing professional licensure. Though licensure requirements vary from state to state, a growing number of states place heavier emphasis on the applicant’s receipt of a counseling degree from an accredited program (CACREP, 2013). Some states associate “graduation from a CACREP-accredited program as evidence of meeting most or all of the educational requirements for licensure eligibility” (Ritchie & Bobby, 2011, p. 52). Licensure applicants graduating from non-CACREP-accredited programs may need to provide supplemental documentation to substantiate their training program’s adherence to licensing criteria. In some instances, applicants graduating from non-CACREP-accredited programs may need additional coursework to meet criteria for licensure, incurring additional costs and delaying the application process.
Graduate programs’ CACREP accreditation status might impact counseling students’ enrollment decisions relative to postgraduation insurance reimbursement and qualification for certain job placements (TRICARE, 2014). Specifically, following intensive professional advocacy initiatives, TRICARE began recognizing and reimbursing counseling professionals as mental health service providers without the need for physician referral. However, as of now, counselors graduating from non-CACREP-accredited training programs after January 1, 2015 will be unable to receive approval to practice independently within the TRICARE system. Considering the estimated 9.5 million people insured by TRICARE (TRICARE, 2014), this contingency may present serious implications for counseling professionals who have graduated or will graduate from non-CACREP-accredited training programs. Johnson, Epp, Culp, Williams, and McAllister (2013) noted that thousands of both currently licensed mental health professionals and counseling students will be affected as they “cannot and will not ever be able to join the TRICARE network” (p. 64).
Existing literature also highlights benefits of CACREP accreditation at the program and institutional levels, which may impact counseling students’ graduate program enrollment decisions. Achievement and maintenance of CACREP accreditation entails exhaustive processes of self-study and external peer review. Self- and peer-review processes contribute to shared quality standards among accredited counselor preparation programs and demonstrated student learning outcomes based on standards established by the profession itself (Mascari & Webber, 2013). Faculty members employed by CACREP-accredited counselor education programs also appear to differentially interface with the counseling profession. Specifically, a statistically significant relationship has been found between CACREP accreditation and professionalism for school counselor educators, as reflected by contributions to the profession (i.e., journal publications and conference presentations), leadership in professional organizations and pursuit of counseling credentials (Milsom & Akos, 2005).
Perceived Challenges of CACREP Accreditation
In addition to highlighting potential benefits of CACREP accreditation, extant literature delineates potential challenges associated with CACREP accreditation, which may directly or indirectly impact counseling students’ graduate program enrollment decisions. Primary among identified challenges are time and financial resources related to the attainment and maintenance of CACREP accreditation (Paradise et al., 2011). Financial requirements associated with CACREP accreditation include application expenses and annual fees, the costs of hiring faculty to meet core faculty requirements and student-to-faculty ratios, and labor costs associated with compiling self-studies.
Considering that the 2009 CACREP standards identify 165 core standards and approximately 60 standards per specialty area (Urofsky, 2013), attaining accreditation can be a cumbersome process. Curricular attention given to each standard can vary widely across programs. In response to significant and longstanding calls for increased accountability in higher education, CACREP-accredited programs are required to identify and provide evidence of student learning outcomes (Barrio Minton & Gibson, 2012). To address this requirement, it may be necessary for some programs to reorganize curricular elements, as well as to integrate assessment software and procedures to support this data collection within their programs.
An additional challenge of CACREP accreditation surrounds perceived limitations placed on program flexibility and innovation. Paradise and colleagues (2011) found that of the counseling program coordinators they interviewed (N = 135), 49% believed that the 2009 CACREP standards “would require all programs to be ‘essentially the same’” (p. 50). Among changes ushered in by the 2009 CACREP standards, education and training requirements of core faculty and the designated student-to-faculty ratios have received critical attention (Paradise et al., 2011). Clinical experience beyond the requirements of graduate-level internship is not specifically considered within requisites for identified core faculty members (CACREP, 2009, I.W.). While adopted largely to foster the internalization of a clear counselor professional identity among counselors-in-training (Davis & Gressard, 2011), these standard requirements may influence program hiring decisions and curriculum content and sequencing (CACREP, 2009; Paradise et al., 2011).
Over CACREP’s history of more than 30 years, the landscape of the accrediting body, as well as the larger counseling profession it serves, has dramatically shifted. Bobby (2013) called for greater research examining the effects of CACREP accreditation on programs and student knowledge, skill development and graduate performance. A specific gap exists in the literature related to factors influencing counseling students’ graduate program enrollment decisions, including the potential relevance of students’ knowledge of CACREP prior to and following enrollment. Research in this area not only would illuminate counseling students’ propensities for making informed choices as consumers of higher education, but might also reveal critical implications for and ethical obligations of students, programs and CACREP itself within contemporary and complex accreditation climates. Consequently, the current study examined the following research questions: (a) What factors influence students’ decisions regarding enrollment in graduate-level counseling programs? (b) How aware are students of CACREP accreditation prior to and following program enrollment? (c) How important is CACREP accreditation to students prior to and following program enrollment? (d) Is there a difference in CACREP accreditation awareness between students in CACREP- and non-CACREP-accredited programs prior to program enrollment? (e) Does students’ awareness of CACREP accreditation increase after program enrollment?
In total, 40 graduate-level counseling programs were contacted to participate in this study. A purposeful sample was chosen, seeking participation from four CACREP-accredited and four non-CACREP-accredited programs from each of the five geographic regions within the United States (i.e., Western, Southern, North Atlantic, North Central, Rocky Mountain). For each geographic region, CACREP-accredited and non-CACREP-accredited programs were selected based on the criteria of student body size and status as a public versus private institution. Specifically, within each of the five geographic regions, four institutions (one small [n < 10,000], one large [n > 10,000], one private, one public) were purposefully selected for each accreditation status (CACREP, non-CACREP). Selection criteria did not include cognate focus; however, participants included students within clinical mental health; school; marriage, couple and family; counselor education and supervision; and addictions counseling programs.
A request for participation was made to the counseling department chairs of the 40 purposefully selected programs via e-mail. In total, representatives from 25 of the 40 contacted programs (62.5%) agreed that their programs would participate in this study. The participation rate of CACREP-accredited programs was higher than that of non-CACREP-accredited programs; participating programs included 15 of the 20 contacted CACREP-accredited programs (75%) and 10 of the 20 contacted non-CACREP-accredited programs (50%). At the institutional level, counseling program participation across the five regions was representative of national program distribution. Following attainment of consent from the counseling department chairs, an electronic survey was provided to each of the 25 participating programs for direct dissemination to students meeting the selection criteria.
A total of 359 master’s and doctoral students currently enrolled in counseling programs nationwide responded to the survey. The exact response rate at the individual student level is unknown, as the number of students receiving the survey at each participating institution was not collected. Of the 359 participants surveyed, 22 surveys were deemed unusable (e.g., sampling parameter not met, blank survey response) and were not included in analyses. Of the remaining 337 participants, missing data were addressed by providing sample sizes contingent on the specific research question.
Participants’ ages (n = 332) ranged from 20–63, with a median age of 28. Gender within the sample (n = 335) consisted of 14.3% male, 85.1% female and 0.3% transgender; the remaining 0.3% of participants preferred not to answer. In regard to race/ethnicity (n = 334), 84.1% of the sample identified as Caucasian, 7.2% as African-American, 2.7% as Latino/a, 1.8% as Asian, 1.5% as biracial, 0.3% as Pacific Islander and 0.3% as Hawaiian; the remaining 2.1% preferred not to answer. The reported educational levels (n = 331) included 90.4% of participants in a master’s program and 9% in a doctoral program; the remaining 0.9% of participants were postdoctoral and postgraduate students taking additional coursework. Participants reported enrollment in the following cognate areas (n = 331): mental health and community counseling (48.8%), school counseling (27.7%), marriage and family counseling (5.4%), counselor education and supervision (5.1%), other (4.0%), rehabilitation counseling (3.0%), addictions counseling (2.1%), multitrack (1.8%), assessment (1.2%), and career counseling (0.9%).
In order to obtain program demographic information based on the aforementioned purposeful sampling design, participants were asked to identify the university attended. However, as 15.5% of participants provided an unusable response (e.g., preferred not to answer), self-reported program descriptive demographic data were analyzed instead. Participants classified their institution as public or private (n = 332) as follows: 68.7% reported attending a public university and 31.3% a private university. Student population of the university also was self-reported (n = 326) as follows: 38.7% of the participants attended universities with a student population of fewer than 10,000, 23.3% with a student population of 10,000–15,000 and 38% with a student population of over 15,000. The program accreditation status per participants’ self-report (n = 307) indicated that 56.7% were enrolled in CACREP-accredited programs, 34.9% were enrolled in non-CACREP-accredited programs and 8.5% were uncertain about program accreditation status.
The researchers used Qualtrics to house and distribute the electronic survey. Survey items included participant and counseling program demographics, factors influencing decisions on enrollment in graduate-level counseling programs, awareness of CACREP accreditation prior to and following enrollment, and importance ascribed to CACREP accreditation prior to and following enrollment. Relative to factors influencing decisions on enrollment in graduate-level counseling programs, participants first were asked to list the top three factors influencing their enrollment decision. Participants then were asked to select the most important factor among their top three. Additionally, participants responded to the following question: “When choosing your graduate program, is there a factor you now wish had been more influential in your decision?” Questions pertaining to participants’ awareness of and ascribed importance to CACREP accreditation included the following: (a) “When first applying to graduate school, how familiar were you with CACREP accreditation?” (b) “When first applying to graduate school, how important was CACREP accreditation for you?” (c) “Currently, how familiar are you with CACREP accreditation?” (d) “Currently, how important is CACREP accreditation for you?” Participants used a four-point Likert scale for their responses, which ranged from “very familiar/very important” to “not familiar/not important.” The category of “I was/am not aware of accreditation” also was provided where appropriate.
Research question one examined the top factors participants considered and wished they had considered more when making a counseling program enrollment decision (n = 328). As shown in Table 1, results indicated the following rank order for the top 10 factors that influenced participants’ enrollment decisions: (a) location at 33.6%, (b) program accreditation at 14.0%, (c) funding/scholarships at 12.2%, (d) program prestige at 8.6%, (e) faculty at 7.7%, (f) program/course philosophy at 4.2%, (g) program acceptance at 3.9%, (h) faith at 3.9%, (i) schedule/flexibility at 3.6% and (j) research interests at 2.4%. The top 10 factors that participants wished they had considered more when making their enrollment decisions included the following: (a) “none” at 42.3%, (b) funding/scholarships at 15.2%, (c) program accreditation at 12.8%, (d) faculty at 6.8%, (e) research interests at 5.1%, (f) program prestige at 4.5%, (g) networking opportunities at 3.6%, (h) location at 2.4%, (i) schedule/flexibility at 1.5% and (j) personal career goals at 1.2%. Further analysis indicated the following three factors that participants at non-CACREP-accredited programs (n = 106) wished they had considered more when making an enrollment decision: (a) program accreditation at 31.8%, (b) “none” at 30.8% and (c) funding/scholarships at 9.3%.
Table 1
Counseling Students’ Enrollment Decision Factors

Factors Participants Considered (ranked, % of n):
1. Location (33.6)
2. Program accreditation (14.0)
3. Funding/scholarships (12.2)
4. Program prestige (8.6)
5. Faculty (7.7)
6. Program/course philosophy (4.2)
7. Program acceptance (3.9)
8. Faith (3.9)
9. Schedule/flexibility (3.6)
10. Research interests (2.4)

Factors Participants Wished They Had Considered More (ranked, % of n):
1. None (42.3)
2. Funding/scholarships (15.2)
3. Program accreditation (12.8)
4. Faculty (6.8)
5. Research interests (5.1)
6. Program prestige (4.5)
7. Networking opportunities (3.6)
8. Location (2.4)
9. Schedule/flexibility (1.5)
10. Personal career goals (1.2)

Note. n = 328.
Research question two explored participants’ awareness of CACREP accreditation prior to (n = 308) and following enrollment (n = 309) in graduate-level counseling programs. Before enrollment, only one quarter (24.7%) of the sample indicated being “familiar” (n = 49) or “very familiar” (n = 27) with CACREP accreditation. The remaining 75.3% of the sample reported less awareness of CACREP accreditation prior to enrollment, with these participants reporting only being “somewhat familiar” (n = 93) or “not familiar” (n = 139) with CACREP accreditation. In contrast, following enrollment in graduate-level counseling programs, nearly three quarters (73.1%) of the sample noted either being “familiar” (n = 124) or “very familiar” (n = 102) with CACREP accreditation. The remaining 26.9% of participants reported being “somewhat familiar” (n = 66) or “not familiar” (n = 17). Overall, the percentage of all students reporting that they were either “familiar” or “very familiar” with CACREP accreditation increased by 48.4 percentage points following enrollment in graduate-level counseling programs.
Consideration was given to potential differences in familiarity with CACREP accreditation among (a) doctoral- and master’s-level students and (b) students attending CACREP- and non-CACREP programs. For those students enrolled in a master’s-level program (n = 276), regardless of program accreditation status, 21% reported being either “familiar” or “very familiar” with CACREP accreditation pre-enrollment. For doctoral-level students (n = 27), 63% indicated familiarity with CACREP accreditation prior to enrolling in a graduate program. These results indicated that doctoral-level students appeared to show more awareness of CACREP accreditation pre-enrollment, as a 42-percentage-point difference in familiarity level existed. Post-enrollment, familiarity levels increased for both groups, as evidenced by 72.8% of master’s-level students (n = 201) and 81.5% of doctoral-level students (n = 22) reporting either being “familiar” or “very familiar” with CACREP accreditation. The difference between the two groups was now 8.7 percentage points, with doctoral students exhibiting more familiarity with CACREP post-enrollment.
Students’ familiarity with CACREP prior to and following enrollment also was compared between students in accredited (n = 173) and non-CACREP-accredited (n = 107) programs, as well as among students who reported being unsure of their program’s accreditation status (n = 26). Prior to enrollment, the following percentages of students reported being either “familiar” or “very familiar” with CACREP accreditation: 31.8% in CACREP-accredited programs, 18.7% in non-CACREP-accredited programs and 0.0% among those unaware of program accreditation status. Post-enrollment, 78.2% of students in a CACREP-accredited program, 77.4% of students in a non-CACREP-accredited program and 23.1% of those unaware of their program’s accreditation status reported being either “familiar” or “very familiar” with CACREP accreditation. Overall, the results indicated that higher percentages of CACREP familiarity existed both pre-enrollment and post-enrollment for students in CACREP-accredited programs when compared to students in non-CACREP programs or students who were unaware of their program’s accreditation status.
Research question three explored the level of importance participants placed on CACREP accreditation prior to (n = 309) and following enrollment (n = 308) in graduate-level counseling programs. Before enrollment, 39.5% of the sample noted that CACREP accreditation was either “important” (n = 50) or “very important” (n = 73). The remaining 60.5% of participants reported the following levels of importance ascribed to CACREP accreditation prior to enrollment: “somewhat important” (n = 51) or “not important” (n = 34), or indicated they were “not aware” (n = 102) of accreditation. After enrollment, participants’ levels of importance ascribed to CACREP accreditation increased, with 79.6% of the sample describing CACREP accreditation as “important” (n = 80) or “very important” (n = 165). Approximately one fifth (20.4%) of the sample reported low levels of importance ascribed to CACREP post-enrollment, rating CACREP accreditation as “somewhat important” (n = 33) or “not important” (n = 22), or indicated they were “not aware” (n = 8) of accreditation. From pre-enrollment to post-enrollment, the percentage of students identifying CACREP as “important” or “very important” increased by 40.1 percentage points.
Potential differences in the results as a function of program accreditation status also were examined. The following percentages of students believed CACREP accreditation was either “important” or “very important” prior to graduate school enrollment: 58% if the program was reported to be accredited (n = 101), 17.8% if not CACREP accredited (n = 19), and 3.8% if the participant was unsure of the program’s accreditation status (n = 1). Post-enrollment, ascribed levels of importance increased for all students regardless of program accreditation status, as follows: 89.7% of students in CACREP-accredited programs (n = 156), 72.6% of students in non-CACREP-accredited programs (n = 77) and 38.5% of students unaware of their program’s accreditation status (n = 10) indicated that CACREP accreditation was either “important” or “very important” to them.
Research question four explored potential differences in levels of awareness of CACREP accreditation prior to enrollment in graduate-level counseling programs between participants in CACREP-accredited programs, those in non-CACREP-accredited programs and those unaware of program accreditation status. Descriptive results indicated that a difference existed between CACREP accreditation awareness levels prior to enrollment contingent on self-reported program accreditation status; to determine whether a statistically significant difference existed, a one-way ANOVA was used. The omnibus F statistic was interpreted, which is robust even when sample sizes within the different levels are small or unequal (Norman, 2010). The results indicated that self-reported CACREP accreditation statuses (i.e., accredited, non-accredited, unaware of accreditation status) were found to have a significant effect on participants’ awareness of CACREP accreditation prior to enrollment into a graduate-level counseling program, F(2,303) = 15.378, MSE = 0.861, p < 0.001. Levene’s test was significant, indicating nonhomogeneity of variance. To account for the unequal variance, post hoc analyses using Tamhane’s T2 criterion for significance were run to determine between which accreditation levels the significant difference in the mean scores existed. The post hoc analyses indicated that prior to graduate school enrollment, participants who self-reported attendance in accredited programs were significantly more aware of CACREP accreditation (n = 173, M = 2.88, SD = 0.976) than the following: (a) participants who self-reported attending non-accredited programs (n = 107, M = 3.36, SD = 0.934; p < 0.001) and (b) participants who reported uncertainty of their program’s current accreditation status (n = 26, M = 3.77, SD = 0.430; p < 0.001).
Additionally, the analysis indicated that participants who self-reported enrollment in non-CACREP-accredited programs were significantly more aware of CACREP accreditation compared to participants who were uncertain of their program’s current accreditation status, p = 0.004. Overall, the results for research question four suggested the following information regarding awareness of CACREP accreditation prior to enrollment for all students: (a) those enrolled in CACREP-accredited programs indicated the most awareness, (b) those enrolled in non-CACREP-accredited programs exhibited the second most awareness and (c) those unaware of their program’s accreditation status reported the least awareness.
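The omnibus test described above can be illustrated with a minimal sketch. The code below computes a one-way ANOVA F statistic from scratch (between-group mean square divided by within-group mean square) using small, entirely hypothetical 4-point familiarity ratings for three groups; the group names and data are illustrative assumptions, not the study's data, and the sketch omits the Levene's test and Tamhane's T2 post hoc steps reported in the study.

```python
from statistics import mean

def one_way_anova_f(groups):
    """Omnibus F statistic for a one-way ANOVA:
    between-group mean square / within-group mean square."""
    k = len(groups)                        # number of groups
    n_total = sum(len(g) for g in groups)  # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Hypothetical 4-point familiarity ratings (1 = very familiar,
# 4 = not familiar) for three illustrative groups of students
cacrep     = [2, 3, 2, 3, 3, 2]
non_cacrep = [3, 3, 4, 3, 4, 3]
unsure     = [4, 4, 3, 4, 4, 4]

f_stat = one_way_anova_f([cacrep, non_cacrep, unsure])
print(round(f_stat, 2))  # prints 11.14, i.e., F(2, 15) for these data
```

In practice one would compare the F statistic against the F distribution with (k − 1, N − k) degrees of freedom (e.g., via a statistics package) to obtain the p-value the authors report.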
The omnibus F test for research question four was re-run for only students currently enrolled in a master’s-level program, to remove potential outlier effects produced by doctoral students’ prior knowledge base; descriptive statistics had indicated that doctoral-level students exhibited more awareness of CACREP accreditation prior to enrollment. When examining only master’s-level students (n = 274), the results indicated that self-reported CACREP accreditation statuses (i.e., accredited, non-accredited, unaware of accreditation status) were found to have a significant effect on these students’ awareness of CACREP accreditation prior to enrollment in a graduate-level counseling program, F(2,274) = 14.470, MSE = 0.724, p < 0.001. Tamhane’s T2 post hoc analyses suggested similar results for master’s-level students’ CACREP awareness contingent on the program’s accreditation status when compared to results found for all participants (i.e., both master’s- and doctoral-level students). For master’s-level students, the following results were found: (a) those enrolled in CACREP-accredited programs indicated the most awareness, (b) those enrolled in non-CACREP-accredited programs exhibited the second most awareness and (c) those unaware of their program’s accreditation status reported the least awareness.
Research question five assessed whether participants’ levels of CACREP accreditation awareness increased after enrollment in graduate-level counseling programs. Overall, the descriptive results indicated that participants’ awareness of CACREP accreditation increased after enrolling in a counseling program regardless of other factors (e.g., grade level, program accreditation status). The two-tailed dependent t test indicated that CACREP accreditation awareness significantly increased for all students after enrollment in a graduate-level counseling program (mean difference M = 1.130, SD = 1.046, t(306) = 18.934; p < .001), with the following mean familiarity scores reported (lower scores indicate greater familiarity): prior to enrollment (n = 307), M = 3.11, SD = 0.975, and following enrollment (n = 307), M = 1.98, SD = 0.869.
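The dependent t test used for this pre/post comparison reduces to a single formula: the mean of the paired differences divided by their standard error. The sketch below demonstrates it with small, entirely hypothetical familiarity ratings for the same eight students before and after enrollment; the data are illustrative assumptions, not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Dependent-samples t statistic: mean of paired differences
    divided by the standard error of those differences."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical 4-point familiarity ratings (1 = very familiar,
# 4 = not familiar) for eight students; lower post-enrollment
# scores indicate greater familiarity after enrollment
pre  = [4, 3, 4, 2, 3, 4, 3, 4]
post = [2, 2, 1, 1, 2, 2, 1, 2]

t_stat = paired_t(pre, post)
print(t_stat)  # prints 7.0, i.e., t(7) = 7.0 for these data
```

With n − 1 = 7 degrees of freedom, a t of this size corresponds to a very small two-tailed p-value, mirroring the direction of the study's result (positive differences mean post-enrollment scores moved toward "very familiar").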
The purpose of this research was to examine factors that influence students’ decisions regarding enrollment in graduate-level counseling programs, with specific attention to students’ knowledge of CACREP accreditation prior to and following enrollment. The findings of this study were congruent with previous research, indicating that counseling students deemed program location to be the most influential factor in their enrollment decision-making process (Poock & Love, 2001). A dearth of previous research existed on the role of program accreditation in enrollment decisions; the current study suggests that program accreditation status represents the second most influential factor, reported by 14% of the participants surveyed. Across the sample, program accreditation ranked third among factors participants wished they had considered more prior to making an enrollment decision. For participants attending non-CACREP-accredited programs, accreditation rose to the number one factor these students wished they had considered more (31.8%), closely followed by “none” (30.8%). Results of this study suggest that while CACREP accreditation is important to some students when choosing a program, ultimately, enrollment decisions are influenced by a number of factors whose weight varies from student to student.
A critical finding emerging from this research is that nearly half of participants (45.1%) were not familiar with CACREP accreditation prior to enrollment in a graduate-level counseling program. In contrast, only 8.8% of students reported being very familiar with CACREP accreditation prior to enrollment. These results support the assertion that counseling students may lack information necessary to make an informed program enrollment choice. Specifically, if prospective students are not aware of the existence of accrediting bodies or the potential implications of CACREP accreditation for postgraduation opportunities, they may omit accreditation as a decision-making criterion for enrollment. The ranking of CACREP accreditation as the first and third most-wished-for factor by students in non-CACREP-accredited programs and by the overall sample, respectively, appears to reflect this omission.
Relatedly, one third of participants reported being unaware of the importance of CACREP accreditation prior to enrollment in a graduate-level counseling program. In stark contrast, post-enrollment, less than 3% of participants reported lacking awareness of the importance of CACREP accreditation. Post-enrollment, the participants appeared to perceive CACREP accreditation as very important, with over half of the participants (53.6%) reporting this perception. Significant differences existed in participants’ awareness of CACREP accreditation prior to enrollment between participants enrolled in CACREP- and non-CACREP-accredited programs. A possible explanation for this finding may be that participants who were aware of CACREP accreditation prioritized this factor differently when making an enrollment decision. Regardless of the CACREP accreditation status of their graduate-level counseling programs, participants’ knowledge of CACREP accreditation increased significantly following program enrollment. This result suggests that accreditation is an effectively shared domain of professional socialization within counselor preparation programs, but one largely not communicated to students before formal entry into the field.
Overall, the results of this study provide a valuable window to the varied factors that prospective counseling students consider when making graduate program enrollment decisions. Interestingly, while accreditation represented an important factor in this decision-making process, many students lacked awareness of accreditation and subsequent implications of attending a CACREP-accredited program prior to enrollment. Post-enrollment, awareness of and importance ascribed to program accreditation increased for students, indicating that some students’ selection priorities changed with increased knowledge about accreditation. Ultimately, though enrollment decisions are personal choices in which students consider a number of factors, this study’s findings suggest that unfamiliarity with accreditation might impact those decisions.
Limitations and Recommendations for Further Research
Several limitations to this study must be noted. First, the results may have been biased by the use of a purposeful volunteer sample, with counseling program representatives electing whether to participate based on unknown motivations. Additionally, while the participation rate was ascertainable at the institutional level, the participation rate at the individual student level was unknown, as the number of students who received the instrument at each participating institution was not collected. Second, the binary designation of CACREP-accredited and non-CACREP-accredited programs is broad and may not sufficiently account for the rich variation across and within programs; for example, the research design did not account for programs working toward accreditation. Further, the use of self-reported program demographic information (e.g., accreditation status, institution name) may have affected findings, as over 15% of participants preferred not to answer or gave incorrect data. Finally, the data analysis did not address potential differences in participants’ responses across program cognate areas, full- and part-time enrollment statuses, or traditional and virtual program delivery formats. Future research may be informed by consideration of these demographic variables, as well as the possible relationship of students’ gender, age and race/ethnicity to graduate program enrollment decisions. Additionally, given that many participants lacked awareness of CACREP accreditation prior to enrollment but acquired this knowledge while enrolled, future research should examine the specific educative venues through which students learn about CACREP accreditation before and after enrollment in graduate-level counseling programs. Results of research examining how counseling students become, or fail to become, knowledgeable about CACREP accreditation can inform outreach efforts.
Qualitative examination of these questions, as well as of students’ lived experiences within and outside CACREP-accredited programs, would be particularly helpful. Examination of counselor educators’ levels of awareness of and importance ascribed to CACREP, within both accredited and non-accredited programs, also is suggested.
Implications for Counselor Preparation Programs and the Broader Profession
Results of this study suggest critical disparities in counseling students’ awareness and perceptions of CACREP accreditation prior to and following enrollment in graduate-level counseling programs. Considering the growing implications of accreditation within the counseling profession, this study’s findings substantiate a professional need to assist individuals in making optimally informed decisions about graduate school. Such assistance moves beyond the individual student level, bringing renewed attention to the obligations of counselor preparation programs and professional associations. Though prospective students bear responsibility for the enrollment decision, placing that responsibility on students alone becomes circular when one considers that about 50% of the students surveyed were unfamiliar with CACREP accreditation prior to graduate school enrollment.
This study supports Bardo’s (2009) assertion that programs are responsible for educating students about the benefits, challenges and rationale of accreditation. Transparent, educative dissemination of information about the significance of accreditation is becoming paramount, particularly in light of new state-level requirements for licensure (License as a Professional Counselor, 2014) and continued movement toward licensure portability, which may introduce new liabilities for programs not accredited by CACREP. Programs may wish to integrate information about CACREP accreditation into recruitment processes and application materials, such as program websites, on-campus visits and open houses, and communications with prospective students. The intention is to help students make well-informed choices among counseling graduate programs in light of their individual preferences and goals. For non-accredited programs, such transparent discussions may carry additional implications, considering that participants in this study deemed accreditation an important enrollment decision factor. However, because students prioritize enrollment decision factors differently, non-accredited programs can still attract students through their prestige, philosophy, faculty, location and other factors that individuals prioritize.
Broader Professional Level
Among contemporary influences on the counseling profession, the TRICARE resolution is a particularly significant event. Graduation from a CACREP-accredited counselor preparation program increasingly differentiates students’ postgraduation employment and licensure opportunities. It is essential to recognize the differing, and potentially incongruent, contexts emerging for CACREP-accredited and non-CACREP-accredited programs. While complex, there is a clear need for proactive and inclusive dialogue across the profession that both minimizes potential collateral damage and maximizes the power of unified preparation standards for achievement of broader goals of professional recognition and licensure portability.
Results of this study lend support to the assertion that CACREP and other professional associations must find new ways of reaching out to non-accredited programs to assist them in recognizing the benefits and importance of accreditation, not only for their graduating students and individual institutions, but also for the counseling profession as a whole (Bobby, 2013). It also is essential that both financial support and mentorship continue to be provided to counselor preparation programs seeking and maintaining CACREP accreditation. Directed professional advocacy efforts to inform various stakeholders about the importance of CACREP accreditation as a national preparation standard also are recommended (Mascari & Webber, 2013).
The history of CACREP as an accrediting body has been and continues to be inextricably connected to broader movements of the counseling profession. Ultimately, the credibility and importance of CACREP accreditation remains grounded in the larger profession it serves. Ongoing respectful and critical dialogue related to CACREP is imperative within the general profession and, more specifically, with potential students of graduate-level counseling programs. Such transparent discussions are grounded in this study’s findings: although many students considered accreditation an influential factor when making enrollment decisions, nearly half of the participants sampled were unaware of accreditation prior to enrollment in a counseling graduate program. Assisting vested stakeholders, including institutions and students, in making informed decisions is an important part of the dialogue that this research introduces and invites others to continue.
Conflict of Interest and Funding Disclosure
The authors reported receiving a grant contribution from CACREP for the development of this manuscript.
References
Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Directions for Higher Education, 145, 47–58. doi:10.1002/he.334
Barrio Minton, C. A., & Gibson, D. M. (2012). Evaluating student learning outcomes in counselor education: Recommendations and process considerations. Counseling Outcome Research and Evaluation, 3, 73–91.
Bobby, C. L. (2013). The evolution of specialties in the CACREP standards: CACREP’s role in unifying the profession. Journal of Counseling & Development, 91, 35–43. doi:10.1002/j.1556-6676.2013.00068.x
Council for Accreditation of Counseling and Related Educational Programs. (2009). 2009 standards. Retrieved from http://www.cacrep.org/wp-content/uploads/2013/12/2009-Standards.pdf
Council for Accreditation of Counseling and Related Educational Programs. (2013). CACREP position statement on licensure portability for professional counselors. Retrieved from http://www.cacrep.org/wp-content/uploads/2014/02/CACREP-Policy-Position-on-State-Licensure-adopted-7.13.pdf
Council for Accreditation of Counseling and Related Educational Programs. (2014). Annual report: 2013. Retrieved from http://issuu.com/cacrep/docs/cacrep_2013_annual_report_full_fina
Davis, T., & Gressard, R. (2011, August). Professional identity and the 2009 CACREP standards. Counseling Today, 54(2), 46–47.
Hilston, J. (2006, April 24). Reasons influencing college choice in the US. Pittsburgh Post-Gazette, p. A1.
Hossler, D., & Gallagher, K. S. (1987). Studying student college choice: A three-phase model and the implications for policymakers. College and University, 62, 207–221.
Ivy, J., & Naude, P. (2004). Succeeding in the MBA marketplace: Identifying the underlying factors. Journal of Higher Education Policy and Management, 26, 401–417. doi:10.1080/1360080042000290249
Johnson, E., Epp, L., Culp, C., Williams, M., & McAllister, D. (2013, July). What you don’t know could hurt your practice and your clients. Counseling Today, 56(1), 62–65.
Kallio, R. E. (1995). Factors influencing the college choice decisions of graduate students. Research in Higher Education, 36, 109–124. doi:10.1007/BF02207769
License as a Professional Counselor, 47 Ohio Rev. Code 232 § 4757.23 (2014).
Mascari, J. B., & Webber, J. (2013). CACREP accreditation: A solution to license portability and counselor identity problems. Journal of Counseling & Development, 91, 15–25. doi:10.1002/j.1556-6676.2013.00066.x
Milsom, A., & Akos, P. (2005). CACREP’s relevance to professionalism for school counselor educators. Counselor Education and Supervision, 45, 147–158. doi:10.1002/j.1556-6978.2005.tb00137.x
Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics. Advances in Health Sciences Education, 15, 625–632. doi:10.1007/s10459-010-9222-y
Paradise, L. V., Lolan, A., Dickens, K., Tanaka, H., Tran, P., & Doherty, E. (2011, June). Program coordinators react to CACREP standards. Counseling Today, 53(12), 50–52.
Poock, M. C., & Love, P. G. (2001). Factors influencing the program choice of doctoral students in higher education administration. Journal of Student Affairs Research and Practice, 38, 203–223.
Ritchie, M., & Bobby, C. (2011, February). CACREP vs. the Dodo bird: How to win the race. Counseling Today, 53(8), 51–52.
TRICARE. (2014, October 31). Number of beneficiaries. Retrieved from http://www.tricare.mil/About/Facts/BeneNumbers.aspx?sc_database=web
Urofsky, R. I. (2013). The Council for Accreditation of Counseling and Related Educational Programs: Promoting quality in counselor education. Journal of Counseling & Development, 91, 6–14. doi:10.1002/j.1556-6676.2013.00065.x
Urofsky, R. I., Bobby, C. L., & Ritchie, M. (2013). CACREP: 30 years of quality assurance in counselor education: Introduction to the special section. Journal of Counseling & Development, 91, 3–5. doi:10.1002/j.1556-6676.2013.00064.x
Eleni M. Honderich, NCC, MAC, is an Adjunct Professor at the College of William and Mary. Jessica Lloyd-Hazlett, NCC, is an Assistant Professor at the University of Texas-San Antonio. Correspondence can be addressed to Eleni M. Honderich, College of William & Mary, School of Education, P.O. Box 8795, Williamsburg, VA 23187-8795, email@example.com.