Gwen Bass, Ji Hee Lee, Craig Wells, John C. Carey, Sangmin Lee

The scale development and the exploratory and confirmatory factor analyses of the Protective Factor Index (PFI) are described. The PFI is a 13-item component of elementary students’ report cards that replaces typical items associated with student behavior. The PFI is based on the Construct-Based Approach (CBA) to school counseling, which proposes that primary and secondary prevention activities of school counseling programs should focus on socio-emotional development-related psychological constructs that are associated with students’ academic achievement and well-being, that have been demonstrated to be malleable, and that are within the range of expertise of school counselors. Teachers use the PFI to rate students’ skills in four construct-based domains that are predictive of school success. School counselors use teachers’ ratings to monitor student development and plan data-driven interventions.

 

Keywords: protective factors, factor analysis, school counselors, construct-based approach, student development

 

Contemporary models for school counseling practice (ASCA, 2012) emphasize the importance of school counselors using quantitative data related to students’ academic achievement to support professional decisions (Poynton & Carey, 2006), to demonstrate accountability (Sink, 2009), to evaluate activities and programs (Dimmitt, Carey, & Hatch, 2007), to advocate for school improvement (House & Martin, 1998), and to advocate for increased program support (Martin & Carey, 2014). While schools are data-rich environments and great emphasis is now placed on the use of data by educators, the readily available quantitative data elements (e.g., achievement test scores) are much better aligned with the work of classroom teachers than with the work of school counselors (Dimmitt et al., 2007). Whereas teachers are responsible for students’ acquisition of knowledge, counselors are responsible for improving students’ socio-emotional development in ways that promote achievement. Counselors need data related to students’ socio-emotional states (e.g., self-efficacy) and abilities (e.g., self-direction) that predispose them toward achievement so that they are better able to help students profit from classroom instruction and make sound educational and career decisions (Squier, Nailor, & Carey, 2014). However, measures directly associated with constructs related to socio-emotional development are not routinely collected or used in schools. The development of sound and useful measures of salient socio-emotional factors that are aligned with the work of school counselors and strongly related to students’ academic success and well-being would greatly improve counselors’ ability to identify students who need help, use data-based decision making in planning interventions, evaluate the effectiveness of interventions, demonstrate accountability for results, and advocate for students and for program improvements (Squier et al., 2014).

 

Toward this end, we developed the Protective Factor Index (PFI) and describe herein its development and initial exploratory and confirmatory factor analyses. The PFI is a 13-item component of elementary students’ report cards that replaces typical items associated with student deportment. The PFI is based on the Construct-Based Approach (CBA) to school counseling (Squier et al., 2014), which is based on the premise that primary and secondary prevention activities of school counseling programs should be focused on socio-emotional development-related psychological constructs that research has identified as strongly associated with students’ academic achievement and well-being, that have been demonstrated to be malleable, and that are within the range of expertise of school counselors. The CBA clusters these constructs into four areas reflecting motivation, self-direction, self-knowledge and relationship competence.

 

The present study was conducted as a collaboration between the Ronald H. Fredrickson Center for School Counseling Outcome Research and Evaluation and an urban district in the Northeastern United States. As described below, the development of the PFI was guided by the CBA-identified clusters of psychological states and processes (Squier et al., 2014). With input from elementary counselors and teachers, a 13-item report card and a scoring rubric were developed, such that teachers could rate each student on school counseling-related dimensions that have been demonstrated to underlie achievement and well-being. This brief measure was created with considerable input from the school personnel who would be implementing it, with the goal of targeting developmentally appropriate skills in a way that is efficient for teachers and useful for counselors. By incorporating the PFI into the student report card, we ensured that important and useful student-level achievement-related data could be easily collected multiple times per year for use by counselors. The purpose of this study was to explore relationships between the variables that are measured by the scale and to assess the factor structure of the instrument as the first step in establishing its validity. The PFI has the potential to become an efficient and accurate way for school counselors to collect data from teachers about student performance.

 

Method

 

Initial Scale Development

The PFI was developed as a tool to gather data on students’ socio-emotional development from classroom teachers. The PFI includes 13 items on which teachers rate students’ abilities related to four construct-based standards: motivation, self-direction, self-knowledge and relationships (Squier et al., 2014). These four construct clusters are believed to be foundational for school success (Squier et al., 2014). Specific items within a cluster reflect constructs that have been identified by research to be strongly associated with achievement and success.

 

The PFI assessment was developed through a collaborative effort between the research team and a group of district-level elementary school administrators and teachers. The process of creating the instrument involved an extensive review of existing standards-based report cards, socio-emotional indicators related to different student developmental levels, and rating scales measuring identified socio-emotional constructs. In addition, representatives from the district and members of the research team participated in a two-day summer workshop in August of 2013. These sessions included school counselors and teachers from each grade level, as well as a teacher of English language learners, a special education representative, and principals. All participants, except the principals, were paid for their time. Once the draft PFI instrument was completed, a panel of elementary teachers reviewed the items for developmental appropriateness and utility. The scale was then adopted across the district and piloted at all four (K–5) elementary schools during the 2013–2014 school year as a component of students’ report cards.

 

The PFI component of the report card consists of 13 questions, organized into four segments based on the construct-based standards: motivation (4 items), self-direction (2 items), self-knowledge (3 items) and relationships (4 items). The items address developmentally appropriate skills in each of these domains (e.g., demonstrates perseverance in completing tasks, seeks assistance when needed, works collaboratively in groups of various sizes). Teachers evaluate each student using dichotomous response options: “on target” and “struggling.” All classroom teachers receive the assessment and the scoring rubric that corresponds to their grade level. The rubric outlines the observable behaviors and criteria that teachers should use to determine whether or not a student demonstrates expected, age-appropriate skills in each domain. Because the PFI instrument is tailored to address developmentally meaningful competencies, three rubrics were developed to guide teacher ratings: one for kindergarten and first grade, one for second and third grade, and one for fourth and fifth grade.

 

At the same time that the PFI scale was developed, the district began using a computer-based system to enter report card data. Classroom teachers complete the social-emotional section of the standards-based report card electronically at the close of each marking period, when they also evaluate students’ academic performance. The data collected can be accessed and analyzed electronically by school administrators and counselors. Additionally, data from two marking periods during the 2013–2014 school year were exported to the research team for analysis (with appropriate steps taken to protect students’ confidentiality). These data were used in the exploratory and confirmatory factor analyses described in this paper.

 

Sample

The PFI was adopted across all four of the school district’s elementary schools, which house kindergarten through fifth grade. All elementary-level classroom teachers completed the PFI for each of the students in their classes. The assessment was completed three times during the 2013–2014 school year: in December, March and June. Data from the first two collections were used for analysis: the December data (N = 1,158) for the exploratory factor analysis (EFA), and the March data, randomly divided into two subsamples (subsample A = 599 students and subsample B = 591 students), for the confirmatory factor analyses (CFAs).

 

The sample for this study was highly diverse: 52% were African American, 17% were Asian, 11% were Hispanic, 16% were Caucasian, and the remaining students identified as multi-racial, Pacific Islander, Native Hawaiian, or Native American. In the EFA, 53.2% (n = 633) of the sample were male and 46.8% (n = 557) of the sample were female. Forty-seven kindergarten students (3.9%), 242 first-grade students (20.3%), 216 second-grade students (18.2%), 222 third-grade students (18.7%), 220 fourth-grade students (18.5%), and 243 fifth-grade students (20.4%) contributed data to the EFA.

 

The first CFA included data from 599 students, 328 males (54.8%) and 271 females (45.2%). The data included 23 kindergarten students (3.8%), 136 first-grade students (22.7%), 100 second-grade students (16.7%), 107 third-grade students (17.9%), 102 fourth-grade students (17.0%), and 131 fifth-grade students (21.9%). The data analyzed for the second CFA included assessments of 591 students, 305 males (51.6%) and 286 females (48.4%). The data consisted of PFI assessments from 24 kindergarten students (4.1%), 106 first-grade students (17.9%), 116 second-grade students (19.6%), 115 third-grade students (19.5%), 118 fourth-grade students (20.0%), and 112 fifth-grade students (19.0%).

 

Procedures

Classroom teachers completed PFI assessments for all students in their classes at the close of each marking period using the rubrics described above. The district’s information technology specialist, in collaboration with members of the research team, extracted the data from the district’s electronic student data management system. This process included establishing mechanisms to ensure confidentiality, and no identifying information was extracted from student records.

 

Data Analyses

The PFI report card data were analyzed in three phases. The first phase involved conducting an EFA on the data from the first marking period. In the second phase, a randomly selected half of the data compiled during the second marking period was analyzed with a confirmatory factor analysis. Finally, the remaining half of the second marking period data was analyzed through another CFA.
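
As a concrete illustration of this three-phase split, the following Python sketch shows one way the exported report-card data could be divided. The file names, column layout and random seed are hypothetical stand-ins, not artifacts of the study.

```python
import pandas as pd

# Hypothetical file names standing in for the district's electronic
# report card export described above.
december = pd.read_csv("pfi_december.csv")  # marking period 1: used for the EFA
march = pd.read_csv("pfi_march.csv")        # marking period 2: used for the CFAs

# Randomly divide the March data into two halves, one for the first
# CFA and one for the cross-validation CFA.
subsample_a = march.sample(frac=0.5, random_state=42)
subsample_b = march.drop(subsample_a.index)
```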

 

Phase 1. Exploratory factor analysis. An initial EFA of the 13 items on the survey instrument was conducted using the weighted least squares mean-adjusted (WLSM) estimator with oblique geomin rotation. The WLSM estimator appropriately uses tetrachoric correlation matrices when items are categorical (Muthén, du Toit, & Spisic, 1997). The EFA was conducted using Mplus version 5 (Muthén & Muthén, 1998–2007).
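
The analysis itself was run in Mplus, which is proprietary; for readers without access, the sketch below shows a roughly analogous EFA in Python using the factor_analyzer package. The caveats in the comments matter: factor_analyzer does not implement Mplus’s WLSM estimator or tetrachoric correlations, geomin rotation (“geomin_obl”) is available only in recent versions of the package, and item_columns is a hypothetical list of the 13 PFI column names, so this is an approximation rather than a replication.

```python
from factor_analyzer import FactorAnalyzer

# item_columns: hypothetical list of the 13 dichotomous PFI columns,
# coded 0 = "struggling" and 1 = "on target".
# Caveat: Mplus's WLSM estimator factors a tetrachoric correlation
# matrix; FactorAnalyzer works from Pearson correlations, which
# understate associations between dichotomous items.
efa = FactorAnalyzer(n_factors=3, rotation="geomin_obl", method="minres")
efa.fit(december[item_columns])

print(efa.loadings_)                       # rotated pattern matrix (13 x 3)
original_eigen, _ = efa.get_eigenvalues()  # eigenvalues for the scree check
```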

 

Model fit was assessed using several goodness-of-fit indices: comparative fit index (CFI), Tucker-Lewis Index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). We assessed model fit based on the following recommended cutoff values from Hu and Bentler (1999): CFI and TLI values greater than 0.95, RMSEA value less than 0.06, and SRMR value less than 0.08.
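
Because these cutoffs are applied repeatedly in the Results section, a small helper makes the decision rule explicit. This is a sketch of the published criteria, not code from the study.

```python
def meets_hu_bentler_cutoffs(cfi, tli, rmsea, srmr):
    """Apply the Hu and Bentler (1999) guidelines used in this study:
    CFI and TLI above 0.95, RMSEA below 0.06, SRMR below 0.08."""
    return {
        "CFI": cfi > 0.95,
        "TLI": tli > 0.95,
        "RMSEA": rmsea < 0.06,
        "SRMR": srmr < 0.08,
    }

# Example with the three-factor EFA solution reported in the Results:
print(meets_hu_bentler_cutoffs(cfi=0.994, tli=0.988, rmsea=0.052, srmr=0.036))
# {'CFI': True, 'TLI': True, 'RMSEA': True, 'SRMR': True}
```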

 

Phase 2. First confirmatory factor analysis. An initial CFA was conducted on the 13 items from the survey instrument to assess a three-factor measurement model that was based on theory and on the results of the exploratory analysis. Figure 1 provides the conceptual path diagram for the measurement model. Six items (3, 4, 6, 7, 11 and 13) loaded on factor one (C1), named “academic temperament.” Three items (8, 9 and 12) loaded on factor two (C2), referred to as “self-knowledge.” Four items (1, 2, 5 and 10) loaded on factor three (C3), titled “motivation.” All three latent variables were expected to be correlated in the measurement model.
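
These item-to-factor assignments translate directly into a lavaan-style model specification. The sketch below uses the Python package semopy with hypothetical column names (item_1 through item_13, in report-card order); semopy’s default objective differs from the WLSMV estimation used in the study, so this illustrates the model structure, not the exact estimation.

```python
import semopy

# Three-factor measurement model from Figure 1; the latent variables
# are allowed to correlate, as in the study's measurement model.
MODEL_DESC = """
academic_temperament =~ item_3 + item_4 + item_6 + item_7 + item_11 + item_13
self_knowledge =~ item_8 + item_9 + item_12
motivation =~ item_1 + item_2 + item_5 + item_10
"""

model = semopy.Model(MODEL_DESC)
model.fit(subsample_a)   # first CFA subsample (n = 599)
print(model.inspect())   # parameter estimates, including loadings
```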

 

This CFA was used to assess the measurement model with respect to fit as well as convergent and discriminant validity. Large standardized factor loadings, which indicate strong inter-correlations among items associated with the same latent variable, support convergent validity. Discriminant validity is evidenced by correlations among the latent variables that are less than the standardized factor loadings; that is, the latent variables are distinct, albeit correlated (see Brown, 2006; Kline, 2011; Schumacker & Lomax, 2010).
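
This decision rule, comparing the largest latent correlation against the smallest standardized loading, can be written compactly. A minimal sketch, assuming the loadings and factor correlations are available as NumPy arrays:

```python
import numpy as np

def discriminant_validity_supported(loadings, factor_corr):
    """Heuristic described above: every correlation among the latent
    variables should be smaller than the standardized factor loadings."""
    nonzero_loadings = np.abs(loadings[loadings != 0])
    off_diagonal = factor_corr[np.triu_indices_from(factor_corr, k=1)]
    return np.abs(off_diagonal).max() < nonzero_loadings.min()
```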

 

The computer program Mplus 5 (Muthén & Muthén, 1998–2007) was used to conduct the CFA with weighted least squares mean- and variance-adjusted (WLSMV) estimation, a robust estimator for categorical data in a CFA (Brown, 2006). For the CFA, Mplus provides fit indices for a given dimensional structure that can be interpreted in the same way as in an EFA.

 

Phase 3. Second confirmatory factor analysis. A second CFA was conducted for cross-validation. This second CFA was conducted on the 13 items from the survey instrument to assess a three-factor measurement model based on the results of the first CFA. The same computer program and estimation procedures were used to conduct the second CFA.


Results

 

Phase 1. Exploratory Factor Analysis

Complete descriptive statistics for the responses to each of the 13 items are presented in Table 1. The response categories for all questions were dichotomous, identified in Table 1 as “On Target” or “Struggling,” with incomplete data labeled “Missing.” A total of 1,158 surveys were analyzed through the EFA. The decision of how many factors to retain was initially guided by visual inspection of the scree plot and the eigenvalues. Two factors had eigenvalues greater than one (first factor = 8.055, second factor = 1.666, third factor = 0.869). The scree test also supported retaining two factors, because two factors fell to the left of the point where the scree plot approached its asymptote. However, considering the goodness-of-fit indices, the models specifying three-factor and four-factor structures also fit the data well. Methodologists have suggested that “underfactoring” is more problematic than “overfactoring” (Wood, Tataryn, & Gorsuch, 1996). Thus, there was a need to arrive at a factor solution that balanced plausibility and parsimony (Fabrigar, Wegener, MacCallum, & Strahan, 1999).
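
The eigenvalue and scree inspection described here is straightforward to reproduce. A minimal sketch, again assuming the hypothetical december data frame and item_columns list from earlier, and using a Pearson rather than tetrachoric correlation matrix:

```python
import numpy as np
import matplotlib.pyplot as plt

# 13 x 13 item correlation matrix (the published analysis used
# tetrachoric correlations; Pearson is shown for simplicity).
corr = december[item_columns].corr().to_numpy()
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")  # eigenvalue-greater-than-one rule
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot for the 13 PFI items")
plt.show()
```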

Methodologists (e.g., Costello & Osborne, 2005; Fabrigar et al., 1999) have indicated that when the number of factors to retain is unclear, conducting a series of analyses is appropriate. Therefore, two-, three-, and four-factor models were evaluated and compared to determine which model might best explain the data in the most parsimonious and interpretable fashion. In this case, the two-factor model was eliminated because it did not lend itself to meaningful interpretability. The four-factor model was excluded because one of the factors was related to only one item, which is not recommended (Fabrigar et al., 1999). Researchers evaluated models based on model fit indices, item loadings above 0.40 (Kahn, 2006), and interpretability (Fabrigar et al., 1999).

 

The three-factor measurement model fit the data well (RMSEA = 0.052, SRMR = 0.036, CFI = 0.994, TLI = 0.988, χ2 = 173.802, df = 42, p < 0.001). As shown in Table 2, the standardized factor loadings were large, ranging from 0.58 to 0.97. The first factor included six items reflecting students’ abilities at emotional self-control and at maintaining good social relationships in school (e.g., demonstrates resilience after setbacks and works collaboratively in groups of various sizes). This first factor was named “academic temperament.”
The second factor included three items. All of the items reflected the understanding that students have about their own abilities, values, preferences and skills (e.g., identifies academic strengths and abilities and identifies things the student is interested in learning). This second factor was named “self-knowledge.” The third factor included four items. All of the items reflected personal characteristics that help students succeed academically by focusing and maintaining energy on goal-directed activities (e.g., demonstrates an eagerness to learn and engages in class activities). This third factor was named “motivation.” The three-factor measurement model was both parsimonious and interpretable.

 

The two-factor model did not fit the data as well as the three-factor model (RMSEA = 0.072, SRMR = 0.058, CFI = 0.985, TLI = 0.978, χ2 = 371.126, df = 53, p < 0.001). As shown in Table 2, the standardized factor loadings were large, ranging from 0.59 to 0.94. The first factor included seven items reflecting both self-knowledge and motivation; interpretability favored differentiating these two constructs. Overall, the two-factor model provided relatively poor goodness-of-fit indices and interpretability.

 

The four-factor model fit the data slightly better than the three-factor model (RMSEA = 0.035, SRMR = 0.023, CFI = 0.998, TLI = 0.995, χ2 = 76.955, df = 32, p < 0.001). As shown in Table 2, the standardized factor loadings were large, ranging from 0.54 to 1.01. However, the first factor included only one item, and retained factors should include at least three items with salient loadings (Fabrigar et al., 1999), so the first factor was removed. The second factor comprised six items that all relate to the construct of academic temperament. The third factor included four items that reflect motivation. The fourth factor was composed of three items that relate to self-knowledge. Although the four-factor model was strong in terms of goodness-of-fit indices, the first factor could not be retained because it involved only one item. Therefore, given this series of analyses, the three-factor model was selected as the most appropriate.

 

Phase 2. First Confirmatory Factor Analysis

Complete descriptive statistics for the items are presented in Table 3. The responses for all items were dichotomous. A total of 569 (95.0%) of 599 surveys were completed and were used in the first CFA.

 


The three-factor measurement model provided good fit to the data (RMSEA = 0.059, CFI = 0.974, TLI = 0.984, χ2 = 104.849, df = 35, p < 0.001). Table 4 reports the standardized factor loadings, which can be interpreted as correlation coefficients, for the three-factor model. The standardized factor loadings were statistically significant (p < 0.001) and sizeable, ranging from 0.72 to 0.94. The large standardized factor loadings support convergent validity in that each indicator was primarily related to the respective underlying latent variable. Table 5 reports the correlation coefficients among the three latent variables. The correlation coefficients were less than the standardized factor loadings, thus supporting discriminant validity.

 


Phase 3. Second Confirmatory Factor Analysis

Complete descriptive statistics for the items are presented in Table 3. The responses for all items were dichotomous. A total of 564 (95.4%) of 591 surveys had all items complete and were used in the second CFA.

 

The second CFA was conducted on the three-factor measurement model to cross-validate the results from the first CFA. The three-factor model provided acceptable fit to the data in this second CFA (RMSEA = 0.055, CFI = 0.976, TLI = 0.983, χ2 = 100.032, df = 37, p < 0.001). Table 4 reports the standardized factor loadings, which can be interpreted as correlation coefficients, for the three-factor model. The standardized factor loadings were statistically significant and sizeable, ranging from 0.70 to 0.93. These large standardized factor loadings support convergent validity in that each indicator was primarily related to the respective underlying latent variable. Table 5 reports the correlation coefficients among the three latent variables. The correlation coefficients were less than the standardized factor loadings, thus supporting discriminant validity. Given these results, the three-factor model appears to be the most reasonable solution.

 

Discussion

 

The ASCA National Model (2012) for school counseling programs underscores the value of using student achievement data to guide intervention planning and evaluation. This requires schools to find ways to collect valid and reliable information that provides a clear illustration of students’ skills in areas that are known to influence academic achievement. The purpose of developing the PFI was to identify and evaluate socio-emotional factors that relate to students’ academic success and emotional health, and to use the findings to inform the efforts of school counselors. The factor analyses in this study were used to explore how teachers’ ratings of students’ behavior on the 13-item PFI scale clustered around specific constructs that research has shown are connected to achievement and underlie many school counseling interventions. Because the scoring rubrics are organized into three grade levels (kindergarten and first grade, second and third grade, and fourth and fifth grade), the behaviors associated with each skill are focused at an appropriate developmental level. This level of detail allows teachers to respond to questions about socio-emotional factors in ways that are consistent with behaviors that students are expected to exhibit at different ages and grade levels.

 

Considering parsimony and interpretability, the EFA and the two CFAs all resulted in the selection of a three-factor model as the best fit for the data. Through the EFA, we compared two-, three- and four-factor models. The three-factor model showed appropriate goodness-of-fit indices, item loadings and interpretability, and the two CFAs cross-validated the three-factor model. In this model, the fundamental constructs identified in association with students’ academic behavior are “academic temperament,” “self-knowledge,” and “motivation.” “Self-knowledge” and “motivation” correspond to two of the four construct clusters identified by Squier et al. (2014) as critical socio-emotional dimensions related to achievement. The “academic temperament” items reflected either self-regulation skills or the ability to engage in productive relationships in school. Squier et al. (2014) differentiated between self-direction (including emotional self-regulation constructs) and relationship skills clusters.

 

Although not perfectly aligned, this factor structure of the PFI is consistent with the CBA model for clustering student competencies and corresponds to previous research on the links between construct-based skills and academic achievement. Teacher ratings on the PFI seemed to reflect teachers’ perceptions that self-regulation abilities and good relationship skills are closely related constructs. These results indicate that the PFI may be a useful instrument for identifying elementary students’ strengths and needs in terms of exhibiting developmentally appropriate skills that are known to influence academic achievement and personal well-being.

 

Utility of Results

The factor analyses conducted in this study suggest that the PFI yields meaningful data that can support data-based decision making and evaluation. This tool has possible implications for school counselors in their efforts to provide targeted support addressing the academic and socio-emotional needs of elementary school students. The PFI can be completed in conjunction with the academic report card and is minimally time-intensive for teachers. In addition to school-based applications, the socio-emotional information yielded is provided to parents along with their child’s academic report card. This has the potential to support school–home connections that could prove useful in engaging families in interventions, which is known to be beneficial. Finally, the instrument can help school counselors identify struggling students, create small, developmentally appropriate groups based on specific needs, work with teachers to address student challenges that are prevalent in their classrooms, evaluate the success of interventions, advocate for program support, and share their work with district-level administrators. The PFI could come to be used as an early warning indicator to identify students showing socio-emotional development issues that predispose them toward disengagement and underachievement.

 

The PFI also may prove useful as a school counseling evaluation measure. Changes on PFI items (and perhaps on subscales related to the three underlying dimensions identified in the present study) could be used as data in the evaluation of school counseling interventions and programs. Such evaluations would be greatly facilitated by the availability of data that are both within the domain of school counselors’ work and known to be strongly related to achievement.

 

The findings offer promise in terms of practical implications for school personnel and parents. The analysis identifies “academic temperament,” “self-knowledge” and “motivation” as coherent factors, consistent with research demonstrating that these constructs are foundational to school success. The results indicate that teachers’ ratings of students’ behavior align with findings of existing research and, thus, that the instrument is evaluating appropriate skills and constructs.

 

Implications for School Counselors

The PFI was developed as a data collection tool that could be easily integrated into schools for the purpose of assessing students’ development of skills that correspond to achievement-related constructs. Obtaining information about competencies that underlie achievement is critical for school counselors, who typically lead interventions that target such skills in an effort to improve academic outcomes. Many developmental school counseling curricula address skills that fall within the domains of “academic temperament,” “self-knowledge,” and “motivation” (see http://www.casel.org/guide/programs for a complete list of socio-emotional learning programs). Teachers can complete the PFI electronically, at the same intervals as report cards and in a similarly user-friendly format. Therefore, the PFI facilitates regular communication between teachers and school counselors throughout the school year. Counselors can use the data to identify appropriate interventions and to monitor students’ responsiveness to school counseling curricula over time and across settings. Although not included in this analysis, school counselors could also examine correlations between PFI competencies and achievement to explore how academic outcomes relate to school counseling interventions and curricula.

 

Limitations and Further Study

Despite the promising findings from these factor analyses, further research is needed to confirm these results and to address the limitations of the present study. Additional studies are needed to confirm the reliability of PFI teacher ratings, and future research should explore inter-rater reliability. Further research also is needed to determine whether reliable and valid PFI subscales can be created based on the three dimensions found in the present study. Analyses of test-retest reliability, construct validity and subscale inter-correlations should be conducted to determine whether PFI subscales with adequate psychometric characteristics can be created. Subsequent studies should consider whether students identified by the PFI as being in need of intervention also are found by other measures to be in need of support. Another important direction for future research is to examine the relationships between teachers’ ratings of students’ socio-emotional skills on the PFI and the students’ academic performance. Establishing a strong link between the PFI and actual academic achievement is an essential step toward documenting the potential utility of the index as a screening tool. As this measure was developed to enhance data collection for data-based decision making, future research should explore school counselors’ experiences with implementation, as well as qualitative reporting on the utility of PFI results for informing programming.

 

Although the present study suggests that the PFI in its current iteration is practically useful, researchers may consider altering the tool in subsequent iterations. One possible revision involves changing the format from dichotomous ratings to a Likert scale, which could allow teachers to evaluate student behavior with greater specificity and which would benefit subscale construction. Another possible change is refining the rubrics to improve the examples of student behavior that correspond to each rating on the scale and to ensure that each relates accurately to expectations at each developmental level. Furthermore, most of the items on the current PFI examine externalizing behaviors, which raises the possibility that students who achieve at an academically average level, but who experience more internalizing behaviors (such as anxiety), might not be identified for intervention. Subsequent iterations of the PFI could include additional areas of assessment, such as rating school behavior that is indicative of internalized challenges. Finally, it will be important to evaluate school counselors’ use of the PFI to determine whether it actually provides the information needed for program planning and evaluation in an efficient, cost-effective fashion, as intended.

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

 


References

 

American School Counselor Association. (2012). The ASCA National Model: A framework for school counseling programs (3rd ed.). Alexandria, VA: Author.

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford.

Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for
getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), 1–9.

Dimmitt, C., Carey, J. C., & Hatch, T. (Eds.) (2007). Evidence-based school counseling: Making a difference with data-driven practices. Thousand Oaks, CA: Corwin.

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299. doi:10.1037//1082-989X.4.3.272

House, R. M., & Martin, P. J. (1998). Advocating for better futures for all students: A new vision for school
counselors. Education, 119, 284–291.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. doi:10.1080/10705519909540118

Kahn, J. H. (2006). Factor analysis in counseling psychology research, training, and practice: Principles, advances, and applications. The Counseling Psychologist, 34, 684–718. doi:10.1177/0011000006286347

Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford.

Martin, I., & Carey, J. (2014). Development of a logic model to guide evaluations of the ASCA National Model
for School Counseling Programs. The Professional Counselor, 4, 455–466. doi:10.15241/im.4.5.455

Muthén, B. O., du Toit, S. H. C., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Psychometrika, 75, 1–45.

Muthén, L. K., & Muthén, B. O. (1998–2007). Mplus user’s guide (5th ed.). Los Angeles, CA: Muthén & Muthén.

Poynton, T. A., & Carey, J. C. (2006). An integrative model of data-based decision making for school counseling. Professional School Counseling, 10, 121–130.

Schumacker, R. E., & Lomax, R. G. (2010). A beginner’s guide to structural equation modeling (3rd ed.). New York,
NY: Routledge.

Sink, C. A. (2009). School counselors as accountability leaders: Another call for action. Professional School Counseling, 13, 68–74. doi:10.5330/PSC.n.2010-13.68

Squier, K. L., Nailor, P., & Carey, J. C. (2014). Achieving excellence in school counseling through motivation, self-direction, self-knowledge and relationships. Thousand Oaks, CA: Corwin.

Wood, J. M., Tataryn, D. J., & Gorsuch, R. L. (1996). Effects of under- and overextraction on principal axis factor analysis with varimax rotation. Psychological Methods, 1, 354–365. doi:10.1037//1082-989X.1.4.354

 

 

Gwen Bass is a doctoral researcher at the Ronald H. Fredrickson Center for School Counseling Outcome Research and Evaluation at the University of Massachusetts. Ji Hee Lee is a doctoral student at Korea University in South Korea and a Center Fellow of the Ronald H. Fredrickson Center for School Counseling Outcome Research and Evaluation at the University of Massachusetts. Craig Wells is an Associate Professor at the University of Massachusetts. John C. Carey is a Professor of School Counseling and the Director of the Ronald H. Fredrickson Center for School Counseling Outcome Research and Evaluation at the University of Massachusetts. Sangmin Lee is an Associate Professor at Korea University. Correspondence can be addressed to Gwen Bass, School of Cognitive Science, Adele Simmons Hall, Hampshire College, 893 West Street, Amherst, MA 01002, gjbass@gmail.com.