222 The Professional Counselor | Volume 12, Issue 3

The meaning of reliability coefficients is contingent upon the construct of measurement and the stakes or consequences of the results for test takers (Kalkbrenner, 2021a). Kalkbrenner (2021b) offered the following tentative interpretive guidelines for adults' scores on attitudinal measures. For coefficient alpha: α < .70 = poor, α = .70 to .84 = acceptable, α ≥ .85 = strong. For coefficient omega: ω < .65 = poor, ω = .65 to .80 = acceptable, ω > .80 = strong. It is important to note that these thresholds are for adults' scores on attitudinal measures; acceptable internal consistency reliability estimates of scores should be much stronger for high-stakes testing.

Construct Validity Evidence of Test Scores. Construct validity involves the test's ability to accurately capture a theoretical or latent construct (AERA et al., 2014). Construct validity considerations are particularly important for counseling researchers, who tend to investigate latent traits as outcome variables. At a minimum, counseling researchers should report construct validity evidence for both internal structure and relations with theoretically relevant constructs. Internal structure (aka factorial validity) is a source of construct validity that represents the degree to which "the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based" (AERA et al., 2014, p. 16). Readers can refer to Kalkbrenner (2021b) for a free (open access publishing) overview of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) that is written in layperson's terms. Relations with theoretically relevant constructs (e.g., convergent and divergent validity) are another source of construct validity evidence that involves comparing scores on the test in question with scores on other reputable tests (AERA et al., 2014; Strauss & Smith, 2009).
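The reliability guidelines above are essentially a pair of cut-score rules. As a minimal sketch (the function name and structure are illustrative, not part of Kalkbrenner's guidelines), they could be expressed as:

```python
def interpret_reliability(value, coefficient="alpha"):
    """Classify an internal consistency estimate using Kalkbrenner's (2021b)
    tentative guidelines for adults' scores on attitudinal measures.

    These thresholds are context-bound: high-stakes testing demands
    much stronger coefficients than the cutoffs encoded here.
    """
    if coefficient == "alpha":
        # alpha: < .70 poor, .70-.84 acceptable, >= .85 strong
        if value < 0.70:
            return "poor"
        return "acceptable" if value < 0.85 else "strong"
    if coefficient == "omega":
        # omega: < .65 poor, .65-.80 acceptable, > .80 strong
        if value < 0.65:
            return "poor"
        return "acceptable" if value <= 0.80 else "strong"
    raise ValueError("coefficient must be 'alpha' or 'omega'")
```

Note that the same numeric value can earn different labels depending on the coefficient: a value of .82, for example, is only acceptable as an alpha but strong as an omega.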
Guidelines for Reporting Validity Evidence. Counseling researchers should report existing evidence of at least internal structure and relations with theoretically relevant constructs (e.g., convergent or divergent validity) for each instrument they use. EFA results alone are inadequate for demonstrating internal structure validity evidence of scores, as EFA is a much less rigorous test of internal structure than CFA (Kalkbrenner, 2021b). In addition, EFA results can reveal multiple retainable factor solutions, which need to be confirmed via CFA before even initial internal structure validity evidence of scores can be established. Thus, both EFA and CFA are necessary for demonstrating initial evidence of the internal structure of test scores. In an extension of internal structure, counselors should also report existing convergent and/or divergent validity of scores. High correlations (r > .50) demonstrate evidence of convergent validity, and moderate-to-low correlations (r < .30, preferably r < .10) support divergent validity evidence of scores (Sink & Stroh, 2006; Swank & Mullen, 2017). In an ideal situation, a researcher will have the resources to test and report the internal structure of scores on the instrumentation with their sample (e.g., compute a CFA firsthand). However, CFA requires large sample sizes (Kalkbrenner, 2021b), which oftentimes are not feasible to obtain. It might be more practical for researchers to test and report relations with theoretically relevant constructs, though adding one or more questionnaires to data collection efforts can come at the cost of increased respondent fatigue. In these instances, researchers might consider reporting other forms of validity evidence (e.g., evidence based on test content, criterion validity, or response processes; AERA et al., 2014).
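The convergent/divergent correlation benchmarks above can likewise be sketched as a simple decision rule. This is an illustrative helper under the stated thresholds (the function name and the wording of its labels are mine, not from Sink & Stroh, 2006, or Swank & Mullen, 2017):

```python
def correlation_validity_evidence(r):
    """Interpret a correlation between scores on the test in question and a
    reputable comparison test, per the benchmarks cited in the text:
    r > .50 supports convergent validity; r < .30 (preferably < .10)
    supports divergent validity.
    """
    magnitude = abs(r)  # direction depends on how the comparison scale is keyed
    if magnitude > 0.50:
        return "supports convergent validity"
    if magnitude < 0.10:
        return "supports divergent validity (preferred range)"
    if magnitude < 0.30:
        return "supports divergent validity"
    return "inconclusive: too low for convergent, too high for divergent evidence"
```

A correlation of .40 between two instruments would therefore count as neither form of evidence under these benchmarks, which is why the choice of comparison instrument (one expected to converge, or one expected to diverge) matters at the design stage.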
In instances when computing firsthand validity evidence of scores is not logistically viable, researchers should be transparent about this limitation and pay especially careful attention to presenting evidence for cross-cultural fairness and norming.

Cross-Cultural Fairness and Norming

In a psychometric context, fairness (sometimes referred to as cross-cultural fairness) is a fundamental validity issue and a complex construct to define (AERA et al., 2014; Kane, 2010; Neukrug & Fawcett, 2015). I offer the following composite definition of cross-cultural fairness for the purposes of a