
quantitative Measures section: the degree to which test construction, administration procedures, interpretations, and uses of results are equitable and represent an accurate depiction of a diverse group of test takers’ abilities, achievement, attitudes, perceptions, values, and/or experiences (AERA et al., 2014; Educational Testing Service [ETS], 2016; Kane, 2010; Kane & Bridgeman, 2017). Counseling researchers should consider the following central fairness issues when selecting or developing instrumentation: measurement bias, accessibility, universal design, equivalent meaning (invariance), test content, opportunity to learn, test adaptations, and comparability (AERA et al., 2014; Kane & Bridgeman, 2017). Providing a comprehensive overview of fairness is beyond the scope of this article; however, readers are encouraged to read Chapter 3 of the AERA et al. (2014) standards, Fairness in Testing. In the Measures section, counseling researchers should include commentary on how cross-cultural fairness guided their selection and administration of instruments and their interpretation of test results (AERA et al., 2014; Kalkbrenner, 2021b). Cross-cultural fairness and construct validity are related constructs (AERA et al., 2014). Accordingly, citing construct validity evidence for test scores (see the previous section) derived from normative samples similar to the researcher’s target population is one way to provide evidence of cross-cultural fairness. However, construct validity evidence alone might not be a sufficient indication of cross-cultural fairness, as the latent meaning of test scores is a function of test takers’ cultural context (Kalkbrenner, 2021b). To this end, when selecting instrumentation, researchers should review the original psychometric studies and consider the normative sample(s) from which test scores were derived.

Commentary on the Danger of Using Self-Developed and Untested Scales

Counseling researchers have an ethical duty to “carefully consider the validity, reliability, psychometric limitations, and appropriateness of instruments when selecting assessments” (ACA, 2014, p. 11). Quantitative researchers might encounter instances in which no existing scale measures their desired construct (latent/inferred variable). In these cases, the first step in the line of research is oftentimes to conduct an instrument development and score validation study (AERA et al., 2014; Kalkbrenner, 2021b). Detailing the protocol for conducting psychometric research is outside the scope of this article; however, readers can refer to the MEASURE Approach to Instrument Development (Kalkbrenner, 2021c) for a free (open access) overview of the steps in an instrument development and score validation study. Adapting an existing scale can be an option in lieu of instrument development; however, according to the AERA et al. (2014) standards, “an index that is constructed by manipulating and combining test scores should be subjected to the same validity, reliability, and fairness investigations that are expected for the test scores that underlie the index” (p. 210). Although it is not necessary that all quantitative researchers become psychometricians and conduct full-fledged psychometric studies to validate scores on instrumentation, researchers do have a responsibility to report evidence of the reliability, validity, and cross-cultural fairness of test scores for each instrument they use. Without at least initial construct validity testing of scores (calibration), researchers cannot determine what, if anything, an untested instrument actually measures.
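To ground this reporting responsibility in a concrete example, the short Python sketch below computes coefficient (Cronbach’s) alpha, a common internal consistency estimate, for item-level scores from a researcher’s own sample. The data frame, item names, and response values are hypothetical, and reporting alpha alone is not a substitute for the fuller validity and cross-cultural fairness evidence described above.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha for item scores (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a 4-item subscale (1-5 Likert scale)
responses = pd.DataFrame({
    "item1": [4, 3, 5, 2, 4, 5],
    "item2": [4, 2, 5, 3, 4, 4],
    "item3": [3, 3, 4, 2, 5, 5],
    "item4": [5, 3, 5, 2, 4, 4],
})

print(f"Coefficient alpha = {cronbach_alpha(responses):.2f}")
```

In practice, an estimate like this would be reported for each subscale and interpreted alongside the reliability, validity, and fairness evidence published for the instrument’s original normative samples.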
Data Analysis

Counseling researchers should report and explain the selection of their data analytic procedures (e.g., statistical analyses) in a Data Analysis (or Statistical Analysis) subsection of the Methods or Results section (Giordano et al., 2021; Leedy & Ormrod, 2019). The placement of the Data Analysis section in either the Methods or Results section can vary between publication outlets; however, this section tends to include commentary on variables, statistical models and analyses, and statistical assumption checking procedures.
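As a hedged illustration of what assumption checking might look like, the brief Python sketch below screens scale scores from two groups for normality (Shapiro-Wilk test) and homogeneity of variance (Levene’s test) before a between-groups comparison. The group labels, sample sizes, and simulated scores are assumptions for demonstration only; the specific checks a researcher reports should correspond to the statistical analyses they select.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical scale scores for two comparison groups
group_a = rng.normal(loc=50, scale=10, size=120)
group_b = rng.normal(loc=53, scale=12, size=115)

# Normality within each group (Shapiro-Wilk test)
for label, scores in {"Group A": group_a, "Group B": group_b}.items():
    w, p = stats.shapiro(scores)
    print(f"{label}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance across groups (Levene's test)
statistic, p = stats.levene(group_a, group_b)
print(f"Levene's test: statistic = {statistic:.3f}, p = {p:.3f}")
```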
