Each response to another must be **at least 175 words** in length.

If asked to post your opinion or reaction/reflection as part of an initial discussion post, you must still incorporate reference support. While opinions and reactions are personal and subjective in nature, they need references and citations for support. This approach reflects knowledge acquisition as well as application and synthesis of content. In other words, always support your work with resources. **This also applies to article reviews and/or reflection exercises; you still must have at least two references, one of which will always be your textbook.**

(Cohen, R. J., & Swerdlik, M. E. (2018). *Psychological testing and assessment: An introduction to tests and measurement* (9th ed.). New York, NY: McGraw-Hill.)

(Peer response)

Cohen & Swerdlik (2018) referred to a number of reliability coefficients, such as the internal consistency reliability coefficient, the alternate forms reliability coefficient, and the test-retest reliability coefficient. These are used to estimate the reliability of tests and measures.

**Describe what these scores mean**

Internal consistency reliability coefficient = .92

According to Cohen & Swerdlik (2018), an internal consistency reliability coefficient is used “to evaluate the extent to which items on a scale relate to one another”. A coefficient of .92 indicates a strong relationship among the items on the new test, THING. This shows that the test is reliable and can be accepted on the basis of this strong level of reliability.

Alternate forms reliability coefficient = .82

Cohen & Swerdlik (2018) outlined that the alternate forms reliability coefficient is used “to evaluate the relationship between different forms of a measure”. A coefficient of .82 indicates a strong relationship between the two forms of the test, THING 1 and THING 2; hence there is good reliability that makes them acceptable.

Test-retest reliability coefficient = .50

Cohen & Swerdlik (2018) outlined that the test-retest reliability coefficient is used “to evaluate the stability of a measure”. A coefficient of .50 indicates only a moderate level of stability, or relationship, between tests THING and THING 1.

**Interpret these results individually in terms of the information they provide on sources of error variance.**

Cohen & Swerdlik (2018) stated that “the various reliability coefficients do not all reflect the same sources of error variance. Thus, an individual reliability coefficient may provide an index of error from test construction, test administration, or test scoring and interpretation. A coefficient of inter-rater reliability, for example, provides information about error as a result of test scoring”. From this perspective, based on Cohen & Swerdlik (2018), the internal consistency reliability coefficient of .92 provides an index of error from test construction. The alternate forms reliability coefficient of .82 provides an index of error from test construction and/or administration, whilst the test-retest reliability coefficient of .50 provides an index of error from test administration.

**Synthesize all of these interpretations into a final evaluation about this test’s utility or usefulness.**

Combining all of these reliability coefficients will provide an understanding of the test’s utility. Cohen & Swerdlik (2018) outlined that “the reliability coefficient helps the test developer build an adequate measuring instrument, and it helps the test user select a suitable test. However, the usefulness of the reliability coefficient does not end with test construction and selection. By employing the reliability coefficient in the formula for the standard error of measurement, the test user now has another descriptive statistic relevant to test interpretation, this one useful in estimating the precision of a particular test score”. So, all of these reliability coefficients would have to be worked into the formula for the standard error of measurement.

Cohen & Swerdlik (2018) further added that “the standard error of measurement, often abbreviated as SEM, provides a measure of the precision of an observed test score. Stated another way, it provides an estimate of the amount of error inherent in an observed score or measurement”.
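The role of the reliability coefficient in the SEM formula can be illustrated with a short calculation. The sketch below uses the standard formula SEM = SD × √(1 − r) and a hypothetical standard deviation of 15 (the scenario does not supply one, so that value is an assumption for illustration only):

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r), where r is a reliability coefficient."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical standard deviation of 15 (not given in the scenario).
sd = 15.0
coefficients = [
    ("internal consistency", 0.92),
    ("alternate forms", 0.82),
    ("test-retest", 0.50),
]
for label, r in coefficients:
    sem = standard_error_of_measurement(sd, r)
    print(f"{label}: r = {r:.2f}, SEM = {sem:.2f}")
```

With these assumed values, the .92 coefficient yields the smallest SEM and the .50 coefficient the largest, which is consistent with the inverse relationship between reliability and the SEM described above.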

**Explain whether these data are acceptable.**

Cohen & Swerdlik (2018) pointed out that “the relationship between the SEM and the reliability of a test is inverse; the higher the reliability of a test (or individual subtest within a test), the lower the SEM”. On this basis, two of the reliability coefficients showed a strong relationship: the internal consistency reliability coefficient of .92 and the alternate forms reliability coefficient of .82. The test-retest reliability coefficient of .50 indicates a moderate level of stability. Overall, we can conclude that the reliability of the test was high. Given the inverse relationship described by Cohen & Swerdlik (2018), it follows that the SEM was low. Because the standard error of measurement (SEM) was low, these data are acceptable.

This is supported by the American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014), which noted that “the SEM is an indicator of a lack of consistency in the scores generated by the testing procedure for some population. A relatively large SEM indicates relatively low reliability/precision. The conditional standard error of measurement for a score level is the standard error of measurement at that score level”.

**Explain under what conditions they may not be acceptable and under what conditions, if any, they may be appropriate.**

These data may not be acceptable if the reliability coefficient is low. A low reliability coefficient indicates that the SEM is high, given the inverse relationship between the two. Cohen & Swerdlik (2018) pointed out that “the relationship between the SEM and the reliability of a test is inverse; the higher the reliability of a test (or individual subtest within a test), the lower the SEM”. The American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014) noted that “a relatively large SEM indicates relatively low reliability/precision”. Conversely, a high reliability coefficient indicates that the SEM is low.