
Question


I have designed a new intelligence test and want to make sure that it is a good test. So, I
have a group of students take the test and then I look to see if all of the individual items
are highly correlated. I am testing for:
a. construct validity
b. internal consistency
c. content validity
d. interjudge reliability
e. criterion validity

Solution

AdalynnElite · Tutor for 8 years

Answer

The correct answer is **b. internal consistency**.

Here's why:

* **Internal consistency** refers to the degree to which different items on a test measure the same underlying construct. High correlations among individual test items indicate that they are likely measuring the same thing, thus demonstrating good internal consistency. This is often measured using Cronbach's alpha.

Here's why the other options are incorrect:

* **Construct validity:** This refers to whether the test actually measures the theoretical construct it is designed to measure. Simply having items correlate with each other doesn't prove they measure the intended construct.

* **Content validity:** This assesses whether the test items adequately cover the full range of the construct being measured. High correlation between items doesn't necessarily mean the test covers all aspects of the construct.

* **Interjudge reliability:** This refers to the consistency of scores when different raters or judges administer and score the test. This concept isn't relevant when just looking at item correlations within a single test administration.

* **Criterion validity:** This examines how well the test scores predict an external criterion or outcome. This type of validity requires comparing test scores to an independent measure of the construct.
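Since the answer mentions Cronbach's alpha, here is a minimal sketch of how it can be computed from an item-score matrix. The formula is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. The function name and the toy data below are hypothetical, for illustration only:

```python
def cronbach_alpha(scores):
    """Estimate internal consistency from item scores.

    scores: list of rows, one row per student, one column per test item.
    """
    k = len(scores[0])  # number of items

    def var(xs):
        # Sample variance (dividing by n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)


# Toy data: 5 students, 4 items each scored 0-3 (made up for illustration)
data = [
    [3, 3, 2, 3],
    [2, 2, 2, 1],
    [1, 1, 0, 1],
    [3, 2, 3, 3],
    [0, 1, 1, 0],
]
print(round(cronbach_alpha(data), 3))
```

Values near 1 indicate that the items rise and fall together across students, i.e. they appear to measure the same thing; values near 0 suggest the items are largely unrelated.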