I'll skip the history of the IQ norm, except to note that the name isn't even literally accurate anymore.
Optional for the interested:
The original IQ by Stern was indeed a quotient of intelligence age and nominal age (IA/NA):
the intelligence age depended on the number of problems solved for one's age group and below.
If a kid solved as many problems as the average kid of his age, he was average as well.
If a 7-year-old kid solved the problems only the average 9-year-old was able to solve, then his intelligence age was 9 and his nominal age 7.
IQ = (9/7) x 100
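As a minimal sketch of that formula (the function name is mine, the ages are just the example above):

```python
# Stern's ratio IQ: intelligence age divided by nominal age, times 100.
# Illustrative sketch only; ages taken from the 7-year-old example above.
def ratio_iq(intelligence_age, nominal_age):
    return intelligence_age / nominal_age * 100

print(round(ratio_iq(9, 7), 1))  # 128.6
```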
Anyway, this method was only valid for people still in development; with increasing age, the ratio IQ diminished.
That's why Wechsler introduced the IQ as we know it today, which is nothing but a statistical expression of a value on a standardized scale.
You might know the Gaussian distribution:
mean = 0, standard deviation = 1 (that's the z-norm).
Then you have the IQ norm.
Then you have the C-norm (Centil).
The numbers are different, the meaning is the same: the position in the Gaussian distribution is the same.
What's different is the allowed precision of the norm.
The z-norm is stated with 2 decimal places.
The IQ norm can only be an integer and, in practice, never goes below 55 or above 145.
The C-norm can only assume integer values between 1 and 9.
They are listed here in decreasing precision; which one you use depends on the test:
All you have to do is test a sufficiently large sample of people, sorted by age group, optionally by education level etc.
You calculate the mean of the scores and the standard deviation, then you choose the appropriate norm depending on the type (personality vs. performance test) and the accuracy of the test (which depends on its quality criteria; a very exact test allows for norms with finer gradations).
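The norming step above can be sketched in a few lines. The sample scores here are made up, just to show the calculation:

```python
import statistics

# Hypothetical norming sketch: raw scores from one (tiny, made-up) age-group
# sample; compute mean and standard deviation, then express a new raw score
# as a z-value, i.e. its position in the distribution.
sample_scores = [98, 105, 110, 92, 120, 101, 115, 99, 108, 112]

mean = statistics.mean(sample_scores)
sd = statistics.stdev(sample_scores)  # sample standard deviation

def z_score(raw, mean, sd):
    return (raw - mean) / sd

print(round(z_score(116, mean, sd), 2))  # 1.16
```

A real norming study would of course use a far larger sample per age group.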
Choosing the wrong norm can make the test suggest a precision in the outcome which isn't there, or the opposite: it can be overly coarse when there are more possible and valid gradations of score.
Without going further into the test creation process, which I still have to learn in detail, I'll just say that fully grasping intelligence as a thing is impossible. You can get close to a real value, but in the end you can score high in an IQ test and still fail in your career, although that is then less likely to happen.
A work sample is still the best method of predicting success in a job, followed by a group discussion.
So, whenever you refer to an IQ, refer also to the test you got that value from.
I might as well calculate an "IQ" from your income, which is also a good way to tell how successful you are, should income be your main goal in your job.
Personally, I like this way of displaying intelligence:
[image omitted]
It doesn't display the connections between the areas.
As it is now, it looks as if all the areas were independent from each other, which they surely aren't.
There's still a lot to find out about that. For sure, being smart at one thing doesn't exclude being smart at others the way it does for savants with their 'island talents'.
Regarding the transformation of norms into each other:
You can describe any value in a distribution with the formula
X = MA + z * SD
The standardized element in this is the z, because it's always the position in a Gaussian distribution, the most formal illustration of the values a population can assume (speaking of 'population' because we're in the social sciences).
So, what about MA and SD?
Simple: every norm has a defined MA and SD. Once you have chosen your norm, you insert its MA and SD into the formula and you obtain the position of the tested person in the distribution.
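The whole transformation can be sketched like this. The (MA, SD) pairs just encode the values mentioned in this blog (z: 0/1, IQ: 100/15, C: 5/2); the function names are mine:

```python
# Norm conversion via the common formula X = MA + z * SD.
# Sketch only; NORMS maps each norm to its (mean, standard deviation).
NORMS = {"z": (0, 1), "IQ": (100, 15), "C": (5, 2)}

def to_z(value, norm):
    ma, sd = NORMS[norm]
    return (value - ma) / sd  # invert X = MA + z * SD

def from_z(z, norm):
    ma, sd = NORMS[norm]
    return ma + z * sd  # X = MA + z * SD

def convert(value, src, dst):
    return from_z(to_z(value, src), dst)

print(convert(7, "C", "IQ"))  # 115.0
```

Going through z as the intermediate step is exactly the procedure described above: any norm value is first turned back into a position in the Gaussian distribution, then re-expressed in the target norm.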
In practice, once you have tested a person, you obtain a raw value for the complete test and values for the respective subtests.
In the test manual you will find tables for different groups, but the most common differentiation is between age groups, because age is the most influential factor for intelligence (this would be a good point to discuss fluid vs. crystallized intelligence à la Cattell, but that's another blog, please).
Anyway, you look up the norm value for your raw value in such a table.
Ok, let's say I scored a raw value of 116 points in an intelligence test.
I'm 23 years old, so I look up the value in the table for the age range 21-30.
It says that my score corresponds to a C-norm value of 7.
Since the C-scale has MA = 5 and SD = 2, one would be hard pressed to say whether that's average or above average (5 ± 2 being the average range).
Before going on, let's go back to the formula from before.
Since this was an intelligence test, people want to hear an IQ, so we're going to transform the C-norm value into that.
X being the value I just obtained from testing:
X = MA + z * SD
7 = 5 + z * 2
z = 1 (remember the Gaussian distribution? It has SD 1)
Now back to IQ:
X = 100 + z * 15 = 100 + 1 * 15 = 115
The IQ I obtained from testing is 115.
Now, is this average or above average? Same question for the C-norm.
The C-norm value is above average.
The IQ value is average.
Weird, isn't it? How's that?
If the C-norm value of 7 weren't above average (= significant), the next value to be significant would be 8, but that would correspond to
8 = 5 + z * 2
z = 1.5
IQ = 100 + 1.5 * 15 = 122.5
which is way too high when converted into IQ (and as a z-value as well).
Thus you have to treat 3 and 7 of the C-norm as significant already.
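The boundary argument can be checked numerically with the same formula as above (C-norm: MA = 5, SD = 2; IQ norm: MA = 100, SD = 15; function names are mine):

```python
# Convert C-norm values to z and IQ via X = MA + z * SD.
def c_to_z(c):
    return (c - 5) / 2

def z_to_iq(z):
    return 100 + z * 15

print(c_to_z(7), z_to_iq(c_to_z(7)))  # 1.0 115.0
print(c_to_z(8), z_to_iq(c_to_z(8)))  # 1.5 122.5
```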
tl;dr conclusion:
IQ is not the same as intelligence.
IQ is one measuring tool among several others.
Saying you have an IQ of x is meaningless if you don't look at the underlying test you obtained it with.
Intelligence is measured with many methods and is still a controversial subject. Expect a separate blog about it; I don't have the books at hand right now.