My editor’s gonna kill me: we just finalized the manuscript of my new book, BEYOND IQ, in which I interviewed top intelligence researchers about how to train skills like wisdom and willpower and problem solving that matter in the real world…and then I ran across a study like this one, just published in the journal Intelligence. Basically, it says that IQ tests are great: they do a darn good job of predicting which students will succeed in school. But the article asks how much these IQ-like academic subjects – and the tests that measure them – continue to matter in the real world. As an addition or an alternative to traditional intelligence tests, the article evaluates measures of complex problem solving.
This is cool: instead of measuring IQ with questions about trains leaving stations, vocabulary and block-stacking patterns, the researchers propose “computer-based problem-solving scenarios called microworlds” to assess higher-order thinking skills. Designing these microworlds requires looking inside what makes up higher-order thinking – and, in fact, what goes on inside complex problem solving itself.
See, questions on IQ tests have answers: donkey and horse are to mule as lion and tiger are to liger. (Okay, I kind of love the liger, but that’s beside the point.) But in the real world, problems can consist of many interconnected variables whose relationships are obscured and change over time, and goals can be unclear or even competing and contradictory – factors the authors of the Intelligence study cite as central to complex problem solving.
But pulling these ideas apart – traditional definitions of intelligence on one hand, complex problem solving on the other – is tricky. As you’d expect, there’s significant overlap: people who are intelligent are generally good at complex problem solving (CPS) and vice versa. But in what ways do these skills diverge? How is the intelligence measured by IQ tests and trained in school different from the complex problem-solving skills needed in the real world?
To answer that question, researchers stuck 563 Luxembourgian high-schoolers in a genetics lab – or, at least, in front of a free, downloadable microworld called Genetics Lab, which you should definitely check out if you have time. In GL, test-takers turn “genes” on and off and then click “next day” to discover how the combinations affect a fictional creature’s traits, like IQ, Inventions and Ideas. It turns out to be a bit of a mish-mash, and sorting out which genes create which abilities is a strange and complex soup.
Like many problems in the real world, GL requires not only manipulating information on the way to a goal, but also sallying forth into the microworld to discover that information in the first place and using it to build a mental model of how the system works. So Genetics Lab is scored three ways: Do students manipulate Genetics Lab in a way that creates unambiguous rules? Can they understand the relationships between genes and characteristics? And only finally, can they manipulate genes to get target characteristics?
Students were studious. Researchers researched. And then the researchers compared students’ three GL scores to their IQs, grades and a whole bunch of other measures. What they found is cool, but only when you look at it with a bit of nuance. See, they found that Genetics Lab measures the same thing as the reasoning section of the standard IQ test. D’oh!
But, “Although they might not measure something different from reasoning scales, they measure it differently,” the authors write. And by measuring this reasoning or problem solving or whatever you want to call it differently, the researchers were able to see not only the endpoints but the waypoints that led there – they could measure the problem-solving process and not simply its product, and so answer not only the question of who is and isn’t good at reasoning/problem solving, but what got them there.
Gathering information, making a mental model of how that information drives results, and finally putting the model to work to get the results you want: that’s the process of complex problem solving. At this point, it ain’t yet been pulled apart as a skill distinct from IQ-like reasoning. But a peek inside this process shows how results happen – and as a parent or teacher, knowing this process may allow you to help your kids learn this complex and important skill.
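For the programming-inclined, the gather–model–apply loop above can be sketched as a toy simulation. Everything here is hypothetical – the gene names, effect numbers and scoring are made up for illustration, not taken from the actual Genetics Lab – but the three steps mirror the three ways the microworld is scored: probe the system with unambiguous experiments, build a model of gene–trait relationships, then use that model to hit a target.

```python
import itertools

# Hypothetical hidden rules of a toy microworld (NOT the real Genetics Lab):
# each gene, when switched on, shifts the creature's traits by a fixed amount.
EFFECTS = {
    "g1": {"IQ": 2, "Ideas": 0},
    "g2": {"IQ": 0, "Ideas": 3},
    "g3": {"IQ": 1, "Ideas": 1},
}
TRAITS = ["IQ", "Ideas"]

def next_day(genes_on):
    """Simulate clicking 'next day': trait changes from the active genes."""
    return {t: sum(EFFECTS[g][t] for g in genes_on) for t in TRAITS}

# Step 1 – gather information: vary one gene at a time, so each trial
# produces an unambiguous rule about what that gene does.
model = {g: next_day({g}) for g in EFFECTS}

# Step 2 – the learned mental model now captures the gene-trait relationships.
assert model == EFFECTS

# Step 3 – apply the model: pick the gene combination closest to a target.
target = {"IQ": 3, "Ideas": 1}
best = min(
    (set(combo)
     for r in range(len(EFFECTS) + 1)
     for combo in itertools.combinations(EFFECTS, r)),
    key=lambda gs: sum(abs(target[t] - sum(model[g][t] for g in gs))
                       for t in TRAITS),
)
print(sorted(best))  # → ['g1', 'g3']: together they give IQ +3, Ideas +1
```

The vary-one-thing-at-a-time probing in step 1 is exactly the kind of strategy the first GL score rewards: a student who flips several genes at once gets ambiguous evidence and a muddier model to work from in step 3.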