Computer-based assessments

Our ongoing review of Common Core sample tasks from the SBAC and PARCC consortia and the State of Illinois, as well as questions New York actually posed to students this year during the interim period before the consortia assessments take over, has exposed issues with standards alignment, poor wording, incorrect mathematics, and odd interfaces. But no issue stands out more than this: none of the SBAC or PARCC extended tasks yet takes advantage of technology’s capabilities in a way that justifies the transition to computer-based assessments.

Jason Becker, on his blog, characterized it this way:
[The SBAC tasks] represent the worst of computerized assessment. Rather than demonstrating more authentic and complex tasks, they present convoluted scenarios and even more convoluted input methods. Rather than present multimedia in a way that is authentic to the tasks, we see heavy language describing how to input what amounts to multiple choice or fill-in the blank answers. What I see here is not worth the investment in time and equipment that states are being asked to make, and it is hardly a ‘next generation’ set of items that will allow us to attain more accurate measures of achievement.
Computer-based, mouse-and-keyboard assessments face other obstacles. Shifting from handwritten answers to typed answers to facilitate computer scoring isn’t a sufficient justification, especially when primary school students must now struggle to learn and physically manage what was once a skill taught in junior high school: touch typing on a keyboard designed 140 years ago. To some children, computers may simply be less familiar, creating a bias against “low-income students who text with ease in an age of ubiquitous wireless phones but generally have less exposure at home than their middle-class peers to computers.”

Computer-based assessments must have seemed cutting-edge to the old fogeys who drafted Common Core, but to many youngsters growing up with smartphones and tablets, computers are relics of their parents’ era.  We don’t mean to sound like Luddites—quite the opposite—but as any high-tech company will tell you, and judges will confirm, laws come nowhere close to keeping up with rapid advancements in technology.

It is perhaps easier to demonstrate how to do it right.  The NAEP is developing a Technology and Engineering Literacy assessment to parallel the current ELA and mathematics assessments.  Here is an example of how multimedia and hyperlinking can be used to pose a scenario that follows a flowchart, not a linear path, in which the person being assessed must make, change, and sometimes reverse previous decisions based on new information. (You may have to install a browser plug-in, but it’s worth it to see this.)

When assessments do become worthy of being technology-based, they should be neither device-dependent nor exclusionary.  Whether it’s touch screens instead of a mouse, voice recognition instead of typing, or something new down the pike, assessments should advance in parallel with technology without being tied to it.

Raising the bar shouldn’t come to mean imposing obstacles.


  1. This is what I have seen also... these computer tests are nothing more than glorified bubble tests... definitely not worth the millions of dollars needed to get a district up and running for them.