
NAEP Technical Documentation: NAEP Scoring

         


Two types of cognitive items are scored for NAEP: selected-response items and constructed-response items. Selected-response item responses are captured by high-speed scanners during student booklet processing for paper-based assessments and processed electronically for digitally based assessments. Because selected-response items have a finite number of possible responses, each captured response can be mapped to the appropriate score level in the scoring rubric, so most of these items are scored algorithmically. Short constructed-response items (typically those with two or three valid score points) and extended constructed-response items (typically those with four or more valid score points) are scored by trained scoring personnel. Unless otherwise noted, the term "scoring" in this section refers to constructed-response items.
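To make the mapping concrete, the sketch below shows one way algorithmic scoring of a selected-response item can work: each captured response option is looked up in an answer key and mapped to a rubric score level. This is an illustrative sketch only; the item identifiers, keys, and handling of unscorable responses are hypothetical, not NAEP's production logic.

```python
# Hypothetical sketch: mapping captured selected-response answers to rubric
# score levels via an answer key. Item IDs and keys are invented examples.

ANSWER_KEYS = {
    # item_id -> {captured response option: rubric score level}
    "M012345": {"A": 0, "B": 1, "C": 0, "D": 0},
    "R067890": {"A": 0, "B": 0, "C": 1, "D": 0},
}

def score_selected_response(item_id: str, response: str) -> int | None:
    """Return the rubric score level for a captured response.

    Returns None for unscorable captures (e.g., a blank or an option not in
    the key), which would be routed for separate handling.
    """
    key = ANSWER_KEYS.get(item_id)
    if key is None or response not in key:
        return None
    return key[response]

print(score_selected_response("M012345", "B"))  # 1 (correct option)
print(score_selected_response("M012345", ""))   # None (blank capture)
```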

Scoring a large number of short and extended constructed-response items with a high level of accuracy and reliability within a limited time frame is essential to the success of NAEP. To ensure reliable and efficient scoring of constructed-response items, NAEP does the following:

  • develops focused and explicit scoring guides that match the criteria delineated in the assessment frameworks;

  • recruits qualified and experienced scorers, trains them, and verifies their ability to score particular questions through qualifying tests;

  • employs an image-processing and scoring system that routes images of student responses directly to the scorers so they can focus on scoring rather than paper routing;

  • monitors scorer consistency through ongoing reliability checks, including second scoring;

  • assesses the quality of scorer decision-making through frequent monitoring by NAEP assessment experts; and

  • documents all training, scoring, and quality control procedures in the technical reports.

The table below presents a general overview of recent NAEP scoring activities.

Processing and scoring totals, national and state assessments, by subject area: Various years, 2000–2022

Year | Subject area | Grade | Number of booklets scored | Number of constructed responses scored | Number of individual constructed-response items | Number of team leaders | Number of scorers
2022 | Mathematics | 4, 8 | 235,335 | 2,062,555 | 282 | 20 | 128
2022 | Reading | 4, 8 | 221,191 | 1,495,327 | 93 | 54 | 231
2022 | Civics | 8 | 7,881 | 81,374 | 51 | 10 | 55
2022 | U.S. history | 8 | 8,081 | 85,506 | 65 | 10 | 56
2019 | Mathematics | 4, 8, 12 | 385,380 | 3,386,230 | 333 | 20 | 156
2019 | Reading | 4, 8, 12 | 308,208 | 2,106,696 | 161 | 35 | 327
2019 | Science | 4, 8, 12 | 58,367 | 556,914 | 258 | 25 | 152
2018 | Civics | 8 | 6,339 | 64,497 | 51 | 5 | 37
2018 | Geography | 8 | 6,415 | 67,725 | 50 | 5 | 37
2018 | U.S. history | 8 | 8,283 | 88,012 | 65 | 5 | 38
2018 | Technology and engineering literacy (TEL) | 8 | 15,369 | 152,614 | 81 | 5 | 52
2017 | Mathematics | 4, 8 | 294,867 | 2,919,388 | 144 | 14 | 141
2017 | Reading | 4, 8 | 290,842 | 2,037,048 | 85 | 14 | 152
2017 | Writing | 4, 8 | 47,112 | 95,412 | 44 | 11 | 112
2016 | Arts | 8 | 8,767 | 201,249 | 92 | 9 | 79
2015 | Mathematics | 4, 8, 12 | 290,971 | 3,013,937 | 259 | 17 | 125
2015 | Reading | 4, 8, 12 | 311,564 | 2,174,460 | 137 | 26 | 235
2015 | Science | 4, 8, 12 | 237,571 | 2,543,244 | 251 | 26 | 211
2014 | Civics | 8 | 9,125 | 73,082 | 50 | 3 | 21
2014 | Geography | 8 | 9,006 | 85,606 | 58 | 4 | 21
2014 | U.S. history | 8 | 11,279 | 108,552 | 68 | 4 | 35
2014 | Technology and engineering literacy (TEL) | 8 | 21,579 | 269,867 | 98 | 6 | 44
2013 | Mathematics | 4, 8, 12 | 386,064 | 3,977,285 | 333 | 16 | 182
2013 | Reading | 4, 8, 12 | 384,272 | 2,782,991 | 136 | 27 | 304
2012 | Economics | 12 | 10,950 | 75,229 | 34 | 3 | 19
2011 | Mathematics | 4, 8 | 388,638 | 3,786,422 | 172 | 19 | 151
2011 | Reading | 4, 8 | 382,205 | 2,819,950 | 90 | 23 | 256
2011 | Science | 8 | 122,409 | 1,544,669 | 96 | 17 | 178
2011 | Writing | 8, 12 | 52,452 | 104,958 | 44 | 17 | 183
2010 | Civics | 4, 8, 12 | 26,771 | 261,989 | 119 | 23 | 153
2010 | Geography | 4, 8, 12 | 26,608 | 366,543 | 172 | 23 | 153
2010 | U.S. history | 4, 8, 12 | 30,987 | 387,625 | 167 | 23 | 153
2009 | Mathematics | 4, 8, 12 | 380,042 | 4,293,561 | 298 | 16 | 175
2009 | Reading | 4, 8, 12 | 392,196 | 3,709,299 | 311 | 30 | 336
2009 | Science | 4, 8, 12 | 331,967 | 4,592,470 | 412 | 45 | 430
2008 | Arts | 8 | 7,865 | 181,854 | 92 | 6 | 57
2007 | Mathematics | 4, 8 | 422,200 | 3,912,835 | 435 | 38 | 187
2007 | Reading | 4, 8 | 457,800 | 3,623,126 | 346 | 51 | 362
2007 | Writing | 8, 12 | 205,500 | 729,940 | 40 | 50 | 328
2006 | U.S. history | 4, 8, 12 | 38,400 | 458,172 | 132 | 21 | 65
2006 | Civics | 4, 8, 12 | 33,200 | 282,977 | 84 | 20 | 65
2006 | Economics | 12 | 17,600 | 128,735 | 32 | 8 | 30
2005 | Mathematics | 4, 8, 12 | 354,500 | 4,435,831 | 414 | 26 | 267
2005 | Reading | 4, 8, 12 | 340,200 | 3,773,691 | 226 | 36 | 363
2005 | Science | 4, 8, 12 | 349,100 | 4,424,511 | 539 | 39 | 393
2003 | Mathematics | 4, 8 | 349,600 | 4,719,464 | 135 | 33 | 418
2003 | Reading | 4, 8 | 350,700 | 3,913,147 | 136 | 32 | 397
2002 | Reading | 4, 8, 12 | 308,500 | 4,023,861 | 150 | 33 | 330
2002 | Writing | 4, 8, 12 | 285,900 | 608,269 | 60 | 29 | 270
2001 | Geography | 4, 8, 12 | 27,500 | 381,477 | 57 | 9 | 81
2001 | U.S. history | 4, 8, 12 | 32,700 | 399,182 | 47 | 9 | 81
2000 | Mathematics | 4, 8, 12 | 253,900 | 3,856,211 | 199 | 16 | 177
2000 | Reading | 4 | 8,500 | 123,100 | 46 | 14 | 702
2000 | Science | 4, 8, 12 | 240,900 | 4,398,021 | 295 | 20 | 155

NOTE: Number of constructed-response items includes bilingual items. The 2014 TEL assessment and the 2011 writing assessment were computer-based; for TEL and 2011 writing, "Number of booklets scored" denotes the number of digital test forms scored. The 2017 assessments were digitally based. Rows covering multiple grades include combined data for all grades listed. The term "team leaders" refers to the number of scoring supervisors.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 2000–2022 Assessments.

 

The table below presents a general overview of recent NAEP long-term trend scoring activities.

Processing and scoring totals, long-term trend assessments, by subject area: Various years, 2004–2023

Year | Subject area | Age | Number of booklets scored | Number of constructed responses scored | Number of individual constructed-response items
2023 | Mathematics long-term trend | 13 | 8,712 | 154,061 | 75
2023 | Reading long-term trend | 13 | 8,736 | 17,508 | 6
2022 | Mathematics long-term trend | 9 | 7,479 | 135,211 | 72
2022 | Reading long-term trend | 9 | 7,432 | 7,447 | 3
2020 | Mathematics long-term trend | 9, 13 | 17,363 | 309,110 | 147
2020 | Reading long-term trend | 9, 13 | 17,328 | 26,271 | 9
2012 | Mathematics long-term trend | 9, 13, 17 | 26,210 | 422,192 | 181
2012 | Reading long-term trend | 9, 13, 17 | 26,352 | 47,241 | 19
2008 | Mathematics long-term trend | 9, 13, 17 | 28,465 | 452,994 | 179
2008 | Reading long-term trend | 9, 13, 17 | 26,621 | 51,743 | 19
2004 | Mathematics long-term trend | 9, 13, 17 | 40,300 | 1,082,923 | 219
2004 | Reading long-term trend | 9, 13, 17 | 41,200 | 131,496 | 34

NOTE: Number of constructed responses scored includes the second scores assigned during reliability rescoring. Numbers of team leaders and scorers are not included because long-term trend scoring occurs in multiple sessions throughout the year. Rows covering multiple ages include combined data for all ages listed.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 2004–2023 Mathematics and Reading Long-Term Trend Assessments.

 

As new NAEP items are created, tested, and improved, test development staff create scoring guides that use actual student responses, captured by the materials-processing staff, as specific examples. The scoring and test development staffs create training materials that match the assessment framework criteria. Continuous documentation ensures that, in future assessments, the scoring staff will train on and score each item in the same way it was originally scored. This consistency of scoring procedures allows NAEP to report on trends in student performance over time.

NAEP Scoring Staff

Scorers score student responses. Scoring supervisors provide logistical support to the trainers and help monitor team activities. Trainers are responsible for training both scorers and supervisors on specific content and for ensuring that team scoring performance meets expectations. Content leads for each subject area (reading, science, etc.) oversee the trainers and provide support as needed.

Scorers must have a minimum of a baccalaureate degree from a four-year college or university. An advanced degree, scoring experience, and/or teaching experience are preferred. In some subject areas, scorers must complete a placement test, which is used to identify scorers with appropriate content knowledge. Scoring teams are trained so that each student response is scored consistently. Following training, for all extended constructed-response items and for some short constructed-response items with particularly complex scoring guides, each scorer is given a pre-scored qualification set of student responses to score. Qualification standards for each item vary according to the number of score levels for the item. Individual scorer results are retained for all qualification sets.
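As an illustration of how a qualification set might be evaluated, the sketch below computes a scorer's exact-agreement rate against the pre-assigned scores and compares it with a qualifying threshold. The threshold values are assumptions for illustration; the source states only that standards vary with the number of score levels.

```python
# Hypothetical sketch of qualification-set evaluation. The thresholds are
# invented for illustration; actual NAEP standards vary by item.

def exact_agreement(scorer: list[int], key: list[int]) -> float:
    """Percent of responses on which the scorer matched the pre-assigned score."""
    matches = sum(s == k for s, k in zip(scorer, key, strict=True))
    return 100.0 * matches / len(key)

# Assumed rule: items with fewer score levels demand higher agreement.
QUALIFYING_THRESHOLD = {2: 90.0, 3: 85.0, 4: 80.0, 5: 75.0}  # hypothetical

def qualifies(scorer_scores: list[int], key_scores: list[int],
              score_levels: int) -> bool:
    return exact_agreement(scorer_scores, key_scores) >= QUALIFYING_THRESHOLD[score_levels]

key_scores    = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]  # pre-scored qualification set
scorer_scores = [0, 1, 2, 1, 0, 1, 1, 1, 0, 2]  # one disagreement
print(exact_agreement(scorer_scores, key_scores))            # 90.0
print(qualifies(scorer_scores, key_scores, score_levels=3))  # True
```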

Scoring supervisors and trainers are selected based upon many factors including their previous experience, educational and professional backgrounds, demonstration of a strong understanding of the scoring criteria, and strong interpersonal communication skills and organizational abilities.

NAEP scoring teams usually consist of 10-12 scorers who are led by a scoring supervisor and a trainer. Prior to the scoring effort, all personnel are intensively trained. The trainers who train the scorers, the supervisors who oversee a group of scorers, and the scorers themselves are all given both general scoring training and item-specific content training.

NAEP Scoring System

Using the latest technology and secure network communications, the NAEP electronic scoring system both transmits images of student responses to the trained scorers and receives back the scores assigned by them. Student responses from paper booklets are scanned from the original test booklets; the actual test booklets can be accessed and referenced if needed. Student responses from digitally based assessments are processed electronically and presented to scorers in the same system. The scorer sees each student response in isolation on a computer screen and assigns a score. The scorer cannot access any other responses from the student for a particular item or from other items the student answered. As each response is scored, another student response is shown for scoring, until all responses for an item have been scored.

During scoring, the NAEP electronic scoring system provides documentation of numerous scoring metrics. Reports on item and scoring performance can be retrieved as needed. In addition, custom reports of daily activities are sent out nightly to development, scoring, and analysis staff to monitor NAEP scoring quality and progress.

All assessments are scored item by item so that scorers train on one item and one scoring guide at a time. This method is efficient only with electronic presentation of student responses.

NAEP Scoring Procedures

During the scoring of a particular item, a percentage of scored responses is randomly recirculated by the system to be rescored by a second scorer in order to check the consistency of current-year scoring. The reliability sample sizes are about 1,000 (5% of 20,000) for large assessments (national/state) and about 500 (25% of 2,000) for smaller assessments (e.g., national-only). This comparison of first and second scores yields the within-year interrater agreement.
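The sampling arithmetic and the resulting agreement statistic can be sketched as follows. The selection function is a simplified stand-in for the scoring system's actual recirculation mechanism; only the sampling rates come from the figures above.

```python
import random

# Simplified sketch of within-year reliability checking: draw the second-
# scoring sample, then compute percent exact agreement between first and
# second scores. Rates mirror the figures cited above.

def reliability_sample(response_ids: list[str], large_assessment: bool,
                       seed: int = 0) -> list[str]:
    """Randomly choose responses to recirculate to a second scorer."""
    rate = 0.05 if large_assessment else 0.25
    n = round(len(response_ids) * rate)
    return random.Random(seed).sample(response_ids, n)

def within_year_agreement(first: dict[str, int], second: dict[str, int]) -> float:
    """Percent exact agreement over the doubly scored responses."""
    matches = sum(first[rid] == second[rid] for rid in second)
    return 100.0 * matches / len(second)

ids = [f"resp{i}" for i in range(20_000)]
sample = reliability_sample(ids, large_assessment=True)
print(len(sample))  # 1000, i.e., 5% of 20,000

first  = {rid: 1 for rid in sample}
second = {rid: (1 if i < 950 else 0) for i, rid in enumerate(sample)}
print(within_year_agreement(first, second))  # 95.0
```

The same percent-agreement computation applies to trend scoring, described next, where the comparison score comes from a prior assessment year rather than from a current second scorer.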

In addition, NAEP trend scoring is used to check the consistency of scoring over time (i.e., cross-year interrater agreement). During trend scoring, the NAEP electronic scoring system presents a pool of scored responses from a prior assessment to current scorers. Comparing the current scores with the scores assigned in the prior assessment yields reports that evaluate scoring consistency over time for a specific NAEP item.

Backreading of current-year responses ensures frequent monitoring of scorer decision-making by supervisory staff. In backreading, the supervisor reviews responses already scored by each scorer (with the assigned scores visible) to ensure that each scorer is applying the scoring guide correctly. About 5 percent of each scorer's output is monitored through backreading.
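A minimal sketch of drawing a backreading queue appears below, selecting about 5 percent of each scorer's completed work for supervisor review. The record layout is hypothetical; only the 5 percent rate comes from the text.

```python
import random
from collections import defaultdict

# Hypothetical sketch: sample roughly 5 percent of each scorer's completed
# responses for supervisor review during backreading.

def backreading_queue(scored: list[tuple[str, str, int]],
                      rate: float = 0.05, seed: int = 0) -> list[tuple[str, str, int]]:
    """scored holds (response_id, scorer_id, assigned_score) records."""
    by_scorer: dict[str, list[tuple[str, str, int]]] = defaultdict(list)
    for record in scored:
        by_scorer[record[1]].append(record)
    rng = random.Random(seed)
    queue: list[tuple[str, str, int]] = []
    for records in by_scorer.values():
        n = max(1, round(len(records) * rate))  # review at least one response
        queue.extend(rng.sample(records, n))
    return queue

scored = [(f"r{i}", f"scorer{i % 3}", i % 4) for i in range(300)]
print(len(backreading_queue(scored)))  # 15 (5 per scorer, 100 responses each)
```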

During training and scoring, any changes to existing documentation are captured by scoring staff, shared across scoring teams, and documented in a record of the scoring history of the NAEP item. This record is reviewed prior to the next scoring effort.


Last updated 17 September 2024 (SK)