PIAAC was administered in person by trained interviewers who used an official study laptop. The interviewer read out the study’s background questionnaire (BQ), which in the United States was offered in English or Spanish, and entered the participant’s responses into the secure laptop. Once the BQ was completed, the interviewer gave the laptop to the participant to take the study’s direct assessments. The study assessed adults’ skills in literacy, numeracy, and/or digital problem solving (also called problem solving in technology-rich environments, or PS-TRE). If a participant could not use the laptop to take the computer-based assessment (CBA), the interviewer offered a paper-and-pencil assessment (PBA).
PIAAC was one of the first computer-based assessments to incorporate an adaptive design. Assessments with an adaptive design can measure participants’ abilities more accurately with fewer test items than a traditional test. In adaptive testing, participants are directed to a set of easier or more difficult test questions based on their answers to the information and communication technology (ICT) core and literacy/numeracy core questions (which were automatically scored as correct or incorrect as the participant completed them). PIAAC’s digital problem-solving domain did not use an adaptive design.
Each participant’s assignment to easier or more difficult assessment items was determined by an algorithm that used a set of variables, including the participant’s (1) level of education, (2) status as a native or non-native language speaker, and (3) performance on the CBA core (as well as performance in the CBA modules as the participant advanced through the assessment).
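To make the role of these variables concrete, the sketch below shows in Python how such a routing decision could combine background and performance information. The function name, thresholds, and probabilistic weighting are hypothetical placeholders only and are not the actual PIAAC algorithm, which is described in the study’s technical documentation.

```python
import random

def assign_testlet_difficulty(education_level, native_speaker, core_score,
                              module_score_so_far=None):
    """Illustrative sketch of adaptive routing; NOT the actual PIAAC algorithm.

    Combines background variables with performance so far to decide whether
    the next set of items is drawn from an easier or a harder pool. All
    thresholds and weights are hypothetical placeholders.
    """
    p_harder = 0.5
    # Background variables nudge the probability of a harder testlet.
    p_harder += 0.10 if education_level >= 3 else -0.10   # e.g., postsecondary vs. not
    p_harder += 0.05 if native_speaker else -0.05
    # Performance matters most: the CBA core, then the module so far.
    p_harder += 0.20 if core_score >= 4 else -0.20        # hypothetical cutoff
    if module_score_so_far is not None:
        p_harder += 0.20 if module_score_so_far >= 0.6 else -0.20
    p_harder = min(max(p_harder, 0.0), 1.0)
    return "harder" if random.random() < p_harder else "easier"
```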
More detail on the sample design can be found in the U.S. PIAAC 2012/14/17 Technical Report and on the Frequently Asked Questions page.
The assessment began with the Background Questionnaire (BQ), which was adaptive: the questions the interviewer read to the participant were determined by the answers to previous questions. Because the BQ was automated in this way, participants received only questions relevant to their experience, education, and work history. For example, participants who said they were retired and not working were not asked questions related to their “current employment.”
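A minimal sketch of this kind of skip logic is shown below; the question wording and variable names are invented for illustration and do not correspond to actual BQ items.

```python
def next_bq_questions(answers):
    """Illustrative BQ skip logic: ask only questions relevant to prior answers."""
    questions = []
    if answers.get("employment_status") == "employed":
        questions.append("What is your current occupation?")   # 'current employment' block
    if answers.get("employment_status") in ("retired", "not working"):
        questions.append("In what year did you last work for pay?")
    return questions

# A retired participant is never shown the 'current employment' block.
print(next_bq_questions({"employment_status": "retired"}))
```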
The BQ included questions about the participant’s computer experience. The answers to these questions were used to route the participant to either the paper- or computer-based assessment when the BQ interview was completed. Participants with no computer experience were routed to the paper-based assessment, as were participants who declined to take the assessment on the laptop. The remainder were routed to the computer-based assessment (see figure A).
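In pseudocode terms, this initial routing (see figure A) amounts to a simple two-way branch; the function and argument names below are illustrative only.

```python
def route_after_bq(has_computer_experience, agrees_to_laptop):
    """Illustrative post-BQ routing: paper- vs. computer-based assessment."""
    if not has_computer_experience or not agrees_to_laptop:
        return "PBA"  # no computer experience, or declined to use the laptop
    return "CBA"
```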
Paper-based assessment (PBA): This assessment began with a core of literacy/numeracy items in paper-and-pencil format that took about 10 minutes to complete. Participants who performed at or above a minimum standard on the core were randomly assigned to either a cluster of literacy items or a cluster of numeracy items that took approximately 30 minutes to complete. After completing those, they received an assessment of reading component skills, which took approximately 20 minutes to complete. Participants who performed poorly on the paper literacy-numeracy core proceeded directly to the reading components booklet (see figure A).
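The paper-based branch can be summarized with a short sketch; the passing cutoff below is a hypothetical placeholder, not the actual scoring rule.

```python
import random

def pba_path(core_score, passing_score=2):
    """Illustrative paper-based path; 'passing_score' is a hypothetical cutoff."""
    if core_score < passing_score:
        return ["reading components"]                   # straight to the components booklet
    domain = random.choice(["literacy", "numeracy"])    # ~30-minute item cluster
    return [domain, "reading components"]               # components follow either cluster
```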
Computer-based assessment (CBA): Participants who indicated previous experience with computers in the BQ interview were directed to a core “screener” section composed of two parts: an information and communication technology (ICT) core, which measured basic computer skills such as highlighting text on a screen with the cursor; and a literacy/numeracy core, which measured basic literacy and numeracy skills in an electronic format. Each core section took approximately 5 minutes to complete. Participants who failed the ICT core received the paper-based assessment and took the paper-based literacy/numeracy core items. Participants who passed the ICT core proceeded to the computer-based literacy/numeracy core. However, if they did not pass the computer-based literacy/numeracy core, they were routed directly to the reading components section of the paper-based assessment.
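Put as a sketch, the computer-based core routing described above works out to the branching below; pass/fail flags stand in for the actual scoring rules.

```python
def cba_core_path(passed_ict_core, passed_litnum_core):
    """Illustrative CBA core 'screener' routing (see figure A)."""
    if not passed_ict_core:
        return "paper-based assessment, starting with the paper literacy/numeracy core"
    if not passed_litnum_core:
        return "reading components section of the paper-based assessment"
    return "CBA Module 1"
```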
Participants who performed well on both parts of the computer-based core section were randomly routed to the computer-based literacy, computer-based numeracy, or digital problem-solving domain. The computer-based assessment (CBA) consisted of Module 1 and Module 2, each a set of literacy, numeracy, or problem-solving units. Participants who received literacy or numeracy in CBA Module 1 did not repeat the same domain; instead, they received one of the other two domains in CBA Module 2. Participants who received digital problem solving in CBA Module 1 had a 50 percent chance of receiving a second set of problem-solving items and a 50 percent chance of receiving literacy or numeracy items in CBA Module 2.
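The module assignment rules can likewise be sketched as below; the uniform random choices are an illustrative simplification of the actual sampling design.

```python
import random

DOMAINS = ("literacy", "numeracy", "problem solving")

def assign_cba_modules():
    """Illustrative Module 1/Module 2 assignment following the rules above."""
    module_1 = random.choice(DOMAINS)
    if module_1 == "problem solving":
        # 50 percent chance of a second problem-solving set,
        # 50 percent chance of literacy or numeracy instead.
        module_2 = ("problem solving" if random.random() < 0.5
                    else random.choice(["literacy", "numeracy"]))
    else:
        # Literacy or numeracy in Module 1 is never repeated in Module 2.
        module_2 = random.choice([d for d in DOMAINS if d != module_1])
    return module_1, module_2
```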
The diagram below is a simplified version of the assessment workflow, with the paper-based assessment branching to the right and the computer-based assessment branching to the left. Note that within the computer-based assessment, an adaptive design was used for the literacy and numeracy items in Modules 1 and 2.
Figure A. PIAAC Instruments Simplified Workflow