4 Assessment Delivery

Chapter 4 of the Dynamic Learning Maps® (DLM®) Alternate Assessment System 2021–2022 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2022) describes general test administration and monitoring procedures. This chapter describes updated procedures and data collected in 2023–2024, including a summary of adaptive delivery, administration incidents, accessibility support selections, test administration observations, and test administrator survey responses regarding user experience and opportunity to learn.

Overall, intended administration features remained consistent with the 2022–2023 implementation, including the availability of instructionally embedded testlets, spring operational administration of testlets, the use of adaptive delivery during the spring window, and the availability of accessibility supports.

For a complete description of test administration for DLM assessments, including information on the Kite® Suite used to assign and deliver assessments, testlet formats, accessibility features, the First Contact Survey used to recommend the testlet linkage level, available administration resources and materials, and procedures for monitoring assessment administration, see the 2021–2022 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2022).

4.1 Overview of Key Features of the Year-End Assessment Model

As briefly described in Chapter 1, the DLM assessment system has two available models. This manual describes the Year-End assessment model. Consistent with the DLM Theory of Action described in Chapter 1, the DLM assessment administration features reflect multidimensional, nonlinear, and diverse ways that students learn and demonstrate their learning. Test administration procedures therefore use multiple sources of information to assign testlets, including student characteristics and prior performance.

In the Year-End model, the DLM system is designed to assess student learning at the end of the year. All testlets are administered in the spring assessment window; however, optional instructionally embedded testlets are available throughout the fall and winter. The instructionally embedded assessments, if administered, do not contribute to summative scoring. This assessment model yields summative results based only on testlets completed during the spring assessment window.

With the exception of English language arts (ELA) writing testlets, each testlet contains items measuring one Essential Element (EE) and one linkage level. In reading and mathematics, items in a testlet are aligned to nodes at one of five linkage levels for a single EE. Writing testlets measure multiple EEs and are delivered at one of two levels: emergent (which corresponds with Initial Precursor and Distal Precursor linkage levels) or conventional (which corresponds with Proximal Precursor, Target, and Successor linkage levels).

For a complete description of key administration features, including information on assessment delivery, the Kite Suite, the Instruction and Assessment Planner, and linkage level assignment, see Chapter 4 of the 2021–2022 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2022). Additional information about changes in administration can also be found in the Test Administration Manual (Dynamic Learning Maps Consortium, 2024d) and the Educator Portal User Guide (Dynamic Learning Maps Consortium, 2024c).

4.1.1 Assessment Administration Windows

Assessments are administered in the spring assessment window for operational reporting. Optional assessments are available during the instructionally embedded assessment window for educators to administer for formative information.

4.1.1.1 Instructionally Embedded Assessment Window

During the instructionally embedded assessment window, testlets are optionally available for test administrators to assign to their students. When choosing to administer the optional testlets during the instructionally embedded assessment window, educators decide which EEs and linkage levels to assess for each student using the Instruction and Assessment Planner in Educator Portal. The assessment delivery system recommends a linkage level for each EE based on the educator’s responses to the student’s First Contact Survey, but educators can choose a different linkage level based on their own professional judgment. In 2023–2024, the instructionally embedded assessment window occurred between September 11, 2023, and February 23, 2024. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the instructionally embedded assessment window ranged from 15 to 24 weeks.

4.1.1.2 Spring Assessment Window

During the spring assessment window, students are assessed on all of the EEs on the assessment blueprint in ELA and mathematics. The linkage level for each EE is determined by the system. In 2023–2024, the spring assessment window occurred between March 11, 2024, and June 7, 2024. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the spring assessment window ranged from 5 to 13 weeks.

4.2 Evidence From the DLM System

This section describes evidence collected by the DLM system during the 2023–2024 operational administration of the DLM alternate assessment. The categories of evidence include adaptive delivery, administration incidents, and accessibility support selections.

4.2.1 Adaptive Delivery

The ELA and mathematics assessments are adaptive between testlets. In spring 2024, the same routing rules were applied as in prior years: the linkage level of the next testlet a student received was based on the student’s performance on the most recently administered testlet, with the goal of matching the linkage level content to the student’s knowledge and skills. The rules were as follows (a sketch of this routing logic appears after the list):

  • The system adapted up one linkage level if the student responded correctly to at least 80% of the items measuring the previously tested EE. If the previous testlet was at the highest linkage level (i.e., Successor), the student remained at that level.
  • The system adapted down one linkage level if the student responded correctly to less than 35% of the items measuring the previously tested EE. If the previous testlet was at the lowest linkage level (i.e., Initial Precursor), the student remained at that level.
  • Testlets remained at the same linkage level if the student responded correctly to at least 35% but less than 80% of the items measuring the previously tested EE.
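
The sketch below is illustrative only and is not the operational Kite Student Portal code; the linkage level ordering comes from this chapter, while the function and variable names are assumptions made for the example.

```python
# Linkage levels ordered from lowest to highest complexity.
LINKAGE_LEVELS = [
    "Initial Precursor",
    "Distal Precursor",
    "Proximal Precursor",
    "Target",
    "Successor",
]


def adapt_linkage_level(current_level: str, percent_correct: float) -> str:
    """Return the linkage level assigned to the student's next testlet.

    Illustrative only: adapt up at 80% or more correct, adapt down below
    35% correct, and otherwise stay at the same level, bounded by the
    lowest (Initial Precursor) and highest (Successor) linkage levels.
    """
    index = LINKAGE_LEVELS.index(current_level)
    if percent_correct >= 80:
        index = min(index + 1, len(LINKAGE_LEVELS) - 1)
    elif percent_correct < 35:
        index = max(index - 1, 0)
    return LINKAGE_LEVELS[index]


# Example: a student answering 4 of 5 items (80%) correctly on a Target
# testlet would receive a Successor testlet next.
print(adapt_linkage_level("Target", 80.0))  # Successor
```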

The linkage level of the first testlet assigned to a student was based on First Contact Survey responses. See Chapter 4 of the 2021–2022 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2022) for more details. Table 4.1 shows the correspondence between the First Contact complexity bands and first assigned linkage levels.

Table 4.1: Correspondence of Complexity Bands and Linkage Levels
First Contact complexity band    Linkage level
Foundational                     Initial Precursor
Band 1                           Distal Precursor
Band 2                           Proximal Precursor
Band 3                           Target
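
In code, the initial assignment in Table 4.1 amounts to a simple lookup from the educator-reported complexity band to a starting linkage level. The sketch below is illustrative; the dictionary name is an assumption.

```python
# Table 4.1 as a lookup: First Contact complexity band -> first linkage level.
FIRST_LINKAGE_LEVEL = {
    "Foundational": "Initial Precursor",
    "Band 1": "Distal Precursor",
    "Band 2": "Proximal Precursor",
    "Band 3": "Target",
}

# Example: a student placed in Band 2 by the First Contact Survey is first
# assigned a Proximal Precursor testlet.
print(FIRST_LINKAGE_LEVEL["Band 2"])  # Proximal Precursor
```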

Following the spring 2024 administration, analyses were conducted to determine the mean percentage of testlets that were adapted by the system from the first to second testlet administered for students within a grade, subject, and complexity band. Table 4.2 and Table 4.3 show the aggregated results for ELA and mathematics, respectively.
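
The sketch below shows one way such an aggregation could be computed. It is illustrative rather than the analysis code used to produce Table 4.2 and Table 4.3, and it assumes a data set with one row per student and subject containing the complexity band and the linkage levels of the first two testlets; all column names are assumptions.

```python
import pandas as pd

# Illustrative records: one row per student and subject, with the student's
# complexity band and the linkage levels (1 = lowest) of the first two testlets.
records = pd.DataFrame({
    "grade": ["Grade 3", "Grade 3", "Grade 4"],
    "subject": ["ELA", "ELA", "Mathematics"],
    "complexity_band": ["Foundational", "Band 2", "Band 1"],
    "first_level": [1, 3, 2],
    "second_level": [2, 3, 1],
})


def direction(row):
    """Classify the adaptation between the first and second testlets."""
    if row["second_level"] > row["first_level"]:
        return "Adapted up"
    if row["second_level"] < row["first_level"]:
        return "Adapted down"
    return "Did not adapt"


records["adaptation"] = records.apply(direction, axis=1)

# Percentage of students in each adaptation category within grade, subject,
# and complexity band (cf. Table 4.2 and Table 4.3).
summary = (
    records
    .groupby(["subject", "grade", "complexity_band"])["adaptation"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(summary)
```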

For the majority of students across all grades who were assigned to the Foundational complexity band by the First Contact Survey, the system did not adapt testlets to a higher linkage level after the first assigned testlet (ranging from 54% to 87% across both subjects). Patterns were less consistent for students assigned to Band 1, Band 2, or Band 3: the distributions across the three adaptation categories (adapted up, did not adapt, adapted down) were more variable across grades and subjects. That is, students assigned to higher complexity bands moved in more varied directions between the first and second testlets. This finding is consistent with prior years. Several factors may help explain these results, including greater variability in the characteristics of students assigned to higher complexity bands and content-based differences across grades and subjects. Further exploration is needed in this area.

Table 4.2: Adaptation of Linkage Levels Between First and Second English Language Arts Testlets (N = 89,120)
Columns (left to right): Grade; Foundational: Adapted up (%), Did not adapt (%); Band 1: Adapted up (%), Did not adapt (%), Adapted down (%); Band 2: Adapted up (%), Did not adapt (%), Adapted down (%); Band 3: Adapted up (%), Did not adapt (%), Adapted down (%)
Grade 3 13.1 86.9 58.9 22.4 18.7 73.1 17.0   9.8 85.2 12.8   2.0
Grade 4 31.4 68.6 17.0 29.8 53.3 55.4 29.0 15.5 36.1 20.7 43.3
Grade 5 32.3 67.7 20.5 31.0 48.5 52.4 38.3   9.2 87.6   7.9   4.5
Grade 6 36.1 63.9 16.2 37.1 46.6 21.8 39.0 39.1 40.9 38.3 20.9
Grade 7 41.9 58.1 26.4 26.3 47.3 46.9 37.8 15.3 64.8 27.6   7.5
Grade 8 45.9 54.1 31.5 29.9 38.6 63.0 25.9 11.1 80.9 13.3   5.7
Grade 9 17.8 82.2 31.1 32.4 36.5 16.6 31.3 52.1 57.5 28.1 14.4
Grade 10 14.8 85.2 30.0 32.2 37.8 11.0 30.6 58.4 55.8 28.3 16.0
Grade 11 32.4 67.6 11.5 41.3 47.3 56.5 26.8 16.7 59.8 23.2 17.0
Grade 12 * * * * * 57.8 23.4 18.8 54.2 32.5 13.3
Note. Foundational is the lowest complexity band, so the system could not adapt testlets down a linkage level.
* These data were suppressed because n < 50.
Table 4.3: Adaptation of Linkage Levels Between First and Second Mathematics Testlets (N = 89,019)
Columns (left to right): Grade; Foundational: Adapted up (%), Did not adapt (%); Band 1: Adapted up (%), Did not adapt (%), Adapted down (%); Band 2: Adapted up (%), Did not adapt (%), Adapted down (%); Band 3: Adapted up (%), Did not adapt (%), Adapted down (%)
Grade 3 13.5 86.5 30.9 49.5 19.5 22.7 53.1 24.2 73.3 15.3 11.3
Grade 4 16.2 83.8 19.1 34.2 46.7 66.1 25.9   8.0 71.8 21.0   7.3
Grade 5 20.0 80.0 13.2 31.9 54.9 40.9 26.3 32.7 72.2 15.8 12.0
Grade 6 22.4 77.6 17.4 42.8 39.8 32.0 34.5 33.5 48.8 44.3   7.0
Grade 7 20.6 79.4 14.6 29.3 56.1 20.3 20.3 59.4 73.3 17.7   9.1
Grade 8 22.2 77.8 17.3 48.6 34.1 31.3 53.0 15.7 49.0 22.2 28.8
Grade 9 25.7 74.3 24.7 50.1 25.2 54.4 37.6   8.0 59.4 32.9   7.7
Grade 10 34.4 65.6 33.2 28.3 38.4 34.5 19.9 45.6   5.0 11.0 84.0
Grade 11 32.5 67.5 27.9 42.4 29.8 28.0 43.3 28.7 14.1 17.5 68.4
Grade 12 23.1 76.9 39.3 38.1 22.6 33.3 34.7 32.0 * * *
Note. Foundational is the lowest complexity band, so the system could not adapt testlets down a linkage level.
* These data were suppressed because n < 50.

After the second testlet is administered, the system continues to adapt testlets based on the same routing rules. Table 4.4 shows the total number and percentage of testlets that were assigned at each linkage level during the spring assessment window. Because writing testlets are not assigned at a specific linkage level, those testlets are not included in Table 4.4. In ELA, testlets were fairly evenly distributed across the five linkage levels, with slightly fewer assignments at the Target linkage level. In mathematics, there were slightly more assignments at the Initial Precursor linkage level and fewer assignments at the Target and Successor levels.

Table 4.4: Distribution of Linkage Levels Assigned for Assessment
Linkage level n %
English language arts
Initial Precursor 186,938 26.6
Distal Precursor 141,703 20.1
Proximal Precursor 128,376 18.3
Target 102,528 14.6
Successor 143,775 20.4
Mathematics
Initial Precursor 232,821 35.5
Distal Precursor 165,996 25.3
Proximal Precursor 126,634 19.3
Target   71,538 10.9
Successor   58,297   8.9

4.2.2 Administration Incidents

DLM staff annually evaluate testlet assignment to promote correct assignment of testlets to students. Administration incidents that have the potential to affect scoring are reported to state education agencies in a supplemental Incident File. No incidents were observed during the 2023–2024 operational assessment windows. Assignment of testlets will continue to be monitored in subsequent years to track any potential incidents and report them to state education agencies.

4.2.3 Accessibility Support Selections

Accessibility supports provided in 2023–2024 were the same as those available in previous years. The DLM Accessibility Manual (Dynamic Learning Maps Consortium, 2024b) distinguishes accessibility supports that are provided in Kite Student Portal via the Personal Needs and Preferences Profile, those that require additional tools or materials, and those that are provided by the test administrator outside the system. Table 4.5 shows selection rates for the three categories of accessibility supports. Multiple supports can be selected for each student. Overall, 89,136 students enrolled in the DLM system (93%) had at least one support selected. The most selected supports in 2023–2024 were human read aloud, test administrator enters responses for student, and spoken audio. For a complete description of the available accessibility supports, see Chapter 4 of the 2021–2022 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2022).
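
The overall rate of students with at least one support selected is a count across the individual support flags rather than a single survey item. The sketch below illustrates that computation under the assumption of a one-row-per-student data set with one Boolean column per support; the column names are assumptions.

```python
import pandas as pd

# Illustrative accessibility-support selections: one row per enrolled student,
# one Boolean column per support.
supports = pd.DataFrame({
    "spoken_audio": [True, False, False],
    "individualized_manipulatives": [False, True, False],
    "human_read_aloud": [True, True, False],
})

# Students with at least one support selected, as a count and a percentage.
has_any_support = supports.any(axis=1)
print(int(has_any_support.sum()))               # count of students with >= 1 support
print(round(100 * has_any_support.mean(), 1))   # percentage of enrolled students

# Selection rate for each individual support (cf. Table 4.5).
print(supports.mean().mul(100).round(1))
```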

Table 4.5: Accessibility Supports Selected for Students (N = 89,136)
Support n %
Supports provided in Kite Student Portal
Spoken audio 57,308 59.6
Magnification 13,787 14.3
Color contrast   8,394   8.7
Overlay color   3,119   3.2
Invert color choice   2,145   2.2
Supports requiring additional tools/materials
Individualized manipulatives 44,660 46.5
Calculator 28,572 29.7
Single-switch system   3,720   3.9
Alternate form–visual impairment   2,090   2.2
Two-switch system   1,151   1.2
Uncontracted braille       93   0.1
Supports provided outside the system
Human read aloud 79,756 83.0
Test administrator enters responses for student 58,367 60.7
Partner-assisted scanning   9,298   9.7
Language translation of text   1,757   1.8
Sign interpretation of text   1,205   1.3

4.3 Evidence From Monitoring Assessment Administration

DLM staff monitor assessment administration using various materials and strategies. As in prior years, DLM staff made available an assessment administration observation protocol for use by DLM staff, state education agency staff, and local education agency staff. DLM staff also reviewed Service Desk requests and hosted regular check-in calls with state education agency staff to monitor common issues and concerns during the assessment window. This section provides an overview of the assessment administration observation protocol and its use.

4.3.1 Test Administration Observations

Consistent with previous years, the DLM Consortium used a test administration observation protocol to gather information about how educators in the consortium states deliver testlets to students with the most significant cognitive disabilities. This protocol gave observers, regardless of their role or experience with DLM assessments, a standardized way to describe how DLM testlets were administered. The test administration observation protocol captured data about student actions (e.g., navigation, responding), educator assistance, variations from standard administration, student engagement, and barriers to engagement. For a full description of the test administration observation protocol, see Chapter 4 of the 2021–2022 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2022).

During 2023–2024, there were 270 assessment administration observations collected in seven states. Table 4.6 shows the number of observations collected by state. Of the 270 total observations, 195 (72%) were of computer-delivered assessments and 75 (28%) were of educator-administered testlets. The observations were for 150 (56%) ELA reading testlets, 13 (5%) ELA writing testlets, and 106 (39%) mathematics testlets.

Table 4.6: Educator Observations by State (N = 270)
State n %
Arkansas 60 22.2
Colorado   4   1.5
Iowa 24   8.9
Kansas 43 15.9
Missouri 25   9.3
New York 17   6.3
West Virginia 97 35.9

Table 4.7 summarizes observations for computer-delivered testlets; behaviors on the test administration observation protocol were identified as supporting, neutral, or nonsupporting. For example, clarifying directions (found in 43% of observations) removes student confusion about the task demands as a source of construct-irrelevant variance and supports the student’s meaningful, construct-related engagement with the item. In contrast, using physical prompts (e.g., hand-over-hand guidance) indicates that the test administrator directly influenced the student’s answer choice. Overall, 60% of observed behaviors were classified as supporting, with 1% of observed behaviors reflecting nonsupporting actions.

Table 4.7: Test Administrator Actions During Computer-Delivered Testlets (n = 195)
Action n %
Supporting
Read one or more screens aloud to the student 125 64.1
Navigated one or more screens for the student   99 50.8
Clarified directions or expectations for the student   84 43.1
Repeated question(s) before student responded   44 22.6
Neutral
Used verbal prompts to direct the student’s attention or engagement (e.g., “look at this.”)   63 32.3
Used pointing or gestures to direct student attention or engagement   60 30.8
Entered one or more responses for the student   42 21.5
Used materials or manipulatives during the administration process   23 11.8
Allowed student to take a break during the testlet   16   8.2
Asked the student to clarify or confirm one or more responses   15   7.7
Repeated question(s) after student responded (gave a second trial at the same item)   11   5.6
Nonsupporting
Physically guided the student to a response     4   2.1
Reduced the number of answer choices available to the student     0   0.0
Note. Respondents could select multiple responses to this question.

For DLM assessments, interaction with the system includes interaction with the assessment content as well as physical access to the testing device and platform. The fact that educators navigated one or more screens in 51% of the observations does not necessarily indicate the student was prevented from engaging with the assessment content as independently as possible. Depending on the student, test administrator navigation may either support or minimize students’ independent, physical interaction with the assessment system. While not the same as interfering with students’ interaction with the content of the assessment, navigating for students who are able to do so independently conflicts with the assumption that students are able to interact with the system as intended. The observation protocol did not capture why the test administrator chose to navigate, and the reason was not always obvious.

Observations of student actions taken during computer-delivered testlets are summarized in Table 4.8. Independent response selection was observed in 58% of the cases. Nonindependent response selection may include allowable practices, such as test administrators entering responses for the student. The use of materials outside of Kite Student Portal was seen in 6% of the observations. Verbal prompts for navigation and response selection are strategies within the realm of allowable flexibility during test administration. These strategies, which are commonly used during direct instruction for students with the most significant cognitive disabilities, are used to maximize student engagement with the system and promote the type of student-item interaction needed for a construct-relevant response. However, they also indicate that students were not able to sustain independent interaction with the system throughout the entire testlet.

Table 4.8: Student Actions During Computer-Delivered Testlets (n = 195)
Action n %
Selected answers independently 114 58.5
Navigated screens independently   90 46.2
Selected answers after verbal prompts   57 29.2
Navigated screens after verbal prompts   51 26.2
Navigated screens after test administrator pointed or gestured   29 14.9
Asked the test administrator a question   14   7.2
Used materials outside of Kite Student Portal to indicate responses to testlet items   11   5.6
Revisited one or more questions after verbal prompt(s)     7   3.6
Skipped one or more items     4   2.1
Independently revisited a question after answering it     2   1.0
Note. Respondents could select multiple responses to this question.

Observers noted whether there was difficulty with accessibility supports (including lack of appropriate available supports) during observations of educator-administered testlets. Of the 75 observations of educator-administered testlets, observers noted difficulty in four cases (5%). For computer-delivered testlets, observers noted students who indicated responses to items using varied response modes such as gesturing (22%) and using manipulatives or materials outside of the Kite system (6%). Of the 270 test administration observations collected, students completed the full testlet in 194 cases (72%). In all instances where the testlet was not completed, no reason was provided by the observer.

Finally, DLM assessment administration procedures call for test administrators to enter student responses with fidelity, including across multiple modes of communication, such as verbal, gesture, and eye gaze. Table 4.9 summarizes students’ response modes for educator-administered testlets. The most frequently observed response mode was gesturing to indicate a response to the test administrator, who then selected the answer.

Table 4.9: Primary Response Mode for Educator-Administered Testlets (n = 75)
Response mode n %
Gestured to indicate response to test administrator who selected answers 45 60.0
Verbally indicated response to test administrator who selected answers 24 32.0
No observable response mode 11 14.7
Eye gaze system indication to test administrator who selected answers   3   4.0
Note. Respondents could select multiple responses to this question.

Observations of computer-delivered testlets when test administrators entered responses on behalf of students provided another opportunity to confirm fidelity of response entry. This support is recorded on the Personal Needs and Preferences Profile and is recommended for a variety of situations (e.g., students who have limited motor skills and cannot interact directly with the testing device even though they can cognitively interact with the onscreen content). Observers recorded whether the response entered by the test administrator matched the student’s response. In 42 of 195 (22%) observations of computer-delivered testlets, the test administrator entered responses on the student’s behalf. In 41 (98%) of those cases, observers indicated that the entered response matched the student’s response, while the remaining observer responded that they could not tell if the entered response matched the student’s response.

4.4 Evidence From Test Administrators

This section describes evidence collected from the spring 2024 test administrator survey. Test administrators receive one survey per rostered DLM student, which annually collects information about that student’s assessment experience. As in previous years, the survey was distributed to test administrators in Kite Student Portal, where students completed assessments. Instructions indicated the test administrator should complete the survey after administration of the spring assessment; however, users can complete the survey at any time. The survey consisted of three blocks. Blocks 1 and 3 were administered in every survey. Block 1 included questions about the test administrator’s perceptions of the assessments and the student’s interaction with the content. Block 3 included questions about the test administrator’s background, to be completed once per administrator. Block 2 was spiraled, so test administrators received one randomly assigned section. In these sections, test administrators responded to questions about a single topic (e.g., relationship of the assessment to ELA, mathematics, or science instruction).

4.4.1 User Experience With the DLM System

A total of 15,766 test administrators (65%) responded to the survey about 51,127 students’ experiences. Test administrators are instructed to respond to the survey separately for each of their students. Participating test administrators responded to surveys for between 1 and 60 students, with a median of 2 students. Test administrators most commonly reported having 11 to 20 years of experience in ELA, 11 to 20 years in mathematics, and 2 to 5 years teaching students with significant cognitive disabilities. Most of the survey respondents (72%) were the student’s primary teacher in the subject assessed, while other respondents included case managers (14%), other teachers (9%), and others (6%).

The following sections summarize responses regarding both educator and student experiences with the DLM system.

4.4.1.1 Educator Experience

Test administrators were asked to reflect on their own experience with the assessments as well as their comfort level and knowledge administering them. Most of the questions required test administrators to respond on a 4-point scale: strongly disagree, disagree, agree, or strongly agree. Table 4.10 summarizes responses.

Nearly all test administrators (96%) agreed or strongly agreed that they were confident administering DLM testlets. Most respondents (92%) agreed or strongly agreed that Required Test Administrator Training prepared them for their responsibilities as test administrators. Most test administrators agreed or strongly agreed that they had access to curriculum aligned with the content that was measured by the assessments (88%) and that they used the manuals and the Educator Resource Page (90%).

Table 4.10: Test Administrator Responses Regarding Test Administration
Columns (left to right): Statement; SD: n, %; D: n, %; A: n, %; SA: n, %; A+SA: n, %
I was confident in my ability to deliver DLM testlets. 235 1.6 338 2.3 6,200 41.4 8,211 54.8 14,411 96.2
Required Test Administrator Training prepared me for the responsibilities of a test administrator. 324 2.2 832 5.6 7,098 47.4 6,719 44.9 13,817 92.3
I have access to curriculum aligned with the content measured by DLM assessments. 463 3.1 1,337 8.9 7,423 49.6 5,748 38.4 13,171 88.0
I used manuals and/or the DLM Educator Resource Page materials. 401 2.7 1,137 7.6 7,810 52.1 5,648 37.7 13,458 89.8
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

4.4.1.2 Student Experience

The spring 2024 test administrator survey included three items about how students responded to test items. Test administrators were asked to rate statements from strongly disagree to strongly agree. Table 4.11 presents the results. For the majority of students, test administrators agreed or strongly agreed that their students responded to items to the best of their knowledge, skills, and understandings; were able to respond regardless of disability, behavior, or health concerns; and had access to all necessary supports to participate.

Table 4.11: Test Administrator Perceptions of Student Experience with Testlets
Columns (left to right): Statement; SD: n, %; D: n, %; A: n, %; SA: n, %; A+SA: n, %
Student responded to items to the best of their knowledge, skills, and understanding. 1,877 4.0 3,643 7.7 24,813 52.3 17,117 36.1 41,930 88.4
Student was able to respond regardless of their disability, behavior, or health concerns. 2,906 6.1 4,353 9.1 23,928 50.3 16,394 34.5 40,322 84.8
Student had access to all necessary supports to participate. 1,651 3.5 2,332 4.9 24,595 51.9 18,787 39.7 43,382 91.6
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

Annual survey results show that a small percentage of test administrators disagree that their student was able to respond regardless of disability, behavior, or health concerns; had access to all necessary supports; and was able to effectively use supports.

4.4.2 Opportunity to Learn

The spring 2024 test administrator survey also included items about students’ opportunity to learn. Table 4.12 reports the opportunity to learn results.

Approximately 72% of responses (n = 34,084) reported that most or all ELA testlets matched instruction, compared to 64% (n = 30,457) for mathematics.

Table 4.12: Educator Ratings of Portion of Testlets That Matched Instruction
Columns (left to right): Subject; None: n, %; Some (<half): n, %; Most (>half): n, %; All: n, %; Not applicable: n, %
English language arts 2,474 5.2 10,345 21.7 18,887 39.6 15,197 31.9 775 1.6
Mathematics 2,854 6.0 13,103 27.7 18,042 38.1 12,415 26.2 907 1.9

A subset of test administrators was asked to indicate the approximate number of hours in total spent instructing students on each of the conceptual areas by subject (i.e., ELA, mathematics) during the 2023–2024 year. Test administrators responded using a 6-point scale: 0 hours, 1–5 hours, 6–10 hours, 11–15 hours, 16–20 hours, or more than 20 hours. Table 4.13 and Table 4.14 indicate the amount of instructional time spent on conceptual areas for ELA and mathematics, respectively. On average, 43% of the test administrators provided at least 11 hours of instruction per conceptual area to their students in ELA, compared to 40% in mathematics.

Table 4.13: Instructional Time Spent on English Language Arts Conceptual Areas
Columns (left to right): Conceptual area; Median (hours); 0 hours: n, %; 1–5 hours: n, %; 6–10 hours: n, %; 11–15 hours: n, %; 16–20 hours: n, %; >20 hours: n, %
Determine critical elements of text 11–15    524 10.2 1,138 22.2 859 16.8 616 12.0 674 13.2 1,314 25.6
Construct understandings of text 6–10    803 15.7 1,155 22.7 887 17.4 635 12.5 660 12.9    959 18.8
Integrate ideas and information from text 6–10    937 18.4 1,240 24.4 877 17.3 637 12.5 655 12.9    737 14.5
Use writing to communicate 6–10    854 16.8 1,155 22.7 874 17.2 608 12.0 619 12.2    968 19.1
Integrate ideas and information in writing 6–10 1,125 22.2 1,164 23.0 819 16.2 608 12.0 595 11.8    752 14.9
Use language to communicate with others 11–15    492   9.7 1,014 20.0 847 16.7 685 13.5 728 14.4 1,306 25.7
Clarify and contribute in discussion 6–10    745 14.7 1,105 21.7 847 16.7 667 13.1 681 13.4 1,037 20.4
Use sources and information 1–5 1,451 28.6 1,181 23.2 775 15.3 569 11.2 512 10.1    593 11.7
Collaborate and present ideas 1–5 1,367 26.8 1,217 23.8 800 15.7 557 10.9 515 10.1    648 12.7
Table 4.14: Instructional Time Spent on Mathematics Conceptual Areas
Columns (left to right): Conceptual area; Median (hours); 0 hours: n, %; 1–5 hours: n, %; 6–10 hours: n, %; 11–15 hours: n, %; 16–20 hours: n, %; >20 hours: n, %
Understand number structures (counting, place value, fraction) 11–15    784   7.6 2,017 19.6 1,586 15.5 1,288 12.5 1,461 14.2 3,129 30.5
Compare, compose, and decompose numbers and steps 6–10 1,769 17.3 2,301 22.6 1,736 17.0 1,322 13.0 1,356 13.3 1,712 16.8
Calculate accurately and efficiently using simple arithmetic operations 6–10 1,561 15.3 1,963 19.3 1,602 15.7 1,311 12.9 1,338 13.2 2,397 23.6
Understand and use geometric properties of two- and three-dimensional shapes 6–10 2,101 20.6 2,699 26.5 1,898 18.6 1,345 13.2 1,141 11.2 1,000   9.8
Solve problems involving area, perimeter, and volume 1–5 3,642 35.9 2,371 23.4 1,459 14.4 1,038 10.2    848   8.4    773   7.6
Understand and use measurement principles and units of measure 1–5 2,367 23.4 2,839 28.1 1,819 18.0 1,226 12.1    961   9.5    908   9.0
Represent and interpret data displays 6–10 2,342 23.1 2,636 26.0 1,826 18.0 1,250 12.3 1,047 10.3 1,031 10.2
Use operations and models to solve problems 6–10 1,823 17.9 2,166 21.3 1,709 16.8 1,355 13.3 1,363 13.4 1,742 17.1
Understand patterns and functional thinking 6–10 1,397 13.7 2,709 26.6 2,029 19.9 1,464 14.4 1,312 12.9 1,287 12.6
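
One plausible reading of the averages reported before Table 4.13 (43% in ELA and 40% in mathematics providing at least 11 hours per conceptual area) is the mean, across conceptual areas, of the percentage of respondents reporting 11 or more hours of instruction. The sketch below approximately reproduces the ELA figure from the rounded percentages in Table 4.13; the original analysis may have used unrounded counts.

```python
# Percentage of test administrators reporting 11-15, 16-20, or more than 20
# hours of instruction for each ELA conceptual area (values from Table 4.13).
ela_at_least_11_hours = {
    "Determine critical elements of text": 12.0 + 13.2 + 25.6,
    "Construct understandings of text": 12.5 + 12.9 + 18.8,
    "Integrate ideas and information from text": 12.5 + 12.9 + 14.5,
    "Use writing to communicate": 12.0 + 12.2 + 19.1,
    "Integrate ideas and information in writing": 12.0 + 11.8 + 14.9,
    "Use language to communicate with others": 13.5 + 14.4 + 25.7,
    "Clarify and contribute in discussion": 13.1 + 13.4 + 20.4,
    "Use sources and information": 11.2 + 10.1 + 11.7,
    "Collaborate and present ideas": 10.9 + 10.1 + 12.7,
}

# Mean across the nine conceptual areas, approximately 43%.
average = sum(ela_at_least_11_hours.values()) / len(ela_at_least_11_hours)
print(round(average, 1))  # roughly 42.7
```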

Another dimension of opportunity to learn is student engagement during instruction. The First Contact Survey contains two questions that ask educators to rate student engagement during computer- and educator-directed instruction. Table 4.15 shows the percentage of students who were rated as demonstrating different levels of attention by instruction type. Overall, 87% of students demonstrate fleeting or sustained attention to computer-directed instruction and 85% of students demonstrate fleeting or sustained attention to educator-directed instruction.

Table 4.15: Student Attention Levels During Instruction
Columns (left to right): Type of instruction; Demonstrates little or no attention: n, %; Demonstrates fleeting attention: n, %; Generally sustains attention: n, %
Computer-directed (n = 90,888) 11,560 12.7 45,802 50.4 33,526 36.9
Educator-directed (n = 93,739) 13,983 14.9 55,019 58.7 24,737 26.4

The 2024 teacher survey included new questions asking teachers to indicate the type of instructional activity/task and level of assistance they provided to students during reading, writing, and mathematics instruction. These questions were designed to gather evidence of performance expectations and student engagement, two dimensions of opportunity to learn. Teachers were asked to consider how students demonstrated thinking and learning in the subject area and to indicate their expectations for the student. For each instructional activity, teachers indicated the level of assistance they provided to the student: independent (no assistance), verbal, gestural, modeling, or physical assistance.

Table 4.16, Table 4.17, and Table 4.18 show the frequency of each combination of performance expectations and level of assistance in ELA (reading), ELA (writing), and mathematics. In all subjects, there is wide variability in teachers’ expectations for students, suggesting that some students had opportunities to engage in many different types of tasks at different levels of complexity, while others did not. Teachers’ levels of assistance were also widely distributed, with verbal assistance most common across all task/activity expectations.

Table 4.16: Teacher Expectations for Students and Level of Assistance Required in English Language Arts (Reading)
Columns (left to right): Expectation; Total n; Level of expectation (%): Not an expectation, With physical assistance, With modeling assistance, With gestural assistance, With verbal assistance, Independent
Pay attention to the lesson 5,125   1.8 12.8 15.0 11.6 42.0 16.8
Explore materials to be used in the lesson 5,110   2.0 14.4 20.5   9.7 33.6 19.8
Repeat or copy something the teacher or someone else has done 5,114   5.8 13.4 20.6   7.1 30.6 22.4
Demonstrate knowledge of a fact 5,108   9.2 10.7 19.7   6.9 38.0 15.4
Demonstrate knowledge of a concept 5,110   8.5 11.0 21.2   6.8 39.6 12.8
Complete a simple task in response to instruction 5,104   2.2 13.3 18.2   8.2 34.5 23.7
Follow a routine or multi-step activity 5,109   3.7 14.0 18.8   8.2 38.2 17.1
Evaluate something they learned 4,078   <0.1   11.5 26.7   7.6 44.0 10.2
Summarize what they learned 5,123 22.3   8.3 21.1   4.4 35.5   8.2
Table 4.17: Teacher Expectations for Students and Level of Assistance Required in English Language Arts (Writing)
Columns (left to right): Expectation; Total n; Level of expectation (%): Not an expectation, With physical assistance, With modeling assistance, With gestural assistance, With verbal assistance, Independent
Pay attention to the lesson 5,208   2.5 13.6 16.1 11.6 40.1 16.2
Explore materials to be used in the lesson 4,101   3.1 18.3   <0.1   13.2 39.7 25.7
Repeat or copy something the teacher or someone else has done 5,209   5.7 13.7 20.6   7.9 29.8 22.2
Demonstrate knowledge of a fact 5,211 11.0 10.0 21.6   6.9 36.7 13.8
Demonstrate knowledge of a concept 5,212   9.9 10.0 23.2   7.1 38.0 11.9
Complete a simple task in response to instruction 5,209   2.7 12.8 18.8   9.6 34.3 21.8
Follow a routine or multi-step activity 5,207   3.5 13.3 19.4   9.1 39.1 15.7
Evaluate something they learned 5,209 20.7   9.2 22.0   6.4 34.3   7.3
Summarize what they learned 5,225 23.8   8.3 21.2   4.4 35.2   7.0
Table 4.18: Teacher Expectations for Students and Level of Assistance Required in Mathematics
Columns (left to right): Expectation; Total n; Level of expectation (%): Not an expectation, With physical assistance, With modeling assistance, With gestural assistance, With verbal assistance, Independent
Pay attention to the lesson 10,231   1.6 12.8 16.2 11.3 41.4 16.8
Explore materials to be used in the lesson 10,220   1.9 13.3 21.7 10.2 30.4 22.4
Repeat or copy something the teacher or someone else has done 10,214   5.0 13.3 20.6   8.3 30.5 22.3
Demonstrate knowledge of a fact 10,205   7.9 10.7 21.5   7.8 36.4 15.7
Demonstrate knowledge of a concept   8,027 10.1   <0.1   16.6 10.1 46.0 17.1
Complete a simple task in response to instruction 10,196   2.2 12.8 18.4   9.3 33.7 23.6
Follow a routine or multi-step activity 10,206   3.7 13.1 19.4   9.6 38.3 15.9
Evaluate something they learned 10,195 19.3   9.3 21.7   6.4 34.4   8.9
Summarize what they learned 10,219 23.2   8.1 20.2   5.6 35.1   7.8

4.5 Conclusion

Delivery of DLM assessments was designed to align with instructional practice and be responsive to individual student needs. Assessment delivery options allow for flexibility to reflect student needs while also including constraints to maximize comparability and support valid interpretation of results. The flexible nature of DLM assessment administration is reflected in adaptive delivery between testlets. Evidence collected from the DLM system, test administration monitoring, and test administrator survey indicates that test administrators are prepared and confident administering DLM assessments and that students are able to successfully interact with the system to demonstrate their knowledge, skills, and understandings.