{"id":8654,"date":"2024-03-27T13:14:14","date_gmt":"2024-03-27T18:14:14","guid":{"rendered":"https:\/\/www.academicapproach.com\/?p=8654"},"modified":"2024-03-27T13:14:30","modified_gmt":"2024-03-27T18:14:30","slug":"the-bumpy-rollout-of-the-new-digital-sat-format","status":"publish","type":"post","link":"https:\/\/www.academicapproach.com\/the-bumpy-rollout-of-the-new-digital-sat-format\/","title":{"rendered":"The Bumpy Rollout of the New Digital SAT Format"},"content":{"rendered":"\n
Immediately following the March 9<sup>th<\/sup> SAT\u2014the very first administration of the adaptive, digital SAT in the United States\u2014students took to social media to make their feelings about the new test known. The main takeaway? The test was harder than many students had been led to believe by the College Board practice tests, particularly for students who received the harder second modules.<\/p>\n\n\n\n The adaptive nature of the digital SAT means that for each section of the test (Reading and Writing; Math), a student\u2019s performance on the first module determines the difficulty of the second module, which will be either easier or harder than the first. For March 9<sup>th<\/sup> testers who were directed to the harder second modules, that increase in difficulty was, reportedly, dramatic.<\/p>\n\n\n\n Students relayed that their second modules featured math concepts they didn\u2019t expect to be tested on and long Reading passages on complicated science topics that caused them to run out of time. While we were concerned to hear these reports from our students and the thousands of students posting on social media, 23+ years in this business has taught us a thing or two about reserving judgment on a particular test until 1) scores are released and 2) we get a copy of the test to review ourselves.<\/p>\n\n\n\n Often, challenging test forms are given more forgiving scales: the test creators take the difficulty of the questions into account and adjust the scale so that you can miss more questions without it lowering your score as much. This is part of why we don\u2019t jump to conclusions when we hear a particular test is especially easy or hard. A \u201csuper easy\u201d test might have a brutal scale in which missing a single question drops your score considerably. 
In other words, a student\u2019s test experience is not necessarily indicative of what their final scaled score will be.<\/p>\n\n\n\n The reason you have probably seen a lot of media coverage of the March 9<sup>th<\/sup> SAT after scores were released last Friday, March 22<sup>nd<\/sup>, is that scores did not match the scaling expectations, especially for students who were given the harder second modules. Not only were students caught off guard by particularly challenging material in the harder second modules, but their scores also weren\u2019t adjusted for that fact.<\/p>\n\n\n\n The reason this is especially frustrating is that it seems none of College Board\u2019s official practice tests (at the time of the March 9<sup>th<\/sup> test there were only four practice tests available, though College Board has since released two additional tests) either adequately reflected the difficulty of the content featured in the harder second modules of the March 9<sup>th<\/sup> test or provided an accurate model of what to expect from the scaled scores. If a student was preparing for the March 9<sup>th<\/sup> test and scoring in the 1500s on the four official practice tests, it is not just upsetting but frankly maddening that that student may have earned a score in the 1300s on the March 9<sup>th<\/sup> test.<\/p>\n\n\n\n<h3>The Issue: Harder Questions Without Adjusted Scaling<\/h3>\n\n\n\n
<h3>Lack of Representative Practice Tests and Detailed Score Reports<\/h3>\n\n\n\n