Data-Driven Common Core Does Not Compute


More and more it feels like the education leaders who are tasked with overseeing the data-driven Common Core implementation and assessment policies are “Lost in Space” and their data “does not compute”.

During the school year, 100% of teachers are expected to “unpack” the standards, differentiate instruction, use collaborative protocols, and provide individualized “scaffolding,” extra time, and “space” to support diverse learners possessing a wide range of abilities and disabilities.

However, at the end of the year, virtually 100% of students and teachers will be evaluated by a timed standardized test that measures a small fraction of the standards, and many special education students will take all or part of the test with 0% of their accommodations.

“Out of the 83 combinations of Common Core Standards (ELA 3rd), NYS chose to test only 15% of them. This leaves teachers, administrators, parents, students and colleges/careers wondering if these 15% can truly be the measure of college/career readiness…”

Lace To The Top: “NYS Common Core Test Fails Itself” 11/11/13

Many states are planning to use PARCC Assessments to measure and predict the college and career readiness of students even though the creators of the test have acknowledged that it will assess 0% of essential college and career readiness soft skills…

“A comprehensive determination of college and career readiness that would include additional factors such as these [persistence, motivation, and time management] is beyond the scope of the PARCC assessments in ELA/literacy and mathematics…”

(pp. 2-3) College- and Career-Ready Determination Policy and Policy-Level PLDs (Adopted October 2012; Updated March 2013) (PDF)

Many Common Core supporters also claim that student scores on these tests can accurately and reliably measure as much as 50% of a teacher’s effectiveness, even though there is 0% evidence to support these claims.

In fact, The American Statistical Association (ASA) recently released an ASA Statement on Using Value-Added Models for Educational Assessment which reported…

“Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality. (2)…

A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. (6)…

The majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences. (7)…”

Robert D. Skeels, “American Statistical Association has just released a very important document on Value Added Methodologies” 4/9/14

In order to prove their effectiveness and earn “points” toward the remaining 50-60% of their evaluation, teachers are required to provide evidence of the research-based strategies and practices they use in the classroom to implement the Common Core Standards, while there is 0% research and evidence proving the effectiveness of the standards themselves.



