Friday, April 27, 2012

How Connecticut Defused the Parent Trigger

Trend Three: Pulling the (Parent) Trigger

Note: Andrew Kelly, a research fellow at AEI, is guest-posting this week.
Unless you've been under a rock for the past year, you've heard about the "parent trigger." In principle, the trigger is a simple and powerful idea: parents in a chronically failing school can band together and petition the district to make radical changes. If the petitioners can get signatures from 51 percent of the parents, the district must respond with dramatic reforms. In California, site of the first-ever trigger law, the menu of options mirrors the four federal turnaround models, including the option to convert to a charter school.
In practice, the trigger is off to a rocky start. The first two attempts--first at McKinley Elementary in Compton, CA and more recently at Desert Trails Elementary in Adelanto, CA--brought out the best that education politics has to offer: charges and counter-charges of skullduggery, intimidation, and outright fraud. At McKinley, after parents successfully petitioned to convert to a charter and bring in a CMO, the petition was thrown out on a technicality. At Desert Trails, the local parent union, cultivated and coached by Parent Revolution, successfully obtained far more parent signatures than they needed, only to have opponents organize a rescission drive to stop the process. Though Parent Revolution uncovered evidence that some of the rescissions had been fraudulently altered, the school board rejected the petition. Around the same time that the Desert Trails plot was thickening, a parent trigger bill in Florida failed on a 20-20 vote in the state senate.
Despite these stumbles, the parent trigger is an idea that won't be going away any time soon. Reformers like it, and it has found strong legislative champions in state houses across the country. For charter school operators, the emergence of the trigger may represent an opportunity to expand. (The unions certainly think so--and they aren't pleased.)
But charter school operators should tread carefully when it comes to the parent trigger. The same thing that makes the parent trigger great--direct, majority-rule democracy--could also make life tough for enterprising charter leaders who take over triggered schools.
In the March issue of Phi Delta Kappan, I argued that if policymakers are not careful, parent trigger laws may lead to some of the same instability and policy churn that plagues failing schools today. One of the weaknesses in the California law is that it does nothing to encourage stability after a successful trigger. From PDK:
Just because most parents preferred one model to the status quo the first time around doesn't mean they'll stick with it the next year. Barring immediate and palpable improvement, the same frustrations that gave rise to the first petition may resurface to threaten the triggered reform plan.
Evidence suggests that immediate improvement will be difficult to come by. In his study of California schools, Tom Loveless of Brookings found that only 1.4% of the schools that scored in the bottom quartile in 1989 had moved to the top quartile by 2009; 63% scored in the bottom quartile again. In WestEd's study of comprehensive school reform, just 12 of 262 low-performing schools were able to make sizable gains in math and reading over a three-year period (see also Robin Lake's post last week on SIG results in Washington State). Bottom line: turning around triggered schools will be tough work that likely requires a long time horizon. This is not to suggest that the country's finest CMOs can't turn around triggered schools. These guys make a living by beating long odds. Rather, the question is whether parent trigger laws provide the new management with enough insulation and time to implement changes that are likely to drive long-term improvement. Opponents, including parents who did not sign the original petition and are unhappy with the choice, will be on the lookout for any sign of sluggish improvement.
Unless appropriate safeguards are built into the trigger process--a mandatory post-trigger implementation period, a supermajority requirement, or a fresh clock on school improvement status--charter organizations may find these opportunities less appealing than they appear. For their part, policymakers could entice more charter operators to take on triggered turnarounds by building these safeguards into the laws.
Of course, for all of the hype around the parent trigger, we have yet to see any community pull it off, so all of this hypothesizing may be premature. But, as I said on Monday, keeping our eye out for the next trend is an important part of pushing education reform forward. It's also a lot of fun to write about, so thanks for reading this week!
--Andrew Kelly

Thursday, April 26, 2012

Parents: Are you paying attention to SB24? Do you live in one of Connecticut's urban districts (Hartford, New Haven, Bridgeport)? If so, pay attention. This legislation affects the services your children will receive. Inexperienced staff without proper training will be placed in your school districts. Any ideas?

This is the video from the teachers' rally at the Capitol building in CT:

[video]

Survey on Teacher Work

All educators must take this survey!
Todney

https://kent.qualtrics.com/SE/?SID=SV_7OjwAHmWx5Bskzq

Monday, April 23, 2012

Becoming A 21st Century Learner


Posted by Esther Quintero on April 16, 2012

Think about something you have always wanted to learn or accomplish but never did, such as speaking a foreign language or learning how to play an instrument. Now think about what stopped you. There are probably a variety of factors, but chances are those factors have little to do with technology.

Electronic devices are becoming cheaper, easier to use, and more intuitive. Much of the world's knowledge is literally at our fingertips, accessible from any networked gadget. Yet sustained learning does not always follow. It is often noted that developing digital skills and literacy is fundamental to 21st century learning, but is that all that's missing? I suspect not. In this post I take a look at university courses available to anyone with an internet connection (massive open online courses, or MOOCs) and ask: What attributes or skills make some people (but not others) better equipped to take advantage of these and similar educational opportunities brought about by advances in technology?

In the last few months, Stanford University's version of MOOCs has attracted considerable attention, leading some to question the U.S. higher education model as we know it, and even to envision its demise. But what is really novel about the Stanford MOOCs? Why did 160,000 students from 190 countries sign up for the course "Introduction to Artificial Intelligence"?

According to Kevin Carey at The New Republic:

The availability of free Internet courses itself wasn’t all that innovative—MIT’s Open Courseware initiative is a decade old and elite schools like Yale and Carnegie Mellon have followed suit. The news was that the Stanford professors were letting students in their global classroom sit for the midterm, at proctored sites around the world. Those who did well on the A.I. test and a later final exam got a letter saying so, signed by the professors, a pair of well-known roboticists from Silicon Valley.

The 23,000 students who completed the A.I. course received a PDF file showing their percentile score. That's what separated Stanford from similar but less publicized experiments: recognition in the form of an unofficial yet symbolically important credential, a non-technological change that tapped directly into recipients' motivation.

As this short movie explains, a MOOC “is not just an on-line course. It’s a way to connect and collaborate […] It is, maybe most importantly, an event. An event around which people who care about a topic can get together, work and talk about it in a structured way.”

The success of MOOCs depends first and foremost on attracting "people who care about a topic." So, at a more basic level, 160,000 students signed up for the course fundamentally because they cared about Artificial Intelligence. If we want to equip young people with what it takes to make good use of technology-enabled learning opportunities, we need to teach them to care. Although the cartoon below is hyperbolic, it suggests that most kids these days may not need to be taught how to use a computer or a smartphone. What they might need, instead, are opportunities and guidance to develop interest and foundational knowledge in subject matter.


In the past, research on and attention to open online courses focused on technology and content. More recently, however, because the courses require rich interaction among participants, attention has shifted away from technology and toward social aspects. In other words, people have come to realize that MOOCs are as much about forming relationships with others who have similar interests as they are about the interests themselves: People learn as they interact with one another.

In sum, if individual motivation and social interaction are keys to becoming a self-directed learner, the questions we need to be asking ourselves, as I have argued elsewhere, are not primarily about what technologies to introduce in our schools and colleges, nor are they about what digital skills to teach to our (already tech-savvy) students. Rather, the fundamental conversation we need to have is about how to motivate and engage students with different fields and topics, as well as how to interest them in the kinds of social relations that help foster and sustain learning.

Open courses have been around a while, but we need to know more about how they work and who they serve. The Caledonian Academy, for example, is examining and systematizing the strategies that adults use to self-regulate their learning. But, until we know more about this complex phenomenon, we should keep in mind that open content, an internet connection and some computer skills will not necessarily result in generalized access or a free education for all.

Of the original 160,000 who signed up for the Stanford AI course, 23,000 completed it (roughly 14 percent); 137,000 did not. What separates these two groups? Answering this and related questions might illuminate the more general issue at stake, namely the specific conditions under which people become autonomous, lifelong (or 21st century) learners. Only when we know more – and technology may be instrumental to this goal – will we be able to foster the right learning conditions for broader segments of society. Only then will innovations like open online education be truly open for all.

- Esther Quintero

Test Scores Often Misused In Policy Decisions

By Joy Resmovits

Education policies that affect millions of students have long been tied to test scores, but a new paper suggests those scores are regularly misinterpreted.

According to the new research out of Mathematica, a statistical research group, the comparisons sometimes used to judge school performance are more indicative of demographic change than actual learning.

For example: Last week's release of National Assessment of Educational Progress scores led to much finger-pointing about what's working and what isn't in education reform. But according to Mathematica, policy assessments based on raw test data are extremely misleading -- especially because year-to-year comparisons measure different groups of students.

"Every time the NAEP results come out, you see a whole slew of headlines that make you slap your forehead," said Steven Glazerman, an author of the paper and a senior fellow at Mathematica. "You draw all the wrong conclusions over whether some school or district was effective or ineffective based on comparisons that can't be indicators of those changes."

"We had a lot of big changes in DC in 2007," Glazerman continued. "People are trying to render judgments of Michelle Rhee based on the NAEP. That's comparing people who are in the eighth grade in 2010 vs. kids who were in the eighth grade a few years ago. The argument is that this tells you nothing about whether the DC Public Schools were more or less effective. It tells you about the demographic."

Those faulty comparisons, Glazerman said, were obvious to him back in 2001, when he originally wrote the paper. But Glazerman shelved it then because he thought the upcoming implementation of the federal No Child Left Behind act would make it obsolete.

That expectation turned out to be wrong. NCLB, the country's sweeping education law, which has been up for reauthorization since 2007, mandated regular standardized testing in reading and math and punished schools based on those scores. As Glazerman and his coauthor Liz Potamites wrote, severe and correctable errors in the measurement of student performance are often used to make critical education policy decisions associated with the law.

"It made me realize somebody still needs to make these arguments against successive cohort indicators," Glazerman said, referring to the measurement of growth derived from changes in score averages or proficiency rates in the same grade over time. "That's what brought this about." So he picked up the paper again.

NCLB requires states to report on school status through a method known as "Adequate Yearly Progress." It is widely acknowledged that AYP is so ill-defined that it has depicted an overly broad swath of schools as "failing," making it difficult for states to distinguish truly underperforming schools. Glazerman's paper argues NCLB's methods for targeting failing schools are prone to error.

"Don't compare this year's fifth graders with last year's," Glazerman said. "Don't use the NAEP to measure short-term impacts of policies or schools."

The errors primarily stem from comparing the percentage of students proficient in a given subject from one year to the next: that comparison measures different groups of students each year, which can create false impressions of growth or decline.

And using testing data in different -- more accurate -- ways would likely result in states pouring their resources into different groups of schools. "Differences in scores between two cohorts – say, fourth graders one year and fourth graders the next year – are comparisons of two different groups of students," Matthew Di Carlo, senior fellow at the Albert Shanker Institute, wrote in an email. "They do not even necessarily reflect real student progress, to say nothing of whether the changes can be attributed to schooling factors."
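
To make the successive-cohort problem concrete, here is a minimal, hypothetical simulation of the kind of error Glazerman describes. It is not taken from his paper; every number and name in it is invented. A school adds the same amount of real learning each year, but the incoming cohort arrives with lower starting scores, so the year-over-year comparison of fourth-grade averages registers a "decline" even though every student learned just as much:

```python
# A toy illustration (not from the Glazerman/Potamites paper) of why
# successive-cohort comparisons can mislead. We simulate a school whose
# instruction adds a fixed amount of learning every year, but whose
# incoming fourth-grade cohorts shift demographically, lowering average
# starting scores. All numbers below are invented for illustration.
import random

random.seed(0)

TRUE_SCHOOL_EFFECT = 10          # points of real learning added per year
COHORT_SIZE = 100

def make_cohort(mean_incoming_score):
    """Draw a cohort of incoming 4th graders around a given mean score."""
    return [random.gauss(mean_incoming_score, 15) for _ in range(COHORT_SIZE)]

# Year 1 cohort starts higher; year 2 cohort starts lower (demographic shift).
cohort_2010 = make_cohort(mean_incoming_score=220)
cohort_2011 = make_cohort(mean_incoming_score=205)

# End-of-year scores: every student gains the same true school effect.
end_2010 = [s + TRUE_SCHOOL_EFFECT for s in cohort_2010]
end_2011 = [s + TRUE_SCHOOL_EFFECT for s in cohort_2011]

avg = lambda xs: sum(xs) / len(xs)

# Successive-cohort indicator: compare this year's 4th graders to last year's.
cohort_change = avg(end_2011) - avg(end_2010)

# Longitudinal indicator: follow the same students from start to end of year.
growth_2011 = avg(end_2011) - avg(cohort_2011)

print(f"Successive-cohort 'change': {cohort_change:+.1f} points (looks like decline)")
print(f"Longitudinal growth, same students: {growth_2011:+.1f} points (real learning)")
```

In this toy example the longitudinal measure recovers the true ten-point gain, while the successive-cohort measure mostly reflects the demographic shift between the two groups of fourth graders.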

The counting flaws highlighted by Glazerman's paper are particularly significant as states revamp the way they hold schools accountable for their performance. Though attempts to rewrite No Child Left Behind fizzled out in Congress this fall, states are rewriting the way they target schools for interventions through waivers that get them out of NCLB-style reporting. The federal Education Department has already received waiver requests from 11 states, and one of the conditions for getting a waiver is developing a new accountability plan.

"It's gone under the radar with the stalled reauthorization process," said Doug Harris, a University of Wisconsin professor who wrote a recent book on education performance metrics. "You get really different answers depending on what you do with these numbers. You can talk all you want about what you do with failing schools but if you haven’t identified schools that are failing, it's a waste of time."

Glazerman's paper provides equations to help correct these errors. Meanwhile, researchers hope that school districts will wise up when using test scores to drive policies, such as teacher evaluations.

"Using these data for resource allocation, staffing and other high-stakes decisions means that accuracy and fairness must be the primary considerations," Di Carlo wrote. "Most assessments aren’t designed to measure school and teacher effects in the first place; if they are to play a productive role in that capacity, it will have to be done in the most rigorous feasible manner: using longitudinal data, adjusting for non-schooling factors and interpreting the estimates in a responsible way."

Monday, April 9, 2012

Albert Shanker Institute


Does Money Matter in Education?

Over the past few years, massive budget deficits have forced governors, legislators, and other elected officials to slash education spending. As a result, incredibly, there are at least 30 states in which state funding for 2011 is actually lower than it was in 2008. In some cases, including California, the cuts exceed 20 percent.

Only the tiniest slice of Americans believe that we should spend less on education, while a large majority actually supports increased funding. At the same time, however, there’s a concerted effort among some advocates, elected officials and others to convince the public that spending more money on education will not improve outcomes, while huge cuts need not do any harm.

Our new report, written by Rutgers University professor Bruce Baker and entitled “Revisiting the Age-Old Question: Does Money Matter in Education?” reviews the body of research on spending and educational quality.

Baker concludes that, despite recent rhetoric, “on average, aggregate measures of per-pupil spending are positively associated with improved or higher student outcomes,” while “schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes.” Finally, reviewing the high-quality evidence on the effect of school finance reforms, he asserts: “Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes.”

The executive summary is pasted below.


Executive Summary

This policy brief revisits the long and storied literature on whether money matters in providing a quality education. Increasingly, political rhetoric adheres to the unfounded certainty that money doesn’t make a difference in education, and that reduced funding is unlikely to harm educational quality. Such proclamations have even been used to justify large cuts to education budgets over the past few years. These positions, however, have little basis in the empirical research on the relationship between funding and school quality.

In the following brief, I discuss selected major studies on three specific topics: (a) whether money in the aggregate matters; (b) whether specific schooling resources that cost money matter; and (c) whether substantive and sustained state school finance reforms matter. Regarding these three questions, I conclude:

1.     Does money matter? Yes. On average, aggregate measures of per-pupil spending are positively associated with improved or higher student outcomes. In some studies, the size of this effect is larger than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

2.     Do schooling resources that cost money matter? Yes. Schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes. Again, in some cases, those effects are larger than others and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence that there are more cost-effective alternatives.

3.     Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provides a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

- Many of the ways in which schools currently spend money do improve student outcomes.

- When schools have more money, they have greater opportunity to spend productively. When they don't, they can't.

- Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter, and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

What Happened to Data Driven Education Reform?



"The path to real reform begins with the truth," stated Education Secretary Arne Duncan in 2009 during an education forum with the Data Quality Campaign. Sec. Duncan, who argues that policymakers should use "data to drive reform," strongly believes that education policy should be "framed by evidence."

We agree.

So why is the secretary reacting so negatively to evidence about teacher compensation? Writing in the Huffington Post on Wednesday, Sec. Duncan shifted from data to emotion, stating that our report on the compensation of public school teachers "insults teachers and demeans the profession."

He is referring to a section of our report showing that traditional skill measures, such as years spent in school or level of degree obtained, do not provide an accurate salary comparison of teachers to non-teachers. Although public school teachers earn less, on average, than similarly credentialed non-teachers, the wage penalty disappears when teachers and non-teachers are compared using objective measures of cognitive ability, as opposed to years of university education.
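
To see why the choice of comparison group matters, here is a deliberately contrived, made-up example; it is not data or code from the report. If teachers hold more advanced credentials than non-teachers of the same measured ability, then matching workers on credentials and matching them on ability can give different answers about whether a wage gap exists:

```python
# A toy, invented illustration of the comparison issue described above. If
# teachers hold more formal credentials than non-teachers of the same measured
# cognitive ability, then matching on credentials and matching on ability can
# give different answers about a "wage penalty." Nothing below comes from the
# Biggs/Richwine report; the workers and wages are made up.

# Hypothetical workers: (occupation, highest credential, ability score, wage)
workers = [
    ("teacher",     "masters",   105, 52_000),
    ("teacher",     "masters",   100, 50_000),
    ("teacher",     "bachelors", 100, 50_000),
    ("non-teacher", "masters",   115, 70_000),
    ("non-teacher", "bachelors", 105, 52_000),
    ("non-teacher", "bachelors", 100, 50_000),
]

def avg_wage(rows):
    """Average wage of a list of worker tuples."""
    return sum(w for *_, w in rows) / len(rows)

# Comparison 1: match on credentials (master's degree holders only).
teacher_ma = [r for r in workers if r[0] == "teacher" and r[1] == "masters"]
other_ma   = [r for r in workers if r[0] == "non-teacher" and r[1] == "masters"]
print("Matched on credential (master's):",
      f"teachers ${avg_wage(teacher_ma):,.0f} vs others ${avg_wage(other_ma):,.0f}")

# Comparison 2: match on measured ability (score of 100) instead.
teacher_100 = [r for r in workers if r[0] == "teacher" and r[2] == 100]
other_100   = [r for r in workers if r[0] == "non-teacher" and r[2] == 100]
print("Matched on ability score (100):  ",
      f"teachers ${avg_wage(teacher_100):,.0f} vs others ${avg_wage(other_100):,.0f}")
# In this contrived data, the credential comparison shows a gap only because
# the non-teacher master's holder also has higher measured ability; comparing
# workers of equal ability shows no gap.
```

The point of the sketch is only the direction of the effect: the credential-matched comparison picks up an underlying ability difference, while the ability-matched comparison does not.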

Our report is a long and detailed analysis of salaries, fringe benefits, and job security for current public school teachers, intended to add to the state of knowledge regarding teacher pay and education policy in general. It is exactly the kind of research Sec. Duncan should find useful to inform the on-going conversation about reforms that would better reward effective teachers. Our results are clear: Teacher salaries are at roughly market levels, but generous fringe benefits and job security push teacher compensation well ahead of comparable private sector workers.

Sec. Duncan leveled two specific charges. First, he writes that we "exaggerated the value of teacher compensation by comparing the retirement benefits of the small minority of teachers who stay in the classroom for 30 years, rather than comparing the pension benefits for the typical teacher to their peers in other professions."

That is false. While we did use a 30-year veteran teacher as part of a simple example to begin our pension discussion, our actual estimate of pension values is based on the "normal cost" of providing benefits. This is the contribution to the pension fund that actuaries have decided is needed each year in order to have enough money to pay benefits in the future. Actuaries take into account many factors, including the fact that some teachers do not stick around long enough to collect benefits. So our estimate is a true average of what teachers collect. If we actually did what Sec. Duncan suggested we did -- counting only teachers with full 30-year careers -- the pension value would be much higher than what we report.
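
A rough, invented sketch of that logic may help: a normal-cost-style average is taken over everyone who enters teaching, including those who leave before vesting, so it sits well below the value earned by a 30-year veteran alone. The career-length probabilities, vesting rule, and dollar figures below are made up for illustration and are not drawn from the report or from any actual actuarial valuation:

```python
# A toy, hypothetical illustration of why a "normal cost" style average over
# all entrants is lower than the benefit earned by a 30-year veteran alone.
# None of these numbers come from the Biggs/Richwine report; they are made up
# to show the direction of the effect.

# Hypothetical career-length distribution for new teachers:
# probability of leaving after roughly N years of service.
career_distribution = {
    3:  0.35,   # leave early, little or no vested benefit
    10: 0.30,
    20: 0.20,
    30: 0.15,   # the "30-year veteran" case
}

def pension_value(years_of_service, vesting_years=5, accrual_per_year=2000):
    """Very rough stand-in for the present value of a pension benefit:
    nothing if the teacher leaves before vesting, otherwise proportional
    to years of service."""
    if years_of_service < vesting_years:
        return 0
    return years_of_service * accrual_per_year

expected_value = sum(prob * pension_value(years)
                     for years, prob in career_distribution.items())
veteran_value = pension_value(30)

print(f"Average pension value across all entrants: ${expected_value:,.0f}")
print(f"Pension value for a 30-year veteran only:  ${veteran_value:,.0f}")
# The average (normal-cost-like) figure is well below the veteran-only figure,
# which is the authors' point: counting only 30-year careers would overstate,
# not understate, the typical pension value.
```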

Sec. Duncan also says that we "appeared to create out of thin air an 8.6 percent 'job security' salary premium for teachers -- despite the fact that hundreds of thousands of education jobs were lost in the recession and teachers continue to face layoffs."

Job security is not the same as a job guarantee. Of course some teachers have lost their jobs, but the data on unemployment show that, over the last decade, public school teachers were only half as likely as workers in other white-collar occupations to become unemployed. That extra security has a value, and our paper describes in detail our method for quantifying it.
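
For intuition only, here is a back-of-the-envelope illustration of why a lower layoff risk carries monetary value. The report's 8.6 percent figure comes from its own methodology, which is not reproduced here; the salary, benefit, and risk numbers below are invented, and the sketch captures only the expected-income piece of the idea:

```python
# A back-of-the-envelope, hypothetical illustration of why lower unemployment
# risk has a monetary value. The probabilities and salary below are invented;
# this is not the method used in the Biggs/Richwine report, only the general
# expected-income intuition behind a "job security premium."

salary = 50_000                 # hypothetical annual salary in both jobs
unemployment_benefit = 15_000   # hypothetical income if laid off for the year

p_unemployed_other = 0.04       # illustrative layoff risk, comparable workers
p_unemployed_teacher = 0.02     # illustrative risk at half that rate

def expected_income(p_unemployed):
    """Expected annual income given a probability of spending the year laid off."""
    return (1 - p_unemployed) * salary + p_unemployed * unemployment_benefit

premium = expected_income(p_unemployed_teacher) - expected_income(p_unemployed_other)
premium_pct = premium / expected_income(p_unemployed_other) * 100

print(f"Expected income, other white-collar worker: ${expected_income(p_unemployed_other):,.0f}")
print(f"Expected income, teacher:                   ${expected_income(p_unemployed_teacher):,.0f}")
print(f"Implied job-security premium: ${premium:,.0f} ({premium_pct:.1f}% of expected income)")
```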

Aside from Sec. Duncan's sudden aversion to unwelcome data, what is most disappointing here is the lost opportunity to find common ground. We agree with Sec. Duncan that a much more flexible, performance-based teacher compensation system needs to be implemented, to reward effective teachers. Let's also agree to be consistent in pursuing evidence-based reform.

Andrew G. Biggs is a resident scholar at the American Enterprise Institute. Jason Richwine and Lindsey Burke are senior policy analysts at the Heritage Foundation.