Curtis L. Frazier
Department of Political Science
University of Houston
Houston, TX 77204-3474
POLS2ET@uhupvm1.uh.edu

Government Complexions and Policy Outputs: European Parties
Southwest Social Science Association Meetings, Houston 1996
Representation depends upon, among other things, parties passing and implementing the policies upon which they campaigned. This study seeks to determine whether parties do indeed meet this criterion for policy representation.
Elections are intended to be a link between the public and the government. The public voices its policy preferences through its votes. However, if the political parties campaign on one set of issues, the public supports them on those issues, and the parties then systematically reverse course, the electoral mechanism for ensuring representation has failed.
This study finds that parties do indeed pass policies that are consistent with their underlying ideology. This is especially true with respect to the size of government and welfare spending.

Parties, Publics and Politics: A Comparative Perspective
Midwest Political Science Association, Chicago 1996
ABSTRACT
The question this paper addresses is ostensibly a simple one: do parties matter to policy outcomes? In the context of European politics, this issue has proven more complicated than we might imagine on a prima facie basis. The debate on this far-ranging topic usually pursues one of two distinct tracks. The first holds that parties do matter: they determine government complexion (left or right), coalitions, and responsiveness to public opinion. Others, however, hypothesize that parties do not matter because environmental constraints, such as socio-economic factors, trade unions, and foreign and domestic political institutions, circumscribe their actions. It has also been hypothesized that parties have become undifferentiated and indistinct, leading to a corresponding similarity in policy outcomes. This paper seeks a middle ground between these competing schools of thought.

The Effectiveness of School Choice in Milwaukee: A Secondary Analysis of Data from the Program's Evaluation
American Political Science Association Meetings, San Francisco 1997


In 1990 Milwaukee became the site of the first publicly funded school choice program, providing low-income parents with vouchers that could be used to send their children to secular private schools. Milwaukee's school choice experiment was evaluated by a research team headed by political scientist John Witte at the University of Wisconsin at Madison. In five annual reports issued between 1991 and 1995, the researchers (hereinafter referred to simply as Witte) reported on the effectiveness of the Milwaukee experiment, as measured by the performance of students on standardized mathematics and reading tests. The senior author has summarized the results of his investigation as follows: "This school experiment . . . [has] not yet led to more effective schools. . . . Choice creates enormous enthusiasm among parents . . . but student achievement fails to rise."

Because this evaluation has, until now, provided the only source of information on the test performance of choice students, many scholars, groups, and foundations, drawing upon its findings, have concluded that school choice is not an effective way of improving the education of low-income, central-city students. The Carnegie Foundation for the Advancement of Teaching declared: "Milwaukee's plan has failed to demonstrate that vouchers . . . can spark school improvement." Albert Shanker, president of the American Federation of Teachers, claimed that the "private schools [in the Milwaukee choice plan] are not outperforming public schools."

For five years, the researchers did not release data from the evaluation for secondary analysis by other members of the scholarly community. In February 1996, however, they made the data available on the World Wide Web. Over the past several months the Center for Public Policy at the University of Houston (CPP) and the Program in Education Policy and Governance at Harvard University (PEPG) have accessed the data, cleaned them of identifiable errors, and organized them into a readable, usable format.

Although data limitations restrict the certainty with which conclusions may be drawn, results based upon the highest-quality information contained within the data set indicate that attendance at a choice school for three or more years enhances academic performance, as measured by standardized math and reading test scores. Correcting for errors in the dataset and using appropriate analytical techniques, the CPP/PEPG analysis of student performance finds that students enrolled in choice schools for three or more years, on average, do better on standardized tests than a comparable group of students attending Milwaukee public schools.

The results indicate that the reading scores of choice students in their third and fourth years were, on average, 3 and 5 percentile points higher, respectively, than those of comparable public school students. Math scores, on average, were 5 and 12 percentile points higher for the third and fourth years, respectively. These differences are substantively significant. If similar success could be achieved for all minority students nationwide, it could close the gap separating white and minority test scores by somewhere between one-third and more than one-half.

CPP/PEPG results are based on data derived from a natural experiment that randomly assigned students to treatment and control groups. The natural experiment was the product of a mandate imposed on the program by the Wisconsin state legislature, which required choice schools, if oversubscribed, to admit applicants at random. This mandate created two randomly selected groups of students, one selected to participate in the choice program, the other not selected. The experimental situation is not unlike that widely practiced in medical research, where individuals are randomly allocated to treatment and control groups. The data are thus quite well suited for drawing scientific conclusions about the effectiveness of the choice program, provided they are analyzed correctly and interpreted cautiously.
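Where the lottery-based randomization holds, the core analysis reduces to a direct comparison of selected and non-selected applicants. The sketch below illustrates that logic in Python; the file name and column names (oversubscribed, selected, math_pctile) are hypothetical placeholders for illustration, not the actual CPP/PEPG variable names or specification.

    # Minimal sketch of an experimental comparison between randomly selected
    # (choice) applicants and non-selected applicants to oversubscribed schools.
    # File and column names are hypothetical placeholders.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("milwaukee_applicants.csv")

    # Keep only applicants to oversubscribed schools, where admission was
    # decided by lottery, so selection is random by construction.
    lottery = df[df["oversubscribed"] == 1]

    treated = lottery.loc[lottery["selected"] == 1, "math_pctile"].dropna()
    control = lottery.loc[lottery["selected"] == 0, "math_pctile"].dropna()

    # Difference in mean percentile scores and a two-sample t-test.
    diff = treated.mean() - control.mean()
    t, p = stats.ttest_ind(treated, control, equal_var=False)
    print(f"Selected minus non-selected: {diff:.1f} percentile points (p = {p:.3f})")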

The earlier analysis of the Milwaukee choice program did not give careful attention to these experimental data. On the one occasion when the experimental data were examined, the researchers failed to employ appropriate analytical techniques. The bulk of their research efforts focused instead on comparisons between choice students and a much less disadvantaged cross-section of public school students. No valid conclusions can be drawn from the comparisons they conducted.


Education Policy Research: Facts or Artifacts?
Midwest Political Science Association Meetings, Chicago 1997
Why do education research projects using the same data produce strikingly different results? In this paper, we use data simulations to demonstrate how different methodological choices can produce different results.
We construct a series of datasets with known properties using randomly generated variables and apply the most commonly used educational evaluation methods to them. Through this exercise, we demonstrate that analysts commonly fail to recognize their underlying assumptions. The most common of these assumptions concerns the homogeneity of samples. Educational evaluation traditionally deals with a treatment being given to an underprivileged group of students, and finding an appropriate comparison group can be difficult. Without a valid comparison group, more stringent evaluation methods must be employed to avoid bias.
Bias within education evaluation studies generally favors the control group because analysts fail to recognize the different pre-treatment slopes of the distinct samples. As a result, a finding of non-significant improvement can actually be interpreted, in some cases, to indicate program effectiveness. For example, if the treatment group improves 5 points per year before the study and 10 points per year after the treatment, then comparing its new performance to a non-comparable control group that regularly improves 10 points per year will yield insignificant results, even though the treatment doubled the group's rate of improvement.
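A small simulation makes this pattern concrete. The Python sketch below uses the growth rates from the example above (5 points per year pre-treatment and 10 points per year post-treatment for the treated group, a steady 10 points per year for the comparison group); the starting levels, noise levels, and sample size are arbitrary illustrative assumptions, not values from any actual evaluation.

    # Simulation of the slope bias described above. Starting levels and noise
    # are arbitrary; growth rates follow the example in the text.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200          # students per group
    years = 2        # years between pre-test and post-test

    # Treated group: lower starting level, 10 points/year after treatment
    # (up from its own 5 points/year pre-treatment trajectory).
    treat_pre = rng.normal(40, 10, n)
    treat_post = treat_pre + 10 * years + rng.normal(0, 10, n)

    # Non-comparable comparison group: higher starting level, steady 10 points/year.
    comp_pre = rng.normal(60, 10, n)
    comp_post = comp_pre + 10 * years + rng.normal(0, 10, n)

    # Naive gain-score comparison: both groups gain roughly 20 points, so the
    # test detects no systematic difference, even though the treated group
    # doubled its own pre-treatment rate of improvement.
    t, p = stats.ttest_ind(treat_post - treat_pre, comp_post - comp_pre)
    print(f"Naive gain-score comparison p-value: {p:.2f}")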
This bias manifests itself in simple test-retest comparisons and ANOVA models as well as in more systematic multiple regression models. This study argues that, in order to overcome these problems, the analyst must work to find a valid control group, properly identify and estimate sources of bias, and implement research designs that are less likely to be affected by any inherent bias.
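One way to estimate such a source of bias, consistent with the argument above though not necessarily the paper's exact specification, is to measure each group's own pre-treatment trend and compare post-treatment gains against what that trend would predict. The sketch below continues the illustrative simulation; all numeric values remain assumptions chosen only to demonstrate the idea.

    # Sketch of a trend-adjusted comparison: judge each group's post-treatment
    # gain against the gain implied by its own pre-treatment slope.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 200

    # Two pre-treatment measurements establish each group's own growth rate.
    treat_pre1 = rng.normal(35, 10, n)
    treat_pre2 = treat_pre1 + 5 + rng.normal(0, 10, n)    # 5 points/year before treatment
    treat_post = treat_pre2 + 10 + rng.normal(0, 10, n)   # 10 points/year after treatment

    comp_pre1 = rng.normal(50, 10, n)
    comp_pre2 = comp_pre1 + 10 + rng.normal(0, 10, n)     # steady 10 points/year
    comp_post = comp_pre2 + 10 + rng.normal(0, 10, n)

    # Deviation of each student's post-treatment gain from the gain implied by
    # the group's own pre-treatment slope (a simple difference-in-trends check).
    treat_dev = (treat_post - treat_pre2) - (treat_pre2 - treat_pre1).mean()
    comp_dev = (comp_post - comp_pre2) - (comp_pre2 - comp_pre1).mean()

    t, p = stats.ttest_ind(treat_dev, comp_dev)
    effect = treat_dev.mean() - comp_dev.mean()
    print(f"Trend-adjusted treatment effect: {effect:.1f} points (p = {p:.4f})")

Under these assumptions the naive comparison masks the effect, while the trend-adjusted comparison recovers the roughly 5-point acceleration in the treated group's growth, illustrating why accounting for pre-treatment slopes matters.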