What Is Wrong with the OPT & OCAP?
(Ohio Proficiency Test & Ohio Competency Analysis Profile)

by Anita B. Hoge


Background--What’s all the fuss about Ohio testing?

The Ohio Proficiency Test is called the "OPT" and the Ohio Competency Analysis Profile is called the "OCAP." These tests are not about academic testing. That is a myth. Since massive federal entitlements for education were accepted in Ohio, a national curriculum and national testing have changed our locally-controlled neighborhood schools. In fact, the definitions for what will be tested in reading, math, citizenship, and science have changed. Academics have been replaced by a testing method called "beyond text" or "extended meaning." This means the test is measuring a child’s values, opinions, attitudes, or beliefs. Tests of this type, including the OPT and OCAP, are psychological in nature and cannot be objectively scored. Usually, this type of testing requires a judge to "hand score" the test; no score key is available.

When subjective (affective) test questions are used, parents must ask these questions:

What happened to teaching the 3 R’s?

How is "honesty" measured?

How much is too much or not enough "honesty" to pass the OPT? To receive a diploma?

How is "honesty" scored? (Our legislators should also be required to take this test!)

Who decided what the standard for "honesty" would be?

Was this a local decision or a higher-up, top-down decision?

How will a student be remediated on "honesty"?

What type of curriculum would change my child’s attitudes and values? Is "mental health" the new curriculum?

Is this legal?

Is my child protected from invasive and indoctrinating programs driven by political agendas?

How am I protected as a parent?

Who has the ultimate authority over my child -- the parent or the State?

The OPT is administered across Ohio every spring, in March. All 4th, 6th, 9th, and 12th grade students in every school take the state proficiency tests in reading, writing, mathematics, citizenship, and science. Beginning in July 2001, any fourth grade student who fails the 4th grade proficiency test will NOT be promoted to the fifth grade. The 1998 Ohio Legislature passed into law Senate Bill 55, Sec. 50.16 (4)(b) and Sec. 3313.61, which state that students must pass the 9th grade proficiency test to receive a high school diploma. By 2004, all students must pass the 10th grade proficiency test to receive a diploma. This is the beginning of a "state diploma," resulting in the loss of local control.

The state of Ohio must submit an Ohio Consolidated State Plan that is aligned to the federal standards as part of fulfilling the requirements for receiving federal money. Goals 2000: Educate America Act, the Improving America’s Schools Act, the Workforce Development Act (school to work), and the Carl Perkins Vocational Act are some of the federal entitlements which mandate that the Ohio plan be closely aligned to the federal standard setting process.

Under the Improving America’s Schools Act, ALL students must be evaluated by federal Title I criteria. Title I is designed to address deficiencies in "educational opportunity" and to regulate special education availability. Instead of being limited to the original group of students, the Title I criteria now identify every individual child in the school system and monitor each child to see if he/she is meeting the national standards. The accountability instruments that measure these standards in Ohio are the Ohio Proficiency Test (OPT) and the OCAP. Students identified by test results as not meeting the standards will be provided interventions or remediation. Higher-order thinking skills and critical thinking skills, such as abstract reasoning and problem solving, are assessed according to the Improving America’s Schools Act. These terms are buzzwords indicating that subjective and affective material is being tested, not solid academics.

The Ohio Competency Analysis Profile (OCAP) test is a national model for "Work Keys" assessments, part of the federal school-to-work agenda, and will be integrated into the OPT. This federal initiative offers monies from school-to-work bills to assess "workplace readiness" skills identified by the Department of Labor, such as "honesty" and "integrity." These are also called "employability skills."

The OCAP is part of a national pilot testing system that measures subjective (affective) areas that cannot be tested or scored objectively. These ambiguous standards will initiate the criteria for a "Career Passport." The Career Passport will replace the high school diploma and will be issued by the State, not the local district. This new state diploma is destined to be used for job entry, college admissions, drivers’ licenses, or specific scholarships or tuition credits. This is all controlled and directed by State overseers who will make the choices for the future of our children.

Guidelines will be imposed for quotas on specific job clusters, similar to what has been done in socialist countries in Europe. The federal workforce development grant specifically uses an 80%-20% ratio: 80% to determine who will be allowed entry into the workforce, 20% to determine who will be allowed entry into college. This is why the tests, and what will be tested on them, are so important. Young Ohioans must take and pass these "high stakes" Ohio Proficiency Tests to take part in this federal scheme. So, who will decide your child’s future? The State.

What Is a High Stakes Test?

A "high stakes" test is a test for which there are consequences when a student does not perform well. With the passage of SB 55 and HB 282, the OPT is now a high stakes test in Ohio. Students are required to pass the controversial OPT in order to receive a diploma. To receive one, students shall demonstrate "proficiency" in reading, writing, math, science and citizenship on state proficiency tests administered at grade 10. Also, 4th graders must pass the 4th grade proficiency test to be promoted to the 5th grade. This begins in July 2001.

SB 55 also sets up criteria for classifying a school as "effective," "continuous improvement," "academic watch" or "academic emergency" based upon OPT scores. A "report card" will be issued for each district. There are 27 performance indicators on which a district is graded; 25 of those indicators are based on OPT scores, and the other 2 are graduation and attendance rates. A school that meets 26 or 27 indicators is "effective." A score of 14 to 25 is listed as "continuous improvement." A score of 9 to 13 will put a district on "academic watch." A score of 8 or lower places a school in "academic emergency." Any district in the watch or emergency classification has 120 days to set out a three-year continuous improvement plan.
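The banding described above amounts to a simple classification rule. A minimal sketch, assuming the 25 OPT-based indicators and the 2 non-test indicators are each simply counted as met or not met (the function and parameter names here are hypothetical, for illustration only):

```python
def classify_district(opt_indicators_met, met_graduation, met_attendance):
    """Classify a district under the SB 55 report-card bands.

    opt_indicators_met: how many of the 25 OPT-based indicators were met (0-25)
    met_graduation, met_attendance: whether the two non-test indicators were met
    """
    # Total score out of 27 possible performance indicators
    score = opt_indicators_met + int(met_graduation) + int(met_attendance)
    if score >= 26:
        return "effective"
    if score >= 14:
        return "continuous improvement"
    if score >= 9:
        return "academic watch"
    return "academic emergency"
```

Note that 25 of the 27 points a district can earn come directly from OPT scores, which is why the test dominates the rating.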

These "high stakes" tests are not academic achievement tests. Children’s test scores on these proficiency tests are a poor indication of solid academic instruction and content knowledge, yet individual children, teachers and school districts will be penalized or rewarded almost solely on the basis of these test scores.

Other states have implemented similar legislation called "academic bankruptcy" ("accountability") in which the State can go into the local district, remove the locally elected school board, administrators and teachers and replace them through the Governor’s office with an appointed board or re-hired personnel. So much for local control!

What Is Wrong with Using the OPT Test for Graduation?

1. The standards are ambiguous and still in draft form.

The Ohio Department of Education has control of all aspects of the test. First, the Department develops the test and determines what will be tested. Second, it selects the standards for each of the state learner "outcomes" and creates the "rubrics" (scoring criteria) to determine what is a passing grade. An example of a learner outcome is "honesty." The State determines how much honesty is needed to receive a passing score.

The Ohio Department of Education arbitrarily decides what is an advanced, proficient, or partially proficient score, although these criteria are amazingly similar to the federal criteria in Goals 2000. No one really knows for sure what standards have been set by the Department. What does a student really have to know and do to be proficient on a test that is open-ended, includes subjective questions, and uses essay questions for that standard? In the end, it is the Department of Education that not only controls the entire process but also determines whether or not your child will receive a diploma – a "Career Passport." The procedures for equating, scaling, combining scores and reporting data are not legally defensible. The Cleveland Plain Dealer (2/25/00) reported that at "assembly-line pace, graders read as many as 200 hand-written essays per shift and receive bonuses of up to $1 an hour for speed, accuracy and good attendance."

2. Essay formats and open-ended questions cannot be scored statewide by different people and still yield technically accurate, reliable comparisons.

Human error in scoring could determine whether or not a student would receive a diploma. A February 25, 2000 Cleveland Plain Dealer article entitled "Part-time workers check 30-35 exams an hour" states: "$8.50-an-hour temporary workers gather each spring to decide whether your son or daughter gets a high school diploma. With a college degree and two or three days of training, these temps – including a one-time banker, a makeup artist and a pizza maker – assume a role that many parents, teachers and even some of the state’s top leaders mistakenly think is performed by educators."

The judges and scorers score the tests according to their own opinions or biases. Scorers can inevitably differ in their opinions. Scorers have admitted that they get tired after 6-1/2 hours of scoring "at assembly line pace."

In other words, the test is phony. This test, which affects the entire future of a student, could rest solely on the basis of scorer subjectivity and "political correctness." The test cannot stand on its own merits.

Tests that impact the future of a student should not be subjectively evaluated! "But, there are questions, questionable questions on these tests….", said Justice Resnick, Ohio Supreme Court, Nov. 1996 (Rea v. Ohio Department of Education, 96-1997).

What Impact Does High Stakes Testing Have on Students?

"These tests are about failure. They are telling children in our state, you don’t measure up. You fail to meet basic minimum standards. For those who fail, that’s an important and tragic consequence in their life," said Justice Pfeifer, Ohio Supreme Court, (Nov. 1996).

The National Academy of Sciences recently reported its findings on using minimum competency tests to retain students in their current grades or to deny them high school diplomas. The report called "High Stakes Testing: Tracking, Promotion, and Graduation" found:

On average, children who were retained were worse off, in terms of both personal adjustment and academics, than their promoted counterparts.

Students who repeated a grade were significantly more likely to drop out of high school than students who weren’t repeating a grade.

Failing a minimum competency graduation test significantly increased the likelihood that a student would leave school, especially for minority and/or bilingual students with good grades.

There is little evidence that graduation exit exams have been properly validated against the defined curriculum and actual instruction.

Whether through negligence or indifference, the public has been misled into thinking that student achievement would improve substantially as a result of implementing a graduation test. On the contrary, there are substantial reasons to believe that Hispanic and African-American students will continue to suffer great harm from the expansion of the OPT with unproven benefits.

A Youngstown State University professor, Randy Hoover, was recently quoted in the Akron Beacon Journal (2/28/00) regarding a study he conducted on the test. He noted that the school district report card "is extremely misleading, and the general public should be outraged about its use." Hoover’s study "found that test scores were most strongly connected to economic, social and environmental factors beyond the control of classroom teachers." From 1993 to 1997, nearly 70% of all the students held back in Texas public schools were Hispanic or African-American. The agency provided no evidence to confirm that those students benefited from the decision to retain them.

It is very clear that high stakes tests are biased against minorities and high-poverty schools. This certainly indicates the need to think very seriously about equity issues, the need to address poverty and the concentration of poverty, and the impact of this system on the children and schools which fall below the line in spite of their best efforts.

"Statistics are now available to prove that states WITHOUT high stakes tests outperform states that have them." ("Fair Test Review," Ed Week, 1998)

Is There an Obsession to "Teach to" the Test?

The state of Ohio has a state "model curriculum." In fact, HB 850 states, "The State Board of Education shall establish model competency-based programs."

Performance objectives, the model curriculum for instruction, and the recommended assessment methods must all be aligned with one another. Aligning teacher in-service training to these goals rounds out the state control of education. The Ohio Department of Education developed the test first, without the state learner outcomes in place. However, it is only common sense to realize that a test must be based on a set of standards. The Ohio State Consolidated Plan and the Vocational Education State Plan rely on employability standards and the National Assessment of Educational Progress (NAEP). This creates the hand-in-glove criteria for modeling the national test.

This alignment to state and federal goals is called "systems management" and is based on applying Total Quality Management (TQM) practices to schools and children. Top-down federal control is designed to monitor all aspects of the education process -- student, curriculum, testing, teacher, school, school district, state, nation. The OPT/OCAP will meet the accountability standards for each level of that process. Education Management Information System (EMIS) will be the data system that will track each component.

The release of the Integrated Technical Academic Competencies model by the Ohio Department of Education includes a full range of resources -- everything from sample lesson plans to critical information on classroom activities and games, professional development programs, and an array of internet resources. This model curriculum identifies what must be taught to score well on the OPT and OCAP assessments. These tests become the driving force for all schools to change their curriculum and to gear their teaching activities toward "teaching to the test." Teachers must teach to the test or they will not meet the State standards.

There is no evidence that "high stakes" testing narrows the gaps in achievement. In the worst cases it produces an obsession with the test and test preparation, or a manipulation of testing conditions. The message of the system is very clear -- nothing else matters throughout the entire educational experience unless the student passes this one test.

A teacher’s familiarity with the test content can artificially inflate students’ test scores. This apparent rise in test scores would NOT be a genuine rise in student accomplishment, but would reflect a manipulation of test taking techniques.

Since test taking is a teachable skill, particularly with the competency-based model of testing, teaching to the test can replace and displace teaching academic subjects. Clearly, this system will narrow the curriculum content. Whatever is not tested will be eliminated from academic instruction. Students will be dumbed down to fit the narrow criteria of the test.

Lou Rene, a 4th grade teacher with 28 years of experience, was recently quoted as saying: "I can no longer teach what I think the students need. I have to teach them what the state tells me to." (Vindicator, Feb. 1, 2000)

Are the Tests Actually Testing What Is Being Taught in the Classroom?

The claim that the tests are based on academic standards is misleading. The OPT test was developed first, before the standards were in place. There are legal ramifications for the Department of Education and the State Board if a student can prove that what was tested on an exit exam or high stakes test like the OPT was never taught. One Ohio legislator is currently proposing legislation to prevent parents from suing the State. The indication that SB 55 (Sec. 3302.04) will give schools three years to adjust their curriculum and teaching methodology is very revealing. The test becomes the gauge for what will narrowly be taught; "just teach enough for kids to pass the test" is the message that the State is giving teachers.

There has been enormous curricular diversity across the state, which is a reflection of true local control, with what is taught in different school districts varying substantially. Not only "what" is taught from school to school can differ, but "when" or "how" a subject is taught and what time of the school year it is taught, are major points of diversity. With the advent of the OPT and OCAP, this diversity could potentially constitute a major mismatch between what is tested and what is taught. In order to ensure that students will be eligible for graduation, schools must now ensure that their curriculum aligns to the OPT, the determiner of eligibility for a diploma.

Are the Incentive Grants Legally Defensible?

Included in the legislative budget for the past few years are funds that would be awarded to school districts which improve their scores on the OPT. These incentive grants are based on accountability data relating to:

a. Achievement -- improvement in achievement as determined by scores on the OPT. There are 25 indicators taken from OPT scores.

b. Effort -- improvement in school graduation and attendance rates. There are 2 indicators: attendance and graduation percentages.

The State is using incentives to create an illusion that general "education" is taking place. The controversial OPT is being used as a measuring stick for that instruction. The test cannot be accurately used to determine what is actually being taught unless the State limits curriculum choices to what is tested. It is obvious that school districts are being given incentives for teaching to the OPT test and must abandon their own locally determined curricula.

The questions that school districts should ask regarding the use of the OPT as an accountability measure for incentive grants are:

1. Are the tests legitimately comparable from one year to the next? No. The tests are administered in grades 4, 6, 9, 10, and 12, and the following year they are administered to an entirely different set of students in those same grades. Yet these scores are compared for the incentive grants. Different students are being compared using totally different variables: one year’s class may differ from another’s in aptitude, socioeconomic status, background, or exposure to advanced programming. There also may be differences in the number of special needs students compared to other years. Valid comparisons would require following the same students over several years so that compatible data are used.

2. Is the content of the assessments controversial? Yes. The content of the assessments is controversial because the essays are matched to non-academic standards (criterion-referenced) and are open-ended. Because different students take the test from year to year, and the content of what students write in the essays changes from year to year, the results will differ every single year the test is given. Since there are no right or wrong answers, an arbitrary evaluation of what a student happens to write at a particular sitting is not an appropriate basis for awarding incentive grants or for the high stakes criteria for graduation.

3. Is the scoring of essays controversial? Yes. The scoring of the essays has been controversial because of the nature of the essay and open-ended questions. Every student’s writing sample will differ from every other student’s. Because of these differences, a scorer must judge where a response falls on a rubric before a number score can be given to the open-ended question. This judgment is guided by an "anchor," or samples of actual student work; there is no answer key. A "process" is being measured and evaluated, not an objective answer. Therefore, the individuality or creativity of each child is being measured against the State’s process. There is no reliability in tests that are hand scored according to the opinion of a judge. The scores are totally arbitrary, yet they may be the determining factor in a student being denied graduation or promotion, or identified for remediation. Not only is this system of accountability inappropriate, it does not meet the legally defensible criteria for a credible test that measures academic achievement. It is not a fair test and should not be used as a high stakes measure, nor should it be used for incentive grants based on data from one year to the next.

4. What about age appropriateness of the tests/assessments? Age appropriateness is another legal factor involved with the tests. At a recent town hall meeting in Warren (Amanda Davis, Vindicator, Feb. 1, 2000), Randy Hoover, a professor of teacher education at Youngstown State University, stated that he has documented evidence that the 4th Grade Ohio Proficiency Test is invalid because of age inappropriateness. Dr. Penny Arnold, assistant professor at Ashland University, has noted that the test questions were not age appropriate. Some of the questions are based on information that would only be appropriate for upper grade levels, 8th or 10th grade, assuring failure.

What’s In A Score?

The results for the OPT are sent back to the schools in 45 to 60 days after the test is administered, which illustrates the inevitable opportunity for inaccuracy in scoring the open-ended and essay portions of the tests. If a student scores below the state standards, that student will be identified as "at risk" for remediation, marked to receive intervention guaranteed to meet the state standards.

In the Reading and Writing sections, each student will write a separate and completely different answer to each essay or open-ended question. However, the State will try to uniformly score these essays, placing all students in a pass/no-pass position. How is this done in a fair and objective manner with open-ended questions and open-ended responses? How can open-ended and subjective questions be answered "correctly"?

The test results do not tell the students anything substantive. No one knows why a student passed or failed. No one really understands what the scores mean. This is not fair to the students, the teachers, or the school districts, all of whom will be rewarded or penalized based upon the students’ test scores.

The test is really a fake. We believe that the test should be opened up to the public. The tests should be returned to the students along with the scoring so that parents and students can judge for themselves the impact of these tests and whether or not the scoring is arbitrary. To think that an open-ended test could determine whether a student receives a diploma, or is eligible for college entry or job entry, is ludicrous and scary.

What Impact Do the OPT Scores Have on Schools?

The results of the state assessment are used to produce "school report cards." Schools must score above 14 on the performance standards or else they will be mandated to develop a continuous improvement plan. The stress that the report cards produce on administrators, principals and teachers must certainly be transferred to the students! Many teachers or administrators are not speaking out about the OPT because of fear for their jobs.

What Impact Do the OPT Scores Have on Teachers?

Because a "total quality management" (TQM) system is being used for quality control, the teachers will be evaluated according to how well their students perform on the State assessment. An integrated management system will identify immediately those who are not measuring up to State standards. If students in a certain class do not do well, the teacher can easily be identified as not "teaching to the test." Teachers will be forced into compliance; they may be either sanctioned or forced to resign. The EMIS, Ohio’s electronic data tracking system, is prepared to monitor all testing results through cross-matches of students, curriculum, teachers, and schools. It may be worth noting that the state cannot legally identify any individual student without violating federal regulations. However, there are indications that the contractors do not have to abide by these regulations.

There are also no protections for teachers identified as not complying with federal mandates under state legislation. Under the federally-adopted Malcolm Baldrige Award criteria (TQM), the accountability system that holds teachers responsible for teaching to the test and for the results, teachers who do not comply are removed from the system. This process is known (incredibly!) as "waste management elimination." Teacher evaluations based on state assessment results have been done in other states. We can expect to see this happening in Ohio.

Are "Academic Watch" and the "State of Academic Emergency" an Attempt by the State to Take Over the Schools?

The new legislation in Ohio’s SB 55 is clearly calculated to identify, through OPT scores, those schools that are not in step with the federal agenda. These schools will be subject to a "continuous improvement plan" for three years to change their strategies and resources to meet State standards. If a school district is determined to be academically bankrupt, it will be placed on "academic watch." This State power grab is based on the phony criteria of the OPT.

All of the preceding points have explained the tremendous problems with the OPT test. It is imperative that school boards understand the significance of the top-down management, implemented under federal legislation, which usurps local control. Goals 2000, the Workforce Development Act (school to work), the Improving America’s Schools Act, and the Carl Perkins Vocational Education Act have totally eliminated "local control" and any of the original power of the elected school directors except for levying taxes. There are already over 160 schools that have been identified under "academic watch." This is an indication that something is wrong with the test -- not the schools.

"Storm the State House."

Maggy Hagan, a Garfield Elementary teacher, organized the town hall meeting in Warren on Feb. 1, 2000, to discuss the pitfalls of the OPT. No one spoke in favor of the tests, including the legislators who were present. In fact, a legislator from Youngstown plans to introduce a resolution asking that all testing at all grade levels be stopped until the process can be investigated fully. "The tests have no worthwhile impact on educating our children." (Vindicator, Feb. 1, 2000)

What you can do:

1. Contact your legislator. Tell him/her to support a resolution to STOP the OPT/OCAP tests NOW!

2. Know your rights as a parent. Consider carefully the choice to OPT your child OUT of the Ohio Proficiency Test.

3. Gather information on the testing procedures, content, and laws governing the test. Get internet contacts and visit websites that can help you connect to other Ohioans. The web site address for O.P.T. – OUT is:


4. Get help. Seek advice. For more information contact:

Steve Rea 330-332-9386
Robert Melnick, Esq. 330-744-8973
Anita Hoge 330-665-4167