ACCOUNTING PERSPECTIVES


Spring 1995, Volume One, Number One
THE DYSFUNCTIONAL ATMOSPHERE OF HIGHER EDUCATION:
GAMES PROFESSORS PLAY

D. Larry Crumbley, Louisiana State University

Abstract: A growing body of research questions the validity of summative student evaluation of teaching (SET). Yet a persuasive case can be made that the increased use of SET for administrative control purposes has caused grade inflation. By inflating grades and deflating course work, an instructor is more likely to receive positive evaluations. Accounting departments may be more vulnerable to lawsuits as higher education continues to inflate grades. This article covers the numerous ways instructors attempt to maximize their SET scores and offers recommendations to curb the dysfunctional aspects of SET. 



 

INTRODUCTION

Higher education is experiencing the simultaneous phenomena of widespread use of student evaluation of teaching (SET), grade inflation, student moral decline (resulting in widespread cheating and plagiarism), and steadily declining student motivation. A panel led by former Senator William E. Brock recently stated that the U.S. undergraduate education system is "a prescription for decline." This group said that colleges and universities are granting degrees to people who lack knowledge and skills that were taken for granted in a high school graduate not long ago. Education Secretary Richard Riley called the report "a wake-up call" for higher education [Henry, 1993]. A persuasive case can be made that the increased use of SET has caused higher education to become dysfunctional, resulting in a steep, slippery slide in output quality [Winsor, 1977; Renner, 1981].

As more and more research questions the validity of summative SET as an indicator of instructor effectiveness, ironically there has been greater use of SET [Newton, 1988; Wright et al., 1984; Powell, 1977; Dilts, 1980; DuCette and Kenney, 1982; Howard and Maxwell, 1982; Worthington and Wong, 1979; Brown, 1976; Porcano, 1984; Dowell and Neal, 1983; Stumpf and Freedman, 1979]. A summative SET contains at least one question which serves as a surrogate for teaching effectiveness. In 1984, two-thirds of liberal arts colleges were using SET for personnel decisions; by 1993 the figure was 86% [Seldin, 1984; Seldin, 1993]. Most business schools now use SET for decision making: 95% of the deans at 220 accredited undergraduate schools "always use them as a source of information," but only 67% of the department heads relied upon them.1 Yet an instructor's grading policy (easier grading = higher evaluations) and course difficulty (easier course = higher evaluations) can be significant factors in determining an instructor's evaluations. Certainly many instructors believe that this leniency hypothesis [Newton, 1988] is valid and take corrective actions to improve their evaluations. At least one-third of the respondents in a 1980 survey indicated that they had substantially decreased their grading standards and level of course difficulty [Ryan et al., 1980]. Only 20.4% of 559 accounting professors in 1988 agreed with the statement that SET are indicative of an instructor's teaching and should be used directly in calculating annual salary increases [Bures et al., 1990]. 



DYSFUNCTIONAL BEHAVIOR

If an instructor can choose teaching styles, grading difficulty, and course content, he or she will prefer the choices that are expected to result in higher SET scores. According to Medley [1979], "if teachers know the criteria on which decisions affecting their careers are based, they will meet the criteria if it is humanly possible to do so." As an instructor inflates grades, he or she "will be much more likely to receive positive evaluations" [Worthington and Wong, 1979]. Many enhancement choices are anti-learning, resulting in grade inflation, course work deflation, and pander pollution (PP) behavior. Pander pollution may be defined as purposeful intervention by an instructor, inside and outside the classroom, intended to increase SET scores but counterproductive to the learning process. Widespread use of SET has bred a vast army of pandering professors engaged in pander pollution semester after semester, and the pollution grows each year as instructors try to enhance their SET scores.2

SET management has adverse consequences, and universities should attempt to eliminate them. Many instructors devote much of their teaching time and effort to massaging SET results for administrator and student consumption. The cost to an instructor of SET enhancement by inflating grades or decreasing course work is minor, because few instructors are penalized for giving high grades or deflating coverage. Since many of the students in college today are of much lower quality, these same students "are more likely to modify evaluations in response to grade manipulations. [I]t is obvious that, given objectively equivalent teaching skills, lenient markers will tend to receive more positive evaluation ratings than stringent markers" [Worthington and Wong, 1979]. The concept is simple: summative SET + PPs = US, where US is undereducated students.

Many of the steps taken by instructors to improve their SET are counterproductive to teaching effectiveness and the learning process. These dysfunctional techniques include grade inflation, course work deflation, and keeping grade expectations high. For example, one freshman history professor calls it "hosing students": "We give easy or no grades during the semester, distribute the SET, and then give a tough final examination to weed out the students." Another sociology professor gives a few minor quizzes and short papers during the semester, collects the SET data, and then assigns a difficult term paper project. These forms of faculty deception work best in required freshman and sophomore classes, where limited section offerings force students to take the difficult instructor.

There are four major types of dysfunctional behavior caused by a control system such as SET [Lawler and Rhode, 1976]:

Rigid bureaucratic behavior

    • Behave in ways which will help an instructor look good on the measures that are taken by the SET system (e.g., inflate grades and deflate course work).

Strategic behavior

    • Temporary action designed solely to influence the SET system so the instructor will look acceptable (e.g., keep grade expectations high as long as possible, give parties, etc.).

Invalid data reporting

    • Invalid data about what can be done and invalid data about what has been done (e.g., stuff the SET system, pander pollution).

Resistance

    • SET is seen as a threat to the satisfaction of many needs and significantly changes the power relationships in the department (e.g., pits one friend against another, breaks up social groups).

There are numerous ways that instructors maximize their SET scores:

1. Inflate grades. In one southern school of business, the average GPA given in business classes is 3.07. 
2. Cover less material. 
3. Give easy examinations (e.g., true-false questions; broad, open-ended discussion questions; take-home exams; open-book exams). (Instructors have generally deserted tough examinations and research projects.)3 
4. Give parties (e.g., food, donuts, beer, etc.). 
5. Give financial rewards. 
6. Spoonfeed the students. 
7. Give answers to exam questions beforehand. 
8. Don't risk embarrassing students by calling on them in the classroom. 
9. Hand out sample exams. 
10. Grade on a curve. 
11. Give SET early and then give hard exams, projects, etc. 
12. Keep telling students how much they are learning and how smart they are. 
13. Delete exams, projects, and grading altogether. 
14. Teach during bankers' hours (9:00 - 3:00). 
15. Give same exams each semester. 
16. Avoid trying to teach students to think (e.g., avoid the Socratic method). 
17. Provide more free time (e.g., cancel classes on or near holidays, Mondays, Fridays, etc.). 
18. Avoid cumulative final exam. 
19. Do not use overheads. If overheads are used, they must be simple. Copies of complicated overheads must be given to students. 
20. Where multiple classes are taught by different instructors, always give the highest GPA to your students. 
21. Allow students to determine grade, coverage, and difficulty. 
22. Teach in classes where common exams are used; then help students pass "this bad exam" which you didn't prepare. Students learn to take the classes taught by the course coordinator (who prepares the exams). 
23. Avoid honors courses. A student who expects an A and then receives an A is more likely to credit himself or herself for the good grade (rather than the instructor). Conversely, a student not normally expecting a good grade will reward an instructor with higher evaluations when the student expects to receive a high grade from that particular instructor.

Why become a better teacher (from the point of view of learning), when an instructor can easily inflate grades, deflate course work, and keep grade expectations high? There is a universal assumption among administrators that an increase in SET scores is good and a decrease is bad. This myth is naive and dangerous. Many pandering professors must be forced to accept lower SET scores by deflating their grades and restoring their course work content.

A high SET score may indicate a poor teacher. For example, at a major west coast private university (U. of Southern California), an administrator took control of a master of taxation program and decided to review the effectiveness of his instructors by visiting their classrooms.4 One instructor had consistently scored 5 out of 5 on his SET in prior years, so the administrator waited until the last class period to review this "superior" instructor's estate and trust taxation class. The administrator found that the instructor had yet to introduce the concept of a complex trust (which should have been introduced before the middle of the semester). Upon further investigation, the administrator found that the instructor (a partner in a Big Six CPA firm) was taking all of his students to a local bar after every class and feeding them dinner and drinks. The instructor was merely using the classroom to recruit students, not to educate them. High SET scores also may indicate that an instructor is giving the students easy exams, covering little content, spoonfeeding the students, or giving them the answers to the exams (especially in a regulated course).

There is another universal assumption that students must like an instructor in order to learn. Not true. Even if they dislike you and you force them to learn through hard work and low grades, you may be a good educator (but not according to SET scores). SET measure whether or not students like you, not necessarily whether you are teaching them anything. Instructors should be in the business of educating and teaching students--not SET enhancement. Until administrators learn this simple truth, there is little chance of improving higher education. "Teaching is a professional relationship, not a popularity contest. To invite students to participate in the selection or promotion of their teachers exposes the teacher to intimidation" [Frankel, 1968, pp. 30-31]. 



DYSFUNCTIONAL BEHAVIOR EXAMPLES

An example of this dysfunctional behavior and grade inflation occurred in a business department (Texas A&M). One rigorous instructor in a basic business class gave D's and F's and received SET scores in the one range (on a 5-point scale). She was removed from that class and placed in a nonrequired graduate course, where she proceeded to give a 50-50 split of A's and B's each semester. She has received SET marks as high as 4.9 since making this adjustment. She tells the students at the beginning of the semester that only A's and B's will be given. This administrative strategy of assigning tough graders to nonrequired courses allows students to force easy grading by self-selecting away from the tougher instructors.

For example, at one university [Texas A&M], SET are required in liberal arts and business administration courses every semester (except summer). The SET scores are placed in the library for the convenience of students. Furthermore, the grade distributions of all faculty members are made public in the Office of Student Affairs each semester. As is usually the case, the staff and faculty are not allowed to evaluate the administrators on a semester or yearly basis. What is good for the goose is not good for the gander.

To make matters worse, the one department that should know better has the following statement in the Handbook for Undergraduate Accounting Majors: "The Department of Accounting fully participates in the campus-wide teacher evaluation survey conducted each semester. Results of the survey are reviewed by both the department head and the instructor and are an important component in personnel decisions (e.g., merit raises, promotions)" [emphasis added]. The handbook even shows the "relative instructor rating" [on a 5-point scale] compiled from the departmental course evaluations for the 86 sections offered by the department during the Spring semester, 1992. Throughout the years, SET data have been routinely used by all tenure and promotion committees of the department. This SET-driven department has taken the "student is a consumer" concept to the extreme, to the detriment of learning.5 The advice of J.D. Newton is appropriate: "Departments of accounting should practice what they preach" [Newton, 1988, p. 12]. [Texas A&M has now lost its tenure.]

Even Peter Seldin [1993, p. 17],6 a strong advocate of SET, states: "Don't take assessment data gathered for the purpose of improving teaching performance and then use it for tenure and promotion decisions. Confidentiality of the data must remain inviolate. Should data obtained for the purpose of strengthening performance surreptitiously be used for personnel decisions, it will have an immediate chilling--even fatal--effect on the credibility of the entire faculty evaluation program."

An example gives insight into the pressures and reactions caused by SET among young, nontenured instructors. A young faculty member teaching basic accounting [regulated exams and curve] issued grades in the Fall of 1991 with an overall GPA of 2.070 [SET of 3.86, next to the lowest score of 3.61, where the highest was 4.91]. In the Fall of 1992, he taught in the masters program and gave 43% A's and 57% B's. Back in the basic accounting course in Spring 1993, he had a 3.333 GPA [the high/low of other instructors were 4.859 and 2.323] with an SET score of 3.38 [lowest of 12 classes and 6 instructors]. But in the masters program he gave 80% A's and 20% B's, with an SET score of 4.73, beating out two senior professors [who had scores of 4.44 and 3.0]. Thus, his pander pollution index was 1.35 [4.73 - 3.38]. Still another instructor in the Spring of 1993 taught a regulated course with a 2.621 GPA and a 2.96 SET. In an unregulated course she gave 87.5% A's and 12.5% B's, with a 4.33 SET. Her pander pollution index was 1.37 [4.33 - 2.96]. Certainly an instructor can "buy" good ratings by using a lenient grading system.
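
The pander pollution index used in these two examples is simply an instructor's SET score under lenient, unregulated conditions minus the same instructor's SET score under regulated, rigorously graded conditions. A minimal sketch of that arithmetic in Python, using only the figures reported above (the function and variable names are illustrative, not part of any formal model):

    def pander_pollution_index(set_unregulated, set_regulated):
        # Difference between an instructor's SET score in a lenient, unregulated
        # setting and the same instructor's score in a regulated, tough-graded setting.
        return set_unregulated - set_regulated

    # Figures reported in the two anecdotes above.
    first_instructor = pander_pollution_index(4.73, 3.38)   # 1.35
    second_instructor = pander_pollution_index(4.33, 2.96)  # 1.37
    print(round(first_instructor, 2), round(second_instructor, 2))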

Another anecdotal example is the opportunistic behavior of a nontenured, tough-grading instructor at a large inner-city university who was receiving SET scores of approximately 4.5 (out of 7). When his department head reprimanded him, he improved his SET scores to 6.3 within one semester. His pander pollution solutions were typical. He instituted a pizza party near the end of the semester and incorporated three pre-tests during the semester. Each pre-test contained 200 exam questions and was distributed one week before each exam. Each exam was composed of 50 multiple-choice/true-false questions taken from the pre-test. Thus, by the end of the semester the student knew what his or her high final grade would be before the distribution of SET. There was an optional final exam, but no one took it because there was no pre-test. The moral: when dealing with SET, the nice gals or guys finish first, and most rigorous instructors finish last.

At a major east coast university the person in charge of the SET computer system programmed the system to automatically increase his and a friend's SET scores each semester and decrease another colleague's SET scores. Oh, the games that professors play. If given the chance, instructors will add favorable questionnaires to the system. Indirectly, this occurs by an instructor becoming extremely friendly with students, giving parties, food, etc. 



THE CUSTOMER MYTH

Complaints are often voiced that students are not qualified to evaluate many areas of instructor effectiveness. A senior in high school is not qualified to evaluate high school teachers, yet 4 or 5 months later this same student, now a college freshman, has supposedly developed the maturity and judgment to evaluate higher education.7 At the same time that our student population is becoming less motivated and more dishonest [Fishbein, 1993, p. A52; White, 1993, p. A44], we continue to inflate grades and give their evaluations more and more credibility. Some administrators explain this dichotomy by stating that students are our customers. Students are not our customers--they are our products. We need to improve students' value by educating them. Society and employers are our customers.

Steven M. Cahn, Provost and Vice-President at the City University of New York, debunks this ludicrous consumer argument by pointing out that passengers on an airplane do not certify pilots, and patients do not certify physicians. "Those who suffer the consequences of professional malfeasance do not typically possess the requisite knowledge to make informed judgments" [Cahn, 1986, p. 37]. Imagine the chaos if we certified dentists, nurses, CPAs, lawyers, engineers, architects, air conditioning repair people, etc., with questionnaires from customers.

There is a vast difference between customers and products. If the raw material coming into higher education is worse, we need to work even harder to maintain an outflow of qualified students. This improvement cannot be done easily, as competition for better SET marks causes severe grade inflation and course work deflation. Just as U.S. businesses are attempting to improve the value of their products, we must improve learning. In the long run, producing increasingly less educated students is self-defeating for universities. Soon society will look elsewhere for quality graduates. 



REGULATED CLASSES

As severe as grade inflation has been, it would be worse without the introduction of regulated classes in many lower level courses. Highly regulated classes reduce the conflict between instructors and students. In regulated classes the course content is the same, there are common exams, and the cut-off scores for the grades are set by the department. The instructor is in the classroom helping the students overcome hurdles imposed by someone else. Administrators try to restrict the free-market educational system in order to reduce both grade inflation and pander pollution. In general, SET scores may be much higher in regulated classes than in nonregulated classes because of the reduced conflict between the student and instructor.

For example, Temple's basketball coach John Chaney might have a hard time surviving in a physics class. According to Chaney, "I'm always looking for kids whose heads I can turn. A coach should develop good human beings. Tough love and respect make good human beings." Says Chaney: "Try coming late ... and feel my wrath. I punish them by working them to death" [Blauvelt, 1993, pp. C-1 and 2]. Working students hard in today's classrooms is a kiss of death. Of course, Chaney is evaluated by how many games his team wins--not by how his students evaluate him on a questionnaire. Student athletic programs may be the last arena in higher education where student motivation is important. 



CERTAIN DEPARTMENTS MORE VULNERABLE

Accounting instructors are more vulnerable to lawsuits as higher education proceeds to inflate grades, deflate courses, and produce undereducated graduates. Unless higher education reverses itself, there will be many lawsuits from graduates and parents who spend huge sums on mediocre education. When people as diverse as Garry Trudeau in "Doonesbury" and Rush Limbaugh are poking fun at the mess in higher education, one wonders if we have the backbone to reverse grade inflation and course work deflation. Accounting, law, and education departments should be at the forefront of efforts to eliminate summative SET.

Lest we forget, many accounting graduates face a CPA examination after graduation. The CPA exam is an independent, tangible benchmark of the quality of an accounting degree. Suppose we have students graduating with A's and B's in the various accounting courses who then fail the CPA exam.8 A jury would not be sympathetic to an auditing instructor who gave an A to a student who does poorly on the auditing section of the CPA exam (see Student Directed Lawsuits). Accounting (with the CPA exam), education departments (with the ExCET exam to certify graduates as teachers), and law schools (with the bar exam) will be the first casualties as society and employers attempt to overhaul higher education in the courtroom. 



RECOMMENDATIONS

The simplest and best recommendation is to halt the widespread use of summative evaluation instruments for administrative control purposes. "Faculty should be primarily responsible for evaluating the teaching performance of colleagues" [Accounting Education Change Commission, 1993, p. 438]. The dangers associated with the use of numerical measures for control purposes (e.g., tenure, promotion, and merit pay decisions) are well known [Merchant, 1985; Lawler and Rhode, 1976; Ridgway, 1956]. Single numerical measures are unable to capture all relevant aspects of the behavior subject to control, and the controlled person tries to maximize the measures rather than working toward the intended goal of improving learning [Newton, 1988, p. 3]. If SET are used for administrative control purposes, "rational self-interested instructors" will alter their behavior to improve their SET scores. "Examples of such behavior include lobbying to teach a course where better ratings are generally achieved, making a course less rigorous for students, relaxation of grading standards, or deciding against implementation of innovative instructional techniques" [DeBerg and Wilson, 1990].

A less favorable solution is to give the results only to the instructor, because if numerical measures are given to an administrator, he or she will use them (often on an uneven basis). SET are favored by administrators because they minimize the time, effort, cost, and legal liability of evaluating instructors. Certainly SET results should not be made public, nor should grade distribution information. If administrators do not recognize the dysfunctional impact of numerical control measures, how can students be expected to use these measures properly? If the status quo is maintained, faculty and staff should certainly be allowed to numerically evaluate administrators and make the results public.

One way to make summative SET more reliable is to require students to sign their names (or provide their social security numbers). Under the present anonymous system, instructors have no due process for false and libelous statements made by students. If students are allowed to sue instructors for almost anything, students' false and libelous statements that affect an instructor's merit pay, promotion, and tenure should be subject to action in the courtroom. At a minimum, instructors should be allowed to know their accusers. With the steady student moral decline, little faith may be placed upon their evaluations. Academic freedom is also at issue where a department or college places heavy reliance on summative SET scores.

Non-summative (or formative) SET provide appropriate feedback and may be useful for improving teaching. Non-summative SET instruments contain open-ended questions for comment by students but do not contain a single question which acts as a surrogate for an effective instructor. Classroom visitation, self-appraisals, video machines, recorded materials, evaluation of teaching materials, and formative SET are more appropriate for administrative control purposes.

Instructors must keep detailed records of their classroom teaching when administrators use summative SET. We do not teach on a level playing field. Remind your department head if you teach a nonregulated course, a difficult course, or a late evening course. A class with students of lesser ability or less honesty may rate an instructor more poorly than a better or more honest class, especially when confronted with tough grading and heavy work [Stumpf and Freedman, 1979]. In one department, A and B students are given to certain professors and the remaining grade-impaired students are given to other instructors. Keep grade distribution data if you are a tough grader or cover a great deal of material. SET scores vary between graduate and undergraduate courses, large and small classes, homogeneous and non-homogeneous students, and required and nonrequired courses. Rigorous instructors must provide administrators with evidence of competing pandering professors engaging in pander pollution.9

I agree with Provost Cahn that "the time is long overdue for professors to return to the proper use of the grading system, and to award students the grades they deserve. In so doing faculty members will be fulfilling one of their most important responsibilities: to provide accurate evaluations" [Cahn, 1986, p. 31]. New instructors should occasionally view the 1973 movie "The Paper Chase," starring law professor Charles W. Kingsfield (played by John Houseman). On the first day of class the bald-headed professor with a bowtie and black bifocal glasses forces a student to stand up and speak loudly: "Fill this room with your intelligence!" Later, when explaining the Socratic method, Professor Kingsfield states, "We do brain surgery here. I train your mind."10 



FUTURE RESEARCH NEEDED

Major research is needed to isolate this pander pollution factor. How large is it? What is good SET management and what is bad SET management? Penalties should be imposed upon instructors (i.e., pandering professors) who engage in PP practices (e.g., giving parties for students). Research is also needed to isolate the revenge factor. How large is it? Should instructors giving high grades have their SET scores deflated? How much higher are SET scores in regulated courses versus nonregulated courses?
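
As one illustration of how the last question might be examined, the Python sketch below compares average SET scores in regulated and unregulated courses. The record layout and the sample numbers are hypothetical placeholders, not data from any department; the point is only the comparison itself.

    # Hypothetical records: (course type, SET score on a 5-point scale).
    # Replace these placeholders with a department's actual course data.
    records = [
        ("regulated", 3.4), ("regulated", 3.9), ("regulated", 3.6),
        ("unregulated", 4.6), ("unregulated", 4.8), ("unregulated", 4.3),
    ]

    def mean_set(records, course_type):
        # Average SET score across all courses of one type.
        scores = [score for ctype, score in records if ctype == course_type]
        return sum(scores) / len(scores)

    gap = mean_set(records, "unregulated") - mean_set(records, "regulated")
    print("Average SET gap (unregulated minus regulated): %.2f" % gap)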

The focus of research should be reversed: do not research how to improve SET; instead, determine how to improve student learning. According to sociologist Sid Gilbert at the University of Guelph, we "need to know what produces learning, and what practices and procedures would measure them" [Ford, 1994, p. 6]. Does a rigorous instructor like Professor Kingsfield in The Paper Chase teach more than a wimpy, students-are-customers instructor? Does a nice instructor impart more knowledge than a harsh instructor? The goal in higher education is educating students--not rewarding pandering professors. For example, consider giving monetary rewards to instructors who give the lowest GPA and survive the intimidation from students. According to Brown [1976], the use of SET by university administrators can lead to rewards going to the most lenient instructors and not to our best instructors.

Research on the following hypotheses (stated in the alternative form) is necessary: 
1. The effect of pander pollution is smaller in regulated courses than in unregulated courses. 
2. The effect of pander pollution is larger in required, unregulated courses than in regulated courses. 
3. The effect of pander pollution is greatest in nonrequired, unregulated courses. 
4. SET has caused grade inflation. 
5. Administrators use SET scores unevenly. 
6. Keeping grade expectations high maximizes SET scores.

Student ratings were first used in the 1920s at the University of Washington. Student-produced SET became popular in the early 1960s, and administrators began using them in the early 1970s. During the 1960s and early 1970s, "college faculties around the country lowered expectations, abolished examinations, and either discarded grades altogether or nullified them through rampant inflation" [Cahn, 1986, pp. 30-31]. 



ENDNOTES

1. One Texas dean in 1993 said that "students are the best judge of teaching competence," and a Massachusetts dean said that "we rely on student ratings more than on any other source of data on teaching" [AACSB, 1993]. 
2. Laws highly regulate financial statements to reduce income manipulation and opportunistic behavior, yet there is no regulation of SET. Most administrators blindly accept them as truth. Instructors have a high incentive to manage SET, even more so than managers have the incentive to enhance earnings [Holthausen, 1990, pp. 83-110]. 
3. Changing to an all true-false exam can dramatically improve a class average and therefore SET scores, because each student starts off with an expected score of 50 points from guessing alone. 
4. In 25 years of teaching at six major universities, no administrator has come to my classroom to review my teaching. 
5. The College of Business Administration's general guidelines for indicators of excellence in instruction state: "outstanding evaluations of teaching performance over a significant period of time as indexed by standardized surveys...." 
6. Seldin also states: (1) don't let administrators develop the evaluation program and then impose it on the faculty, (2) don't fall into the trap of evaluation overkill, and (3) don't overinterpret small differences in student ratings of professors. 
7. Scholastic Aptitude Test scores peaked in 1963 and have declined over the past 30 years, with the verbal S.A.T. hitting an all-time low several years ago [Sowell, 1994, p. 14]. 
8. One major public institution in the Southwest (Texas A&M) ranked near the bottom on the last three CPA exams in its state. This university uses summative SET, places the results in the library, and provides faculty grade distributions to the students. 
9. There is some hope. Cornell's dean of academic affairs, Tom Dyckman, places a weight of only one-quarter to one-third on the SET rating [AACSB, 1993, p. 15]. 
10. Professor Kingsfield tears up one student so badly in class that the student throws up his breakfast afterward. Certainly few professors can teach like this under our current SET-driven reward system. Can you imagine ending a class with the statement: "Good luck with your exam. You'll need it." 



REFERENCES

Accounting Education Change Commission [1993] Evaluating and Rewarding Effective Teaching. Issues in Accounting Education (Fall), pp. 436-439. 
AACSB [1993] Faculty Assessment Changes in the Works at B-Schools. American Assembly of Collegiate Schools of Business (Fall), p. 14. 
Blauvelt, H. [1993] Chaney Also Tutors Players About Life. USA Today (December 15), pp. C-1 and 2. 
Brown, D.L. [1976] Faculty Ratings and Student Grades: A University-Wide Multiple Regression Analysis. Journal of Educational Psychology (Vol. 68), pp. 573-578. 
Bures, A.L., J.J. DeRidder, and H.M. Tong [1990] An Empirical Study of Accounting Faculty Evaluation Systems. The Accounting Educators' Journal (Summer), pp. 68-76. 
Cahn, S.M. [1986] Saints and Scamps: Ethics in Academia. Totowa, NJ: Rowan & Littlefield. 
DeBerg, C.L. and J.R. Wilson [1990] An Empirical Investigation of the Potential Confounding Variables in Student Evaluation of Teaching. Journal of Accounting Education, pp. 37-62. 
Dilts, D.A. [1980] A Statistical Interpretation of Student Evaluation Feedback. Journal of Economic Education (Spring), pp. 10-15. 
Dowell, D.A., and J.A. Neal [1983] The Validity and Accuracy of Student Ratings of Instructions: A Reply to Peter A. Cohen. Journal of Higher Education (July/August), pp. 459-463. 
DuCette, J. and J. Kenney [1982] Do Grading Standards Affect Student Evaluations of Teaching? Some New Evidence on an Old Question. Journal of Educational Psychology, pp. 308-314. 
Fishbein, L. [1993] Curbing, Cheating and Restoring Academic Integrity. The Chronicle of Higher Education (December 1), p. A52. 
Ford, C.T. [1994] Universities Take Aim On Performance Measures. University Affairs (February), pp. 6-9. 
Frankel, C. [1968] Education and the Barricades. New York: W.W. Norton & Co. 
Henry, T. [1993] U.S. College System Called a 'Prescription for Decline.' Houston Post (December 6), p. A-1. 
Holthausen, R.W. [1990] Accounting Method Choice: Opportunistic Behavior, Efficient Contracting, and Information Perspectives. Journal of Accounting and Economics (January), pp. 207-218. 
Howard, G.S. and S.E. Maxwell [1982] Do Grades Contaminate Student Evaluations of Instruction? Research in Higher Education, pp. 175-188. 
Lawler, E.E. and J.G. Rhode [1976] Information and Control in Organizations. Pacific Palisades, CA: Goodyear Publishing. 
Medley, D.M. [1979] The Effectiveness of Teachers. In P.L. Peterson and H.J. Walberg, Eds., Research on Teaching: Concepts, Findings, and Implications. McCutchan Publishing Corp., pp. 11-27. 
Merchant, K.A. [1985] Control in Business Organizations. Boston: Pittman. 
Newton, J.D. [1988] Using Student Evaluation of Teaching in Administrative Control: The Validity Problem. Journal of Accounting Education, p. 4. 
Porcano, T.M. [1984] An Empirical Analysis of Some Factors Affecting Student Performance. Journal of Accounting Education (Fall), pp. 111-126. 
Powell, R.W. [1977] Grades, Learning, and Student Evaluation of Instruction. Research in Higher Education, pp. 193-205. 
Renner, R.R. [1981] Comparing Professors: How Student Ratings Contribute to the Decline in Quality of Higher Education. Phi Delta Kappan (October), pp. 128-131. 
Ridgway, V.F. [1956] Dysfunctional Consequences of Performance Measurements. Administrative Science Quarterly (September), pp. 240-247. 
Ryan, J.J., J.A. Anderson, and A.B. Birchler [1980] Student Evaluations: The Faculty Responds. Research in Higher Education (December), pp. 317-333. 
Schipper, K. [1989] Earnings Management. Accounting Horizons (December), pp. 91-102. 
Seldin, P. [1984] Changing Practices in Faculty Evaluation. San Francisco: Jossey-Bass. 
Seldin, P. [1993] The Use and Abuse of Student Ratings of Professors. The Chronicle of Higher Education (June 12), p. A40. 
Sowell, T. [1994] We Suffer the Consequences of '60s Liberalism. AFA Journal (January), p. 14. 
Stumpf, S.A. and R.D. Freedman [1979] Expected Grade Covariation With Student Ratings of Instruction: Individual vs. Class Effects. Journal of Educational Psychology, pp. 273-302. 
White, E.M. [1993] Too Many Campuses Want to Sweep Student Plagiarism Under the Rug. The Chronicle of Higher Education (February), p. A44. 
Worthington, A.G. and P.T.P. Wong [1979] Effects of Earned and Assigned Grades on Student Evaluations of an Instructor. Journal of Educational Psychology, pp. 764-775. 
Wright, P., R. Whittington, and G.E. Whittenburg [1984] Student Ratings of Teaching Effectiveness: What the Research Reveals. Journal of Accounting Education (Fall), pp. 5-30. 
Winsor, J.L. [1977] A's, B's, but not C's?: A Comment. Contemporary Education (Winter), pp. 82-84.

 


"A learning - abundant university is healthy and wealthy; a learning - deficient university is unhealthy and poor."
 
Larry Crumbley

Society for a Return to Academic Standards

Higher Grades = Higher Evaluations

 

