Ethical Blind Spots

Why you’re not as ethical as you think you are.


You’re not living up to the ethical standard that you think you are. That’s one of the core messages of “Blind Spots: Why We Fail to Do What’s Right and What to Do about It” (Princeton University Press), a new book by university professors Max H. Bazerman and Ann E. Tenbrunsel that examines why people overestimate their ability to do what’s right, and why they act unethically without meaning to. “Ethics interventions have failed and will continue to fail because they are predicated on a false assumption: that individuals recognize an ethical dilemma when it is presented to them,” write Bazerman and Tenbrunsel, explaining one of the important implications of their findings.

With the above in mind, I contacted Tenbrunsel by phone to discuss “Blind Spots.” Among other things, we addressed why traditional ethics interventions are inadequate, and why people don’t notice or report the ethical failures of others.

In a nutshell, why aren’t people as ethical as they think they are?
They are blind to the obstacles that prevent them from behaving the way they want to behave—and the way they think they do behave. People predict they are going to behave in a certain fashion—generally an ethically desirable one. But they proceed to make unethical choices, and when they look back on those decisions they recall them erroneously: they see the decisions as more ethical than they really were, or don’t remember the unethical actions at all. So the biggest obstacle to improving behavior is that people don’t realize the unethicality of their actions.

Why would people want to read a book that tells them they’re not as ethical as they thought?
That’s a great question. We were lucky to have a lot of publishers interested in the book, but one editor made an equally astute comment, which completely haunted me: “This is great, but if the book is right, then no one is going to want to buy it.” But if we don’t alert people to these blind spots, then not much else we do is going to have a big effect. The very first thing is to get people to realize they are behaving this way. Once we alert people, then we can start to argue about what the ethical standard should be.

Can this problem be addressed with more ethics training?
I don’t think so, not with current training. I don’t think we have been wanting for effort, but a missing component has been the cognitive and social-psychological aspect of ethics. Organizations are spending millions and universities are devoting core requirements to ethics, but ethics training has failed us because it has focused on: When faced with an ethical dilemma, here’s what you should do. What it ignores is that in the real world a supervisor doesn’t come to you and say: “Here’s an ethical problem; I’d like you to solve it for me.” The ethical problem is integrated with financial aspects, business aspects and sales aspects, and the ethics often fade from view.

This may explain why ethics programs seem to have limited effect. But in the book you go further, noting that they may even promote unethical behavior.
If you look at what has served as the foundation of these programs—codes of conduct, having a hotline, ombudsmen—they follow along the lines of what we think of as formal programs: Here’s the program, here’s the checklist, and as long as you adhere to the checklist, things should be fine.

But informal communications are significantly more impactful than formal programs. An organization might say performance ratings are based on efficiency, effectiveness and integrity, but if the biggest schmuck in the organization is getting promoted because he has the largest sales (independent of how he got those sales), that sends an informal signal that rewards are based on sales. Formal programs don’t address the real rewards and sanctions that individuals feel, so they sometimes create the reverse effect.

Why do sanctions often encourage the unethical behavior they are supposed to discourage?
The biggest problem with sanctioning systems is that they change the way that the decision-maker views the decision. There was a study done on a day-care center that was having problems with parents picking up their children late. In order to change that behavior the day-care started fining parents, which did change the parents’ behavior, but in the opposite way. Parents began picking up their children later rather than earlier.

David Messick [professor of management & organization at Northwestern University] and I did a series of studies and found the same thing: a weak sanctioning system—which we would argue most sanctioning systems are—increased the undesirable behavior. Without a sanctioning system, the majority of people viewed their decision as an ethical decision. With a sanctioning system in place, the large majority viewed it as a business decision. So simply putting in a sanctioning system leads to ethical fading [where we no longer see the ethical dimensions of a decision].

The secondary problem with sanctioning systems is that people don’t like to be controlled, and they will go to great lengths to work around control systems. And even when the sanctioning system is removed, the undesirable behavior stays in place. The day-care study found that even after they removed the fine, there were more late pickups than before. So you have to be careful when you put sanctions in, because people become ruled by “What’s the cost?” rather than “What is the right thing to do?”

Why don’t we notice the unethical behavior of others?
It’s because we are motivated not to see it. The human brain is fantastic at motivating itself toward goals. When I’m motivated toward a goal, I may be motivated not to see what others are doing. [See the Ulrich Neisser experiment in which Cornell students were asked to watch a video of three players passing basketballs and instructed to count the passes; four of five students failed to notice a woman who unexpectedly walked through the basketball court carrying an open umbrella.] It’s a kind of unconscious motivation; I don’t realize what I’m not seeing. Also, I don’t see your unethical behavior as long as it supports my own goal-oriented outcome. If I were completely neutral—or your goals were the opposite of mine—I would see it. As long as there is some overlap between your goals and mine, I won’t see the behavior.

Like Bernard Madoff’s investors?
A lot of the people who supported Madoff’s Ponzi scheme had no idea that they were supporting unethical behavior. But many were aware and hid that awareness from themselves. A sad example we give in the book is Rene-Thierry Magon de la Villehuchet, who invested not only all his own money with Madoff, but all of his family’s money as well. After Madoff surrendered to authorities, de la Villehuchet killed himself in his New York office.

Why do people place too much emphasis on short-term considerations and ignore future consequences?
Part of the problem lies in the uncertainty of the future. The more uncertainty there is, the more people make decisions that are in their self-interest. They take the part of the uncertainty that supports their self-interest and ignore the part that doesn’t. You see it a lot in the global warming debate. There’s a whole movement around creating that uncertainty, because that uncertainty creates wiggle room and allows for self-interested behavior. But the uncertainty goes both ways; things might be substantially worse than predicted.

http://www.blindspots-ethics.com

This article was published June 25, 2011.