It always seemed like a tricky question, “should we adopt a rights-based or a utilitarian ethics?” Because it implies both ease-of-use questions and adherence-to-reality questions. Ethics, as an academic discipline, kind of sucks because strong adherence to principles (greatest good for the greatest number) leads you to unpalatable conclusions (ginger genocide). And it seems like the principle doesn’t have much more weight than the moral intuitions it unseats. And then it seems like we’re just trying to appease conflicting moral intuitions, which seems a lot like we’re just trying to justify our way of life, ad hoc, rather than sitting down and really figuring out what we ought to be doing.
Maybe there aren’t any moral facts that could possibly propel such an investigation, pace Singer, and the best we can shoot for is consistency. So given that some moral intuitions seem inconsistent with each other (e.g., A. it’s wrong to let a kid drown, B. it’s ok to let people in Africa die of starvation/disease), how do we know which intuition we have to change/abandon? Probably the one whose loss would do the least damage to our ethical-epistemological web, given that preservation of the web (consistency is implied in that notion) is of the utmost importance, a priori. Another worry is that there might be two internally consistent, self-supporting, attractive ethical-belief webs out there (belief, not knowledge, because we gave up truth with the moral-facts concession), and we’d have really crappy ways of choosing between the two. I suppose you don’t have to worry about this until you come to it, and there’s nothing wrong with the assumption that there’s only going to be one consistent ethical-belief web, as long as you grant that it can be disproven.
What moral intuitions do we begin with, then?
Equality? Equality of what? Moral agents, I’d assume. What constitutes a moral agent? Can’t you have several levels, perhaps a slope of moral significance, and demand appropriate treatment for each tier/point on the slope? Upon further reflection, the tier/slope distinction matters a lot. It seems that respect for human dignity is a threshold system, not a slopy one. Once you are self-conscious, you’re in. I suppose some (ostensibly) morally significant characteristics are all-or-nothing… awareness of oneself as an actor, or of the passage of time, don’t seem like things that come in degrees, unlike the ability to experience pain/pleasure, the ability to remember things or be trained to behave in a certain way, or the ability to use language. The all-or-nothing qualities seem much more important and fundamental to the moral system than the experiential/cognitive gray areas I’ve listed. Why? Because you can appreciate justice and fairness and the like even if you’re stupid and paralyzed. But if you can’t understand the passage of time, or if you aren’t aware of your identity and agency, then there’s no foundation upon which to build any sort of moral system. (Possible counter to that: Buddhism, no-self nirvana. That certainly brings up the possibility that this whole “getting to the basics” business is just parroting the fundamental values of the culture in which I was raised, which isn’t something I’d like to be doing, but maybe it’s inevitable unless I work to steep myself in other cultures’ philosophies, a task toward which I’ve heretofore put absolutely no effort.)
One problem we dig up, if we accede to that point, is that we’re stuck with the task of defining the threshold, and deciding what to do with A) things that were self-conscious at one point but aren’t anymore, or B) things that, given time (or given time and specific environmental conditions), we have good reason to believe will become self-conscious.