The equation of Logicomorality

I received an interesting comment from Marius:

Well, so far it looks like there is less and less "morality" and more and more "logico". So far it is clear that Logicomorality (LM) is interested only in collecting facts and comparing objects and actions according to the quantity of collected information. At first glance it looks fine when you mention robots alongside it, but under further scrutiny a few points remain unclear. First, morality defines a code of conduct according to current traditions and recommends or disapproves of actions, whereas LM just counts total information and decides on actions without comparing them. Another big problem: LM can't decide on an action because there is no "threshold of information" for an action to be recommended. For example, what if I know that I can get jail time for stealing a car, but I get money on the black market? Would LM recommend it? What if I know that I'd go to jail for 5 to 10 years? Or if I know that I would get 15k dollars on the black market? When do I know whether I am acting well according to LM? Knowing at least some of the factors is unavoidable; however (as you already mentioned before), no one can know all the factors. Another point: imagine we are talking about computers. If they know that people would die in case they started a nuclear war, if they know all the symptoms of radiation burns, good predictions of mortality, etc., and they start the war, is that good according to LM? It would be COMPLETELY THE SAME if you evaluated the computer's decision, with the same knowledge, not to start the war. In other words, what are the recommendations for a code of conduct? Going back to humans, you can't say that knowing more is better. Most information is not properly weighted, and other information is ignored altogether even when it is known, hence you have to deal with ignorance and information weighting. Also, you can't compare information bitwise, because facts carrying seemingly equal amounts of information could have completely different weights in a decision.

I extracted three main ideas:

  • Logicomorality can't tell you when you have enough information to commit to an action (the "threshold of information" problem)
  • Logicomorality can't calculate the weight (quality) of different pieces of information, for example: I know one apple is from the Netherlands, another was brought here two days ago - which one should I choose?
  • Logicomorality is stuck when there is an equal amount of information for each action

These questions touch the very core of morality. Marius is stating that if you follow the guidelines described in Logicomorality, you can't always answer the questions "What should I do?", "How should I do it?", "When should I do it?". I can even add some myself, like "Where should I do it?", "How often should I do it?".

All I can say is that Logicomorality doesn't tell you what to do or when to do it - YOU DECIDE THAT (the whole of Logicomorality is valid if and only if we assume that human beings have free will). You also decide which information looks more important to you: there are no universal laws stating that one cow equals three chickens. If we take a look at the last example from Marius's comment, where he talks about computers starting a nuclear war, Logicomorality could only state that both actions are equally good (although I doubt that a nuclear explosion is as predictable as no explosion at all).

Logicomorality is like an equation: you put in the numbers and you get an answer. It doesn't force you to do good; it simply gives you a result, which you can compare with the results received from other equations.
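
To make the metaphor concrete, here is a minimal sketch in Python of what "putting numbers into the equation" could look like. Everything in it is hypothetical: the Action class, the known_facts list and the score and compare functions are illustrative names, not anything defined above. The only idea taken from the text is that each candidate action is scored by how much the actor actually knows about its consequences, and that the comparison (never the decision) is all the equation provides.

```python
# A minimal sketch of the "equation" metaphor; the names used here are
# hypothetical illustrations, not definitions from the post.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    known_facts: list[str]  # consequences the actor can actually predict


def score(action: Action) -> int:
    """Score an action by the sheer quantity of known consequences.

    Weighting the facts (the 'quality' problem Marius raises) is left to
    the person doing the choosing; this sketch deliberately does not try
    to solve it.
    """
    return len(action.known_facts)


def compare(a: Action, b: Action) -> str:
    """Report which action the actor knows more about; a tie stays a tie."""
    if score(a) > score(b):
        return f"You know more about '{a.name}'."
    if score(b) > score(a):
        return f"You know more about '{b.name}'."
    return "Equal information: the equation gives no preference."


if __name__ == "__main__":
    stay = Action("do nothing", ["no explosion", "status quo persists"])
    launch = Action("start the war", ["people die", "radiation burns"])
    print(compare(stay, launch))  # the decision itself is still yours
```

In the nuclear-war example the two scores come out equal, which is exactly the "both actions are equally good" verdict above; breaking that tie is still up to the person (or the computer) doing the choosing.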