Morality – Timestamps
00:00 – Intro
01:30 – Factoids / moral dilemmas
22:25 – Moral Theory
25:39 – Utilitarianism ethics
38:53 – Deontological ethics and Kant
46:12 – Mitch’s man on a podium
1:00:45 – Is morality nature vs nurture?
1:08:05 – Ending and Outro
You can listen to The Conduct Science Podcast by using the player above, searching for “The Conduct Science Podcast” anywhere you listen to your podcasts, using any of the links below, or you can download it HERE! This week on The Conduct Science Podcast, Tom and Mitch take a close-up look at morality: debating some moral dilemmas, asking whether it is nature or nurture, and covering many of the main moral theories presented in today’s world and throughout history, from Kant and deontological theory to Mill and utilitarianism. What do they try to achieve, is there a best one, and where do our hosts stand? Music by: Joakim Karud – https://soundcloud.com/joakimkarud.
Thanks for Listening!
Feel free to share your thoughts on our Twitter here: @ConductScience
Use #ConductScience on Twitter to:
- Suggest a guest
- Suggest a topic
- Ask a question we can answer on the show
- Or to just get in touch!
Learn about our products:
- Visit our Lab
- Find Resources for your lab including protocols, methodologies, and more.
- Are you a scientist with a new device that you would like tech transferred? Join our Creators Program
Tom: Hello ladies and gentlemen, and welcome to The Conduct Science Podcast, where this week I spent ages coming up with a related pun, but in the end I realized I Kant do it. I hope at least one person got that. If you want to check out all the latest goings on, you can head to conductscience.com, or you can find us on Facebook and Twitter by searching @ConductScience. If you want to get in touch, suggest a guest, suggest a topic or anything really, just use the #ConductScience. I am your host, Tom Jenks, and as usual, joining me today, stopping us both getting out of Kant-rol, it’s Mitchell Gatting.
Mitch: Hello. Can we, can we stop with the... I Kant deal with it.
Tom: And I think we’ve lost half our listener base already. Today’s topic, if you hadn’t guessed, is morality, or ethics on some level. Now, normally we prepare some kind of factoids to go at the beginning of the show, but I don’t know what kind of facts you can put into morality; it’s kind of opinion based. So what we’re going to do instead is just go through some moral issues. Do you want to start with one, Mitch?
Mitch: Some moral dilemmas. The ones I picked out, there are three of them, are called Kohlberg dilemmas. They’re named after Lawrence Kohlberg, who studied moral development and proposed the theory that moral thinking develops in stages. Heavily used by Harvard. Okay, I’ll kick the first one off.
Tom: Is it morally okay for you to kick it?
Mitch: Probably not. No, I shall begin by announcing the first moral dilemma. Joe is a 14-year-old boy who wanted to go to camp very much. His father promised him he could go if he saved up the money for it himself. So Joe worked very hard at his paper route and saved up the $40 it cost to go to camp, and a little more besides. But just before camp was going to start, his father changed his mind. Some of his father’s friends decided to go on a special fishing trip, and his father was short of the money it would cost. So he told Joe to give him the money he’d saved from the paper route. Joe didn’t want to give up going to camp, so he thinks of refusing to give his father the money. Where do you start on that? Because for me, that’s quite cut and dry, if I was the child.
Tom: Yeah, don’t give it to him. I mean, unfortunately we’re thinking of this with an adult mindset; as a child, you know, you’re very easily swayed. But I guess this would be the father’s moral issue, wouldn’t it? To not make your child give up money he’s spent the summer earning to go do something he wants.
Mitch: So I think there are three issues in this one. The first issue is him changing his mind just before camp starts, and whether the father should have done that. The second is asking him for the money. And the third is the child thinking he should refuse to give up the money, which I would say he’s within his rights to refuse.
Tom: 100%. But again, you enter a realm of the parent-child dynamic and how that might influence the situation. But I definitely stand on the side of, you know, don’t take the money off the kid. And I think that was, as you say, a pretty cut and dry scenario.
Mitch: Especially as he saved up specifically to do something.
Tom: Yeah, which he was told to do in the first place. And then, as you said, just before the event, it’s like, ah, no, it’s mine now. That screams to me, not like an abusive household, but you know what I mean. Yeah, exactly. And I agree with you on that one.
Mitch: Okay. The next one’s not so cut and dry. This one gets used a lot in moral dilemmas, so you may have heard it before. In Europe, a woman is near death from a special kind of cancer. There’s one drug that doctors think might save her. It’s a form of radium that a druggist (I’m not sure druggist is the word here, but this is what was used) in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost to make: he paid $400 for the radium and charged $4,000 for a small dose of the drug. The sick woman’s husband, Heinz, went to everyone he knew to borrow money and tried every legal means, but could only get together about $2,000, which is half of the cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. The druggist said no, I discovered the drug and I’m going to make money from it. So, having tried every legal means, Heinz gets desperate and considers breaking into the man’s store to steal the drug for his wife. What do you think of this?
Tom: Well, there are multiple points of dilemma-ness here. That’s definitely not a word. Points of contention, I guess. The first one is obviously getting the drug and selling it for a much higher price, which we know goes on in today’s world, which is disgusting.
Mitch: A very big reflection of how America’s drug system works currently.
Tom: Yeah. Isn’t that the insulin thing, right?
Mitch: Yeah. So it’s to do with costs and how much things are charged: the cost is 400, but the charge is 4,000.
Tom: Yeah, that’s abhorrently wrong in its own right, I think. Who was it? I can’t remember who it was, Benjamin Franklin maybe, who vowed to never patent anything because all knowledge should be free, or something like that. I know it’s slightly different from inventions to medical discoveries, but I think it should be somewhere along that line. The second point, then, would be him refusing to give it to the man whose wife’s dying at a lower price, or to let him pay it back later, which I guess morally you should let him do. And then obviously the third point is the man wanting to go in and steal it, which of course is wrong, but in the event of saving someone’s life, you would say is less wrong?
Mitch: Yes. So how utilitarian of you Tom,.
Tom: Ah, indeed. But if you look at most moral theories, they all at some point, I think, come to the underlying idea that you’re benefiting human wellbeing, right?
Mitch: A few of them. Yes.
Tom: So no matter which theory you think you’re aligned with, you end up benefiting human wellbeing. If you ask yourself what benefits them the most, or not even the most, just even slightly, on any of those points you would come to that.
Mitch: I would say deontological arguments and theories are very much based on justice or fairness within society, whereas consequentialism is very much based on individuals, which we’ll come onto later. And those are the two sides of the coin. So from the society side, if he’s breaking into this man’s store, he’s breaking a social construct, a law that we’ve put forward to define how moral we are as people. But from an individual basis, he is doing one small wrong, breaking into the store and stealing from the man, to save a human’s life.
Tom: Even with the deontological side of things, if you’ve got two options, to let her die or to break in and steal something to save his wife, even being deontological, wouldn’t that still be the correct thing to do?
Mitch: Nope. Because the ends do not justify the means. So if you’re breaking the law to do something, even though you’re saving a life, it doesn’t justify it.
Tom: Okay, yeah. Okay, that makes sense. But laws aren’t universally moral at the same time, are they? I think that’s something we can come onto later, before we get ahead of ourselves. I had one that’s more of a problem than a dilemma.
Tom: Oh, we’re not finished. Sorry.
Mitch: What would you do? You didn’t answer me.
Tom: Oh, what would I? Oh, mate, I’m breaking in.
Mitch: Oh, you’re breaking in?
Tom: Hundred percent. I’m getting in there and I’m taking that.
Mitch: I would, I would also break in then leave $400 on like a counter and then be like, yeah, f%$@ you mate, I’m saving my wife.
Tom: Yeah. I mean, what can you do at that point? I’m not sure how I’d put it, and maybe it is utilitarianism or consequentialism, but in a situation where you’re not picking someone’s life over another, you’re picking between saving or not saving, surely you should pick, or I feel like I should pick, saving anyway.
Mitch: Then if you don’t save her and your wife dies, do you reckon there would be an argument that the other person, the druggist, committed manslaughter?
Tom: Legally no, but morally yes. Because if that were the case, legally pharmaceutical companies wouldn’t exist in America today.
Mitch: That’s where I was trying to aim this.
Tom: It wouldn’t be a thing in America today if that were the case, which maybe it should be. I mean, there should maybe be a legal cap on the percentage you can make on a drug.
Mitch: Yes. I definitely think that should be the case.
Tom: But I was watching Designated Survivor the other night, and at the beginning of season two, and it’s not a massive spoiler, someone says that years ago she saved her husband through illegal means. I don’t want to spoil it, but I think you know what I’m thinking about there. So she had committed an illegal act to save her husband that allowed them to have, you know, 15 more years together, and she said she’d do it again. I think I kind of have to fall in line with that way of thinking. I had one. Have you heard of the runaway train problem?
Mitch: Oh yeah, yes I have. There are many variants. I like this problem because of some of the things it’s spawned. So, slight tangent, we’ll come back to it. There’s a game about it that just got released by Cyanide and Happiness. Do you know those guys?
Tom: Yeah, the little comic strip guys.
Mitch: Yeah. They created a game where you have to make the choice when the trolley comes down the track, and there are like buffs and negatives. There’s also the double track drifting one, where you take both tracks: the train goes sideways and gets both. Explain what it is first.
Tom: Okay. So for those of you who haven’t heard of this, and I don’t think I had before I started researching this topic, imagine there’s a runaway train car coming down a track and you are at the switch. On the track the train is coming down there are five workers who don’t see the train coming, but the track splits off, so you could flip the switch and the train would go onto another track, saving the five workers. But on this new track there’s one worker, and they haven’t seen it either. So you have to make the choice in that situation: kill one person to save five. Would you do it?
Mitch: Yeah, there’s a lot there. It seems very cut and dry when you think about the switch. But what I like about this is that the dilemma doesn’t actually state that you have to touch the switch. So a lot of people see it as: do you switch the track or not? But you’ve also got the opportunity to say, okay, I’m not touching the switch and not making a choice. Where does that lie?
Tom: What do you mean if you don’t touch the switch?
Mitch: Yeah, so say you are within reach of the switch. Do you make that choice to save the people, do you pick the option with more or fewer? The other variant is there’s an old man on one side and a pregnant lady on the other. Which way do you switch, because is there a difference in moral value between lives? Does that make sense?
Tom: Yeah, and I guess that depends which kind of ethics you follow. Because if you are following deontology, did I say that correctly, deontological theory, you would say you would kill the old man, because whilst they both have equal value in life, one woman is pregnant, so she’s carrying a second life. It would essentially be two against one in that scenario. But they did a study asking people what they would do, and they did it in virtual reality as well. I think 90% of people switched the track to kill the one and save the five. But then they did it with the fat man variation. Do you know this? For people who don’t: you are in the same position, but instead of being next to the switch, there’s just one track and you’re on top of a bridge with a fat man next to you. You could push him off, which would slow the trolley down and stop it killing the five workers, but you’d kill that one man. And only 10% of people, I think, or a much lower percentage anyway, actually pushed the man.
Mitch: This is to do with how close you deem yourself to the moral choice. Because even though the two cases are exactly the same, you’re choosing to save five people, but you’re also choosing to kill a man, it’s just, what’s it called? Cognitive dissonance.
Tom: Emotional detachment.
Mitch: Yeah, detachment. From pulling a lever to pushing a guy in the back, yeah.
Tom: Yeah. So that was an interesting way to look at it. And a lot of psychologists and philosophers, I think, said this dilemma has no value, because it’s so unlikely for someone to be in that situation that testing for it makes no sense.
Mitch: Yeah. I think another reason it has no value is that it’s very obvious, even if you had someone that was immoral or, what am I thinking, psychopathic or sociopathic.
Tom: I think psychopathic and sociopathic people lack the ability to see moral issues.
Mitch: Yeah, but they would still obviously be able to gauge it and be able to get past that test. So if you’re using it as a benchmark for morality, it doesn’t really prove anything, cause you can just go, okay, that’s five, that’s one, so we go for the one.
Tom: I think that’s what people tended to agree with. But now they’re realizing it actually may have use, with, for example, self-driving cars. You have to teach a car to make a decision: it can see it’s going to cause either a small accident or a big accident, and you want it to choose to cause the small one. So with self-driving cars and artificial intelligence, tests like this are becoming useful. Cause how do you teach a car to make, in massive quote marks, the “right” decision, you know?
Mitch: How do we teach cars to be moral?
Tom: Yeah, that’s it. So essentially they are simulating programs that go through these kinds of tests to build up a, I don’t want to say morality, but a decision-making process with a right or wrong answer. But yeah, that’s all I have for the dilemmas.
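[Editor's note] The decision process Tom describes, scoring each predicted outcome and always preferring the lesser harm, can be sketched in a few lines. This is only a toy illustration, not how any real autonomous-driving system works; the harm weights, action names, and outcomes below are invented for the example:

```python
# Toy sketch of consequence-based decision making for a self-driving car.
# Harm weights and candidate outcomes are invented for illustration only.

def expected_harm(outcome: dict) -> int:
    """Score a predicted outcome; injuries are weighted far above damage."""
    return outcome["injuries"] * 10 + outcome["vehicle_damage"]

def choose_action(options: list) -> dict:
    """Pick the action whose predicted outcome minimizes expected harm."""
    return min(options, key=lambda o: expected_harm(o["outcome"]))

options = [
    {"action": "swerve",     "outcome": {"injuries": 0, "vehicle_damage": 3}},
    {"action": "brake_hard", "outcome": {"injuries": 1, "vehicle_damage": 1}},
]

print(choose_action(options)["action"])  # swerve: the "small accident" option
```

The hard part in reality is everything this sketch hides: predicting outcomes at all, and deciding the weights, which is exactly the moral question the hosts raise.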
Mitch: My last one is something that I see in my field as a cyber security practitioner. It’s called the information access dilemma. Tony, a data analyst for a major casino, is working after normal business hours to finish an important project. He realizes that he is missing data that should have been sent from his coworker Robert. Tony had inadvertently observed Robert typing in his password several days ago, and decides to log into Robert’s computer and re-send himself the data. Upon doing so, Tony sees an open email regarding gambling bets Robert had placed over the last several days with a local sports book. All employees of the casino are forbidden to engage in gambling activities to avoid any hint of conflict of interest. Tony knows he should report this, but he would have to admit to violating the company’s information technology regulations by logging into Robert’s computer. And if he warns Robert about his betting, he would also reveal the source of his information. What does Tony do in this situation?
Tom: I don’t know. Okay, this one, you’ve got me. If I were Tony, it would be a choice between keeping your job by not telling anyone, or potentially getting yourself sacked as well as the other person. So at that point, I guess if we’re approaching it from the moral standpoint... I don’t even know from the moral standpoint, because depending which moral theory you take, you are worsening the wellbeing of yourself, your whole family who may rely on that job, and other people.
Mitch: This is where I came into the thought process: you have to imagine the knock-on consequences, which is where it always seems to come back to. Deontologically, I’d be like, well, it was morally bad for him to log into the computer in the first place, and then there’s lying, which is intrinsically bad. There’s an argument that lying is the same as withholding the truth; some people make that argument, personally I don’t. But if you do take that approach, then technically he’s lying if he doesn’t say anything, which is intrinsically bad. So we’ve got a bit of a moral conundrum here. But then the utilitarian says, well, what if he gets fired? Has he got dependents, has he got family, is he feeding anybody? Even if he’s by himself, when he goes to get another job, he’s putting himself in a really bad situation.
Tom: Even if you took virtue theory, you know, Aristotle’s virtue theory, where in any given situation the moral thing to do is what a really good or virtuous person would do. Even in that situation, I don’t know what a good or virtuous person would do.
Mitch: Yeah. So virtue theory is all about developing good habits of character through virtues, isn’t it? It depends at what point you bring the virtue theory in. Cause if you brought it in at the point where he was logging into the PC, then a good habit would be to not break people’s confidentiality. That would be the first situation; it would stop there.
Tom: To stop this entirely, you would just not do the first thing. I don’t think I would do the first thing. If it were me in that situation, I don’t think I would ever have logged on. I would have just been like, oh well, I can do this work tomorrow.
Mitch: Yeah, but it says he’s working after business hours on an important project. You don’t know the extent of the important project: will he get fired for not completing it? But even then, he won’t get fired for not completing it, because it’s not his fault that he doesn’t have the data. It’s Robert’s fault that he didn’t send it, so it relies on him. But if he then says that, does he place the blame on Robert, and does Robert then get fired? So in breaking the confidentiality, is he actually watching Robert’s back?
Tom: I think you would take it up with Robert, maybe. But even then you wouldn’t want to, because maybe Robert would turn around and be like, oh, Tony logged into my computer.
Mitch: Yeah, but then you’re both in like a blackmail situation, because they’ve both got dirt on each other. So I think what would happen in reality is that Tony would go to Robert and be like, oh, I’m really sorry, I saw your password and I typed it in for these reasons, and I saw these emails. I’m not going to say anything, because I was doing something that is against company policy, but you’re also doing something against company policy. I’m going to stop doing mine, and I advise that you stop doing yours, and leave it at that. Which is the humane thing to do, and I think most humans would probably do that. But is that the moral thing to do, because you’re both lying to your employer over policies that you’ve both broken?
Tom: True, but then I think people always have a bigger detachment between morality towards employers and morality towards other people, like colleagues. Do you know what I mean? Because there’s something about it, especially at somewhere like Tesco: you’ve got your managers walking around and they don’t pay any attention to anything, and you’re like, well, I’ll do something to help out my colleagues here on the shop floor with me, but my boss is just kind of walking around. Maybe less so, because they have that kind of corporate image. Well, that was the thing with Tesco anyway. But that was a black hole.
Mitch: We have both gone past the, what’s it called, sphere of influence... what’s it called?
Tom: Event Horizon.
Mitch: The event horizon. Fun times. So I thought that was quite a good one, cause normally the dilemmas you get are old ones that have been around for a while, but this new one is very relevant to people’s jobs now.
Tom: Yeah, I liked that one, and I hadn’t come across it, which is good. So I was thinking we could just skim over some different types of moral theory. As we’ve embarked on this project, we have established ourselves as the world leaders on philosophy already, in the first 20 minutes of this. So everything we say from this point on is truth and you can listen to us.
Mitch: Yeah. And as we’ve already mentioned about three or four different moral theories, we can now expand on them and what they are.
Tom: Yeah. So before we jump into that, I found something, I think it was a TED talk or something, it was a long week, where some people claim that morality is an illusion. And through studying, I did a lot of Kant this past week, hence the puns at the beginning. That also sounds very wrong. But what I came across is that he and other philosophers answer that like this: obviously there is some point in all our lives where we get into a situation and ask ourselves, is this the right thing to do? Everyone experiences that, I hope, at least a few times in their life. So we act upon this kind of unknown force, and that’s what we call morality. If you look at it like that, maybe morality is a product of sentience or society, and it is definitely shaped by our religious history. And I guess it’s even evolutionary when you think about it: it doesn’t serve a species well to kill every other member willy-nilly, do you know what I mean? So there is maybe even an evolutionary concept of morality. These philosophers aren’t trying to convince you that it exists, I think it does; they’re merely trying to provide a way to understand it in an objective way.
Mitch: Yeah. The thing I found was that the people who say morality is a myth say it’s a myth, but there are these social constructs we’ve got that make us act in a certain way. And in my head I’m just like, so those social constructs and the way we act, we’re just giving them a name and calling it morality.
Tom: Yeah. 100%.
Mitch: And they’re like, oh, it’s a myth. And I’m like, okay, I got some of these pasta sheets and some mince, and I layered it, but it’s not lasagna, right? Because it’s chilli and pasta and white sauce. It’s a stack, but it’s not lasagna.
Tom: I think that’s exactly right, to be honest. I think these are also the same kind of people who maybe say free will doesn’t exist, or go even further down that road, not that we have any proof either way yet. But as they are trying to understand morality, because it is a thing, subconsciously a lot of the time, I guess we’ll try to expose that through these different moral theories.
Mitch: We should have done morality and the free will illusion together; that would have lasted a long time.
Tom: I think it, did we touch on it a little bit?
Mitch: What free will?
Tom: In the paradox episode.
Mitch: Oh, a little bit. That was because of the bootstrap paradox, and whether that was it.
Tom: Yeah. So one that we’ve touched on a little bit is utilitarianism. Why don’t we start with that one? We’ve mentioned it a few times. This comes from John Stuart Mill, I believe, or he’s one of the most influential; there are a bunch of them, but his is the name I came across a lot. And I guess the point of utilitarianism is to maximize the amount of happiness that we produce from every action. And crucially, maximizing human welfare is the only thing that determines the rightness of our actions.
Mitch: A little bit, yeah. So actions are morally right if and only if they maximize “the good”, good in quotes, because the good can be maximizing pleasure, maximizing wellness; the term good covers a lot of different descriptors. And because of the different descriptors, you get different subsets of utilitarianism, and utilitarianism is a subset of consequentialism.
Tom: Yeah? I thought it was the other way around.
Tom: No? Okay.
Mitch: The most common form of consequentialism is utilitarianism.
Tom: Uh, okay. Cause I’d come across hedonistic act utilitarianism as a form of consequentialism, which I thought was a form of utilitarianism.
Mitch: Nope, the other way round: consequentialism, then utilitarianism, then hedonistic utilitarianism.
Tom: Okay. And hedonistic being the point where you say that the good, in that sense, is pleasure?
Mitch: Yeah. So the rightness of our actions is judged solely on the basis of the consequences: maximizing pleasure, minimizing pain. Which is terrible, that one, no good. I personally don’t think that anybody should abide by hedonistic utilitarianism.
Tom: I mean, in some situations it works, and to explain utilitarianism I think it’s a good one to go by. It makes it easier for people to understand if you use it as an example and then extrapolate back.
Mitch: Uh, yeah, though I can see people getting the wrong end of the stick if you started with that one, cause they would see utilitarianism as mainly just pleasure or pain, not maximizing the good in general.
Tom: That’s true. So I guess utilitarianism, once you extrapolate back from the hedonistic version, the most general form would be maximizing welfare, or the wellbeing of other individuals.
Mitch: Yes. So after hedonistic I’ve got um, preference utilitarianism.
Tom: Okay. I hadn’t come across that one.
Mitch: It takes into account not just pleasures, but the satisfaction of any preference.
Tom: The satisfaction of the one taking the action, or the receiver?
Mitch: Either, I think. It’s also any preference within the paradigm of the situation that you’re using it in.
Tom: Okay, well, that makes sense. Again, this is the problem with all these theories, I guess: they’re going to make sense in some scenarios, aren’t they? And something I want to come back to later is that it’s going to be a mix. You’re never going to stick with one and be like, this is the way. Because I think with utilitarianism, especially the hedonistic version, and I guess even this form as well, sorry, what was it called?
Tom: Preference. Even with this one, people will say: if every single time you act you have to be maximizing the pleasure, or the good, or the preferences that all the people are getting out of every situation, every single time, it’s too demanding. I think that’s the problem with utilitarianism: it is too often too demanding to be realistic.
Mitch: Are we already going into the problems with it are we?
Tom: Well… while we’re here, that was one of the problems I came across: it’s too demanding, because if we have to maximize it in every single option, most of the choices that everyone makes are not up to standard and are thus immoral. So that was one of the problems I came across with these kinds of theories.
Mitch: Coupled with the demands of it being excessive, people have argued that it places too large a burden on the individual, because they don’t have anything to reference. They are purely deciding, in a moral situation, based on themselves and their own judgment of what makes the greatest good, which you could argue is also very subjective. So the burden on the individual is very large, because it’s not like some other moral framework which has rules set out that they can go down and check: okay, is this good? No. Is it serving a virtue? No. They solely have to decide, based on their own thinking, whether it produces more good or not. And the argument is: because it’s based on the outcome, and sometimes you can’t predict the outcome, which is another issue, if someone is more intelligent and can perceive more outcomes, does that make them more moral, because they can identify more outcomes and so perceive a greater good?
Tom: Hmm, I hadn’t come across that line of thinking. That’s a very good way to look at it. But I guess you can’t judge someone as being more moral. You might be able to by the actions they have taken in a certain circumstance, but if we presume morality is a scale, which I guess like everything else it is, there’s no way to determine that one person has a better potential for being moral. But at the same time, I guess you could, because if you imagine someone who’s been in and out of prison their whole life, or something like that, you would expect that person to be less moral than someone who’s never been to prison.
Mitch: Judging a book by its cover. Yeah. And then you’ve also got the…
Tom: Yeah, a very crude example.
Mitch: Another issue: if morality is defined by culture, surely each culture would have a different morality scale.
Tom: Yeah, they do, and we see that when you look around the world. Between affluent countries and non-affluent countries, the construct of morality is completely different.
Mitch: Different religious countries also have very different morality scales going back.
Tom: Yeah, go, go back. Go back.
Mitch: Reeling it back to the subsets of utilitarianism. We'll get through this. The one you mentioned earlier, hedonistic act utilitarianism. What's that one about?
Tom: Yeah, so hedonistic act utilitarianism. It's one of the most common types of consequentialism, and it basically says that the most moral act is the one that maximizes pleasure. They break consequentialist utilitarianism down in two ways: one, what is the good, and two, how are you producing it? So in hedonistic act utilitarianism, the good is the pleasure people will receive from a scenario, and the how is how you maximize that. As with consequentialism generally, you base your actions fully on the outcomes, the consequences, almost disregarding the method; not completely disregarding it, but the outcomes are what you're focused on.
Mitch: It's completely disregarding the method.
Tom: Yeah, and how you maximize it as well. But as I said, a lot of people consider this to be a bit too demanding. So there's a subset of this one called satisficing consequentialism. Have you heard of that? A lot of people say, okay, if consequentialism is too much, maximizing good in every single scenario, what if, instead of maximizing it, we just produced enough good? Whether that's pleasure, whether that's wellbeing, whatever it is, just produce enough.
Mitch: So there’s like a lower bound, like you just have to fulfill a certain quota of goodness?
Tom: Yeah, exactly. But then a lot of people say that's completely irrational, because in the light of consequentialism, rationally, if you can produce more then you…
Mitch: …should, you should. Yeah, because under hedonism you're trying to maximize it.
Tom: Yeah. So they're saying that under consequentialism, if you could produce more good, then morally you should, so satisficing is in itself irrational. That's the argument that comes up against it.
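The maximizing-versus-satisficing distinction the hosts describe can be sketched as two decision rules over the same set of options. A toy illustration, where the acts and their utility numbers are invented purely for the example:

```python
# Toy sketch of the distinction above: a maximizing consequentialist
# picks the option producing the most good, while a satisficing
# consequentialist accepts any option that clears a "good enough" bar.
# The acts and utility numbers are invented for illustration only.

def maximize(options):
    """Maximizing rule: pick the act producing the most good."""
    return max(options, key=options.get)

def satisfice(options, threshold):
    """Satisficing rule: pick the first act that produces enough good."""
    for act, good in options.items():
        if good >= threshold:
            return act
    return None  # no option clears the bar

acts = {"do nothing": 10, "donate some": 60, "donate everything": 100}

print(maximize(acts))        # the demanding, maximizing answer
print(satisfice(acts, 50))   # merely "good enough"
```

The "too demanding" objection falls out of the sketch: `maximize` always returns the costliest option, while `satisfice` is happy with the first act over the quota, which is exactly what critics call irrational if more good was available.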
Mitch: You could argue, though, that to maximize the good people shouldn't have to overthink it; to maximize the good, they shouldn't have to put too much in, if that makes sense. I've got it written down a bit differently in my notes. I've got act utilitarianism, so not hedonistic act, just act utilitarianism, and rule utilitarianism, and they're like opposites. Act is what you said before: you must apply it to every single action you make, not just moral ones. Like, I'm going to have this cup of coffee. Let me think: is it morally right or wrong for me to have this coffee? Do I need it? Do I need the caffeine? Is it good for my body? Is it good for society? Where did the coffee beans come from? Were people put out by it? Am I damaging the world by it? That's why you can make the con that it's impractical: you can't do it for every act, because it would take too long. The secondary one, off of that, is called rule utilitarianism. Not sure if you saw this?
Tom: No. But I think from your description I can have a good guess at what's coming. Go ahead.
Mitch: Yeah. So you establish moral rules that, when followed, bring about the best consequences. That's pretty much it. So you could say a good rule that we have as societies is "do not kill" as a general rule, but as a general rule there are some times when it is necessary to kill. So there are rules to follow that bring about the best consequences in most situations, but there are some situations where you wouldn't necessarily follow them.
Tom: And I guess that method also brings up the question: are you evaluating the morality of the consequences of following those rules, or the morality of the consequences of the actions you've taken because you follow those rules? That's a whole other rabbit hole to throw yourself down at some point. But I think it makes sense in some scenarios, because it releases some of the burden you were talking about on the person to make an in-the-moment decision.
Mitch: So it kind of fixes some of the cons, the flaws. But there is one final flaw with utilitarianism, and it's one that, say, a religious devotee, who is deontological, following set scriptures for how to act, would raise: seemingly immoral acts can be justified with utilitarianism. This is a criticism of both act and rule utilitarianism: genocide, torture and other evils can be justified on the grounds that they ultimately lead to the best outcome. It's like the argument that if you've got a terrorist and you want to torture him to get information to save a group of people, then with consequentialism as a whole you would say the act of torture is morally okay, because you're saving more people, even though we all know that torture is bad.
Tom: Yeah, okay, that makes sense, and I think it's obviously true. It comes back to the point that everyone has their own moral code, even if they haven't sat down and defined it for themselves. I don't think anyone has sat down and done that, but if they did, they'd find it doesn't line up 100% with any of these; it's going to be a mesh of some. In some scenarios you'll adopt one and in other scenarios you'll adopt another, I guess… are you done with the utilitarianism?
Mitch: I’m, I’m done.
Tom: Okay. So I guess in stark contrast then is deontological moral theory and Kant.
Mitch: Do you wanna? Are we diving into Kant or deontological? Cause we can do deontological without going to Kant yet.
Tom: We could, but we do only have 15 minutes left.
Mitch: Oh $%#&, I mean… Oh Frick.
Tom: So I looked at Kant rather than deontological theory as a whole, because I think Kant is everyone's kind of hurdle; not everyone may truly understand him. I'm not saying that I do, I'm not saying that anyone ever does. As I was told the other day, you could be doing your PhD studying Kant and you still wouldn't understand him, but maybe we can skim the surface as we have with the other ones. So Kant said morality is a set of rules we place on ourselves, but they cannot be individual, because true moral rules are those that apply to everyone. This is what he called his categorical imperative: a rational or logical morality that everyone abides by, like an autonomous will that exists inside of us. He's basically saying the categorical imperative is something that commands people unconditionally, because it applies to everyone. Like: don't cheat on your taxes, even if it would serve your interest, for example.
Mitch: So the full quote is: "Act only according to that maxim whereby you can at the same time will that it should become a universal law."
Tom: So that is one of his moral rules. Yeah.
Mitch: Well, that's the main theme of the categorical imperative. It just means that if you're in a situation and you're choosing to do something, you have to ask yourself: if I did that, would it work as a universal law? And if it were universal, what would the implications be? Would it be a good or not?
Tom: Yeah. Take a step back one second. Basically, Kant says that for something to be moral, you are doing it for the sake of it being good. You're doing it from goodwill, because it's the right thing to do, nothing more. So if you're a bartender, for example, and you have to give the correct change, you're not doing it because you want that customer to come back; you're doing it because giving their change is the right thing to do.
Mitch: Yeah. So you're talking about his goodwill and duty.
Tom: Yeah. I think that’s kind of the basis for his concept, right?
Mitch: Yeah. So only acts performed with regard to duty actually have moral worth. Something like drinking water doesn't have any moral worth, because you're not responding to a duty. Those are called hypothetical imperatives, not categorical imperatives.
Tom: So the categorical imperative being the thing that applies to literally every single person. I don't know if it's within a social construct, but yes?
Mitch: Um, yes. I think he was talking about everybody, to be honest.
Tom: Okay. But he says you can't be acting on goodwill if you are following instructions. So if you're at work, you're not acting morally, because you're expecting a reward, or a punishment if you do it the wrong way. You're only making a moral decision if it comes from within. So yeah, you mentioned Kant's moral rule, and something I came across, I don't know how true it is, is that it's commonly misunderstood as something that everyone should do all the time. But I think you explained it kind of perfectly: it's not saying it's something we should do in every situation. It's asking you to think: is it okay for that to be a universal law in that situation?
Mitch: Yeah. When Kant talks about it, he's saying you don't apply it in every situation. That's where his goodwill and duty section comes in: it's to do with your duty. Whenever you feel you're doing your duty, you then treat it as a moral dilemma. If you're just doing something like getting yourself a glass of water, you're not doing anything to do with your duty. But if you're thinking "do I save this person?", it's then your moral duty, and you ask whether it would be a universal rule to save someone in need, and you go, yeah, probably. And then, okay, that was a moral dilemma.
Tom: Yeah. And he also said, since the categorical imperative comes from rationality and logic, that being immoral is being irrational, being stupid essentially, because it would logically be the wrong thing to do.
Mitch: The problem with that is that it's purely subjective to an individual, their society, or a sub-section of a society.
Tom: I think that was his idea, wasn't it: he was trying to make it as least subjective as possible. And I guess that's why people still don't understand him. Like the example he gave: if you wanted to lie in a world where everyone lied freely, the distinction between truth and lying would not work; we wouldn't have that social construct. So not lying applies in every situation. Even if some people consider lying moral in a given case, logically it doesn't make sense, because if everyone lied, it would break down the way the world works.
Mitch: I don’t think he really cared about that, to be honest.
Tom: No? That was one of the examples I found; maybe I've explained it horrifically. My girlfriend, who's studying a masters in philosophy, said to me that the way to think of these philosophers is that they came from a completely different time, when science wasn't the biggest thing: you sat down and thought about how things worked, especially if you look back through history to Aristotle and Plato. So with Kant, the idea was almost that a person's actions came from the stars, came into you, and acted through you. She said to think of it in that light: these kinds of actions coming down and passing through you. But at the same time, I know he had a big massive thing about free will, and that was kind of central to his theories, wasn't it, free will. So, I've just managed to confuse myself in that sentence.
Mitch: Reeling this back from free will to deontological theories, which is what he's based in: going very simply, the moral status of an action is determined solely on the basis of the rightness or wrongness of the action itself. Which means that it's categorically wrong to lie in any circumstance, regardless of the consequence.
Tom: Yeah. And I don't know, because someone says that to me and I'm like, okay, 90% of the time you're right, personally. But there are some times when it must be okay.
Mitch: Do you want me to tell you a story? I took philosophy and ethics at A-level and I created a situation. I wrote it in an essay and, like, quoted myself. It was a bit egotistical of me.
Tom: Well, with an IQ as high as yours.
Mitch: Yeah. I called it the man on a podium. A man stands on a podium with explosives strapped to his chest, surrounded by 20 people. Would you be able to shoot the man and kill him, saving the 20 people, or would you not, depending on which morality system you're using? Deontological theory would state that you cannot kill that man. Even if you're saving those 20 people, you can't kill him, because murder is wrong; the consequence of saving those 20 people doesn't matter. Whereas with utilitarianism you'd be like, take the shot, save the 20 people. I came up with this because I was so against deontological theory at the time. There are so many situations you see in the news where people use it as a scapegoat. Religion uses the deontological argument a lot, like in abortion cases where they don't allow the abortion to save the mother, because you're killing someone, even though it's not a person yet, and then the mother dies. That's a deontological opinion on that situation, whereas the utilitarian take would be that the consequence of the mother surviving is the greater good. So that's why I came up with the man on a podium: it's very on the nose, 20 people die or one person dies. It's exactly the same as the train lever, but you've got a clearer choice to make with the man, because the man standing on the podium is obviously a bad man. It's not a choice between five good people and one good person; it's a choice between 20 good people and one bad person. And deontological theory would still be like, well, murder is wrong, so it doesn't matter.
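The split Mitch describes can be made concrete in a few lines: the same act gets opposite verdicts under the two theories, because one scores outcomes and the other checks an absolute rule. The function names and numbers below are a hypothetical sketch, not a serious formalisation of either theory:

```python
# Toy encoding of the "man on a podium" dilemma. Utilitarianism judges
# the outcome; deontology judges the act against an absolute rule,
# regardless of the outcome. Names and numbers are illustrative only.

def utilitarian_verdict(lives_saved, lives_lost):
    """Consequentialist test: permissible iff the outcome is net-positive."""
    return "take the shot" if lives_saved > lives_lost else "hold fire"

def deontological_verdict(breaks_absolute_rule):
    """Deontological test: impermissible if an absolute rule
    (here, 'do not kill') is broken, whatever the consequences."""
    return "hold fire" if breaks_absolute_rule else "take the shot"

# Shooting the bomber saves 20 at the cost of 1, but breaks "do not kill".
print(utilitarian_verdict(lives_saved=20, lives_lost=1))   # take the shot
print(deontological_verdict(breaks_absolute_rule=True))    # hold fire
```

Note that `deontological_verdict` never looks at the lives saved or lost at all, which is exactly the feature Mitch objects to.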
Tom: Yeah, and that's something a lot of moral dilemmas don't challenge: the good-versus-bad-person element, when there is someone in the middle committing an absolutely heinous act. Is it your moral duty to prevent that heinous act? Because even deontologically you could be split both ways: yes, it is wrong to kill, but it is also wrong to let that immoral act happen.
Mitch: Yeah, but how would you stop that immoral act? You wouldn't be able to. That's the…
Tom: But then which one wins? If you were deontological, which one would win in that scenario?
Mitch: Well, if you're going from a religious point of view, you wouldn't be able to kill that person, because you'd be going to hell. Well, this is purely Christian; I'm not going to try and expand on that.
Tom: I would challenge a priest though to not choose to kill one person to save 20.
Mitch: Oh, you know, they would argue that God works in mysterious ways and that that's the choice of the situation.
Tom: Yeah, but then if they're looking at that morally, they are putting their own need for an eternity in heaven above the needs of 20 people.
Mitch: Yep. But it's not like those 20 people are going to hell; you would assume those 20 people are going to heaven anyway.
Tom: Right. Okay. Yeah. I guess this is where, yeah, yeah.
Mitch: I think this is a dark road to go down, but you bring up a good point: good versus evil, which is a whole 'nother section we could move on to. What is good? What is evil? What's the versus?
Tom: And how subjective is it? Anything else you want to talk about?
Mitch: I've got one more, I think, which I can quickly cover: virtue ethics.
Tom: Oh yeah, yeah. Go for it.
Mitch: It takes root in the work of Aristotle, the ancient Greek philosopher known worldwide. Unlike deontological accounts, which focus on learning and subsequently living by moral rules, virtue-based accounts place emphasis on good habits of character. So there's a set of virtues, which I haven't written down, but things like courage are good virtues to have, and it's about working towards those good characteristics. The idea is that if you work on those, you're being morally good, because they're good characteristics.
Tom: My problem with this was how subjective it is, because it all depends on my definition of what a good person would do in my scenario, while I'm in a scenario that's changing my mind about what a good person might do; and it's completely different to what you might think someone would do. I did identify with it somewhat, though, because I remember growing up and thinking, I didn't have anyone in particular in mind, but: what would a good person do in this scenario? And I think it has some religious connotations, like "what would Jesus do in this scenario?". But it's not the worst mindset to bring someone up with, because it does allow you to say: okay, the good person here would kill that man on the podium to save those 20. But again, that's subjective, depending on how else you were raised or how you align. So I did identify with it in some way.
Mitch: There's the point that, being British, a lot of our laws are based on religious writings and the like; that's where they originate. Not so much anymore, because our laws are now based on case law, so meanings change as things happen, which is one of the many awesome things about Britain and our justice system. But the main building blocks were things like, what is it, the Ten Commandments, and then we've built on it from there, which is good and bad.
Tom: Oh yeah, it makes sense. It's not like anyone says, "I disagree with not killing people"; that doesn't make sense. Even if they stemmed from religion, these are also things that just make sense: not thieving, not killing people, all that kind of stuff.
Mitch: I had a very interesting conversation with my law lecturer about euthanasia and the legality and morality around legalizing it. Initially I disagreed with his opinion, which was, I'm going to paraphrase, along the lines of: in a civil, humane society, you cannot have laws that justify killing people. And I was like, well, these people want to die; it's their choice and they should have the right to make it. But then I came around to seeing where he's coming from, because wanting to die seems to go against everything human, and you can't have a civil society with laws in place that allow people to die.
Tom: True. But they would be very tight laws in the countries where they do allow it. Is it Switzerland?
Tom: Geneva? In the places where they do allow it, it's not like 30% of the population is committing suicide. It's very few, and it's people in the absolute extreme situations; they don't let just anyone do it. It's, as you said, case-based. So yeah, I do see his point, but in practice I don't think it would have the wide implications he's maybe scared of.
Mitch: He kind of took me away from the immediate "it should be legalized" to "actually, hang on, let's think about the implications, what effect it would have on our laws and on people". Because I know the main reason people go over to these places is that they want to be at home with the people they love, sat in a comfy place, not in a country miles away. So I don't have a problem with it, but it does seem counterintuitive to have a law in place for it.
Tom: Yeah, I do get that. I guess I haven't had the chance to think about it enough to be as far down that path as you are. I think I'm at the barrier where you probably initially were: no, it's their choice. But I do see the reasoning behind not having the laws there; it does make sense. But then, counterintuitively, America, which has laws against that, has laws allowing the death penalty.
Mitch: Yeah, I'm very much against the death penalty. And, I can't remember what country it is, but there's one practice I'm even more against, not especially because I'm a guy but because of false positive rates: chemical castration for rapists and paedophiles, to stop them reoffending. A lot of people would instantly be for it; it's a very knee-jerk reaction, and rightly so, because it is one of those topics. But think about how many false positives there would be: people imprisoned, having lost at trial for some reason even though they weren't guilty. They've been found guilty of the act without having actually done it, and then they've got to live with the castration for the rest of their lives. You can't reverse it if new evidence comes to light. It's an irreversible thing that's been done to them.
Tom: That's the issue with the justice system as a whole: we still haven't nailed that down 100% yet, as a society.
Mitch: Plus, physiologically, if you were to castrate a man, he becomes repressed and pent up, and it actually increases offending. They're more likely to go out and offend; not to rape someone, but to go out and sexually attack people. So there's a whole bunch of things you have to think about.
Tom: I think that's people, as you say, knee-jerk reacting: "he did that, let's castrate him", without thinking about the scientific data showing that if you do this, it makes things worse. People get very angry about this, and 100% they should, but they see red a little bit and the reasoning kind of stops there.
Mitch: Yeah. Which is a great example for these moral theories, and a great example of the subjective ones, because that's a subjective choice people are making on a moral dilemma, and it's being affected by emotions and that sort of thing. Which makes a massive con for utilitarianism: emotional bias and influences come into it.
Tom: Yeah, but is that a con? Okay, I guess if you put it as emotional bias, yes, that is a con. I don't know, it's hard, isn't it? I was looking at myself, like, okay, where do I line up? And I was thinking that in some instances I'm fairly Kantian, like case-by-case Kantian, if that makes sense, where, okay, yeah, you'd never want that thing to happen. But really I think I'd come down on utilitarianism in other instances. Though not hedonistic: I don't want to maximize pleasure. I think that would be the wrong way to look at it. Well, the right way for some, but the wrong way to look at the world.
Mitch: You don’t have to be a subset. It can just be utilitarianism.
Tom: Yeah. So it was like, well, wellbeing. That definitely makes sense. But maybe not wellbeing purely for humans. I think we're stepping into an age where wellbeing can include other entities in our world, like the wellbeing of the planet. If you look at it as one living thing, even though I guess it's not, but if you did, you could have some kind of moral and ethical virtues towards that.
Mitch: Yeah. But that's the thing with utilitarianism: it doesn't define good, so that comes within the scope of good for you.
Tom: Yeah. Okay. So that does leave like a lot of subjectivity.
Mitch: Yeah, that's just one of the points of contention with it. It just says good, or wellbeing, or pleasure. And pleasure could be knowing that the Earth is going to survive for another million years because I'm not going to destroy it. That could be pleasure for you.
Tom: Yeah. Yeah, that’s true.
Mitch: The pleasure and pain don't actually have to be physical. It doesn't have to be sexual, or through food or stimuli. It could be thought and knowledge…
Tom: an ideology almost?
Tom: Yeah, okay. Where would you say you are, then?
Mitch: I’m very much utilitarianism.
Tom: Do you see any other aspects of yourself in there, or? Um, I feel like, if I'm in a situation, I do think about the consequences of my actions or of the situation.
Mitch: Growing up, you're always told, "what are the consequences of your actions?" When you're being told off: "did you think about the consequences?" So we're taught, or conditioned if you want to put it like that, to be utilitarian, because we're always taught to think about the consequences, not the action itself.
Tom: Yeah. And I guess unless you go and study philosophy at a higher level, you're not learned enough to make the decision to be something else.
Mitch: Yeah. So I would say I'm very much into consequentialism. I wouldn't say there's ever been a situation where I've been like, "this is wrong because I've said it's wrong; I don't care what I'd gain from the consequences, this is wrong." I can't think of a situation where that's happened.
Tom: I have a final question. Is morality nature or nurture?
Mitch: Um, it's definitely nurture. Though that is a very, very, very big question to just be like, "oh, is it this?"
Tom: Final question, we have two minutes… big question.
Mitch: It's a very big question. A lot of people would say morality could be innate.
Tom: I think some things are…
Mitch: And then we are taught things. But I don't know, I just can't see things being innate. Like, if you were to take someone and put them away in a cell, so they didn't have any conditioning from society growing up, I don't think you could accurately say that person wouldn't go and kill people.
Tom: But by taking someone and putting them in a cell away from society, you are disabling that person emotionally. So at the same time it's not an accurate representation of nature versus nurture: you've nurtured them to lack anything.
Mitch: Okay, so here's a better argument: how would you go about testing the whole nature-versus-nurture question? Because by that logic everything would be nurture, in my opinion. You wouldn't be able to test nature, so nature wouldn't exist, if that's the case.
Tom: If we look at animals, that comes back to the evolutionary theory that I mentioned earlier…
Mitch: I disagree with that.
Tom: If you know what I mean? Because they don't go around killing each other all the time. Some animals do, okay, sure: if an ant sees another ant colony, it's going to mess it up. That's just how they do it.
Mitch: I disagree with being able to draw parallels between animals and human beings.
Tom: To what level?
Mitch: Morally, I don't think you can. We did the whole…
Tom: I'm not drawing parallels. What I mean is, if you're looking at animals, you're looking at where morality may have come from, in a less developed brain structure. Evolutionarily, where did morality come from? It's got to come from somewhere…
Tom: So if we look back evolutionarily: how morally do animals act? And we can see that they do act morally, in primates at least, and cetaceans, but…
Mitch: But they're acting morally based on standards that we've put there, if that makes sense?
Tom: I mean, that is true, but they also reciprocate: they have a fight, and then they'll, like, make up afterwards.
Mitch: But are they acting morally, or are they acting out of instinct because they need to further their gene line?
Tom: But then if we are not killing other humans because we don't want to receive the reciprocation of that, at the end of the line we are doing that to preserve our own gene lines also.
Mitch: Interesting. I think it’s hard to be like…
Tom: I do agree it's like 90% nurture.
Mitch: …let's look at other species when we're looking at where our morality would have started, I would say. Because I reckon there was a lot of killing each other when we didn't have language, and, to go back to our earlier podcasts, it's all linked: language would have started us off into being intellectually higher, and because of that we then gained morals, and they would have changed over time obviously. But that's where you can start saying that things have morals: because there's language there.
Tom: That's how we as humans can talk about morals. It doesn't mean it doesn't exist otherwise.
Mitch: We’re then going back to defining what a moral is then.
Tom: Did we ever define that? I don't know if we did. We talked about different moral theories.
Mitch: Yeah. Because I would say a moral choice, for us, is something on a higher level, because we can debate and think about things that aren't just instincts. Like, I can go and have a bar of chocolate that's not going to do anything for me, even evolutionarily or in terms of the animal kingdom.
Tom: True, but then eating a bar of chocolate wouldn't be a moral decision. I agree with you, it's a completely different level.
Mitch: Animals go and eat food, and they hunt to survive. Is that a moral decision for them, or is that just instinct? That's what I'm saying: their decisions aren't moral decisions, because they're instincts. They're acting out of instinct.
Tom: But then aren't we just reacting to our food, to our stomach saying it's hungry?
Mitch: Yeah, but I'm going to get that bar of chocolate because it tastes good. That's the difference, and why I don't think you can compare animal morality, or say that animals have morality at all. Could you say that an animal is acting immorally?
Tom: Yeah, because they deceive each other for their own gain. And I know they don't have that concept of morality, but to us it's something we're using to describe a situation. Even as we discussed at the beginning, we are bound by this unknown set of rules that we are labelling morality. Just like lasagna.
Mitch: Hm. That's what I'm saying, though. They're not acting immorally; we are defining them as immoral, but they're not. So you're projecting our standard of morality onto things that can't even comprehend morality.
Tom: But just because they can't understand quantum physics doesn't mean it doesn't exist.
Mitch: That is very different. You can't put a scientific approach on something that is purely based on psychological tendencies.
Tom: I definitely see your point, and I'm not totally against it. I was just wondering, evolutionarily, whether we can see where it's come from. Obviously to them it's not a moral thing, they have no concept of morality, or maybe they do, we don't know. So I was just wondering where that comes from. Is that how we were in the past? Have we gone past that stage, or did we take a completely different route?
Mitch: I think we came by a completely different route. I don't think it came from that. I think up to the point where we couldn't speak, couldn't linguistically talk things through, that's where we diverged. Until then we were working on instinct, just hunting and gathering to survive. I think that's a better point of change: when you're living for survival and not beyond it.
Tom: Yeah, I think that was my confusion with the going out to eat versus getting a chocolate bar, because you're not driven by the same instinct in the same situation.
Mitch: That’s why I don’t think morality is innate because if it was innate, all humans would have the same morality and we don’t.
Tom: Yeah, okay. Well, interesting food for thought, isn't it? All right, but I think that is all we have time for this week, and we have gone over massively. So if you guys want to check out all the latest goings-on, you can head to conductscience.com. You can find us on Facebook and Twitter by searching @ConductScience. If you want to get in touch, suggest a guest, or suggest a topic for any of our shows, please use the #ConductScience. On Friday I'm doing something a little bit different with the Method Section, where we're doing a guide to coding bootcamps, so check that out. And next week we are going to be talking about addiction, so look forward to that. This has been an incredibly interesting episode, and I'm sorry to all those philosophers out there we've offended by getting all this stuff inherently wrong. But yeah, that is it from us this week, so we'll see you guys next time.
Mitch: Speak to you later.