Hello World

How moral is your machine? Ethics in computing education

May 10, 2021 · Season 1, Episode 2

This week, after watching too many science fiction films, Carrie Anne and James ask, "How moral is your machine?". We explore whether computers are still simply tools, now that they are increasingly capable of making recommendations and taking decisions for us. What are the consequences if developers do not take the time to consider ethics and morality in their programming? And how, as educators, can we usefully bring this conversation into the classroom to engage and inspire our learners?

Transcript

Diane Dowling:

It was all sort of hyped up: the algorithm has decided this, you know, the algorithm made this really rubbish decision. But of course, it wasn't the algorithm, it was the programmers who wrote the algorithm.

Marc Scott:

I think one of the biggest challenges is a programmer's unconscious biases...

Carrie Anne Philbin:

That it is, of course, all the algorithm's fault. Welcome to Hello World, a podcast for educators interested in computing and digital making. I'm Carrie Anne Philbin, a computing content creator at the Raspberry Pi Foundation, currently leading the development of the Teach Computing Curriculum.

James Robinson:

And I'm James Robinson, a computing educator working on projects promoting effective pedagogy within the subject. If you'd like to support our show, then subscribe wherever you get your podcasts and leave us a five-star review.

Carrie Anne Philbin:

Today, we're diving into ethics in computing. And I have a question for you, James: how moral is your computer?

James Robinson:

Oh, that's such a tough question for a Monday afternoon, Carrie Anne. I think it's a really difficult question, and there are a couple of ways we could answer it. We might look at how we think about computers: we might just simply look at them as tools to help us do things, to create things. And so we might think, well, actually, how can a machine be moral? How does morality or ethics come into it at all? But I think where it gets a little bit interesting is that, unlike lots of tools, human beings as a species, we invent tools all the time to do things for us. A computer is one of the first tools that we've used to actually do some of our thinking and decision making for us. And the moment we start to kind of outsource our decision making or thinking, then actually that's where morals and ethics become really, really important. And some of those decisions might be really simple, like what should I watch next on my streaming service? Or they might be really complicated things, like which route shall I take and whose life shall I risk in this automated self-driving car? All of these decisions are really important, but some are more important than others. So what are your thoughts on the topic, Carrie Anne?

Carrie Anne Philbin:

Well, I noticed you managed to completely bypass my question there and not actually answer it at all.

James Robinson:

That is not like me, Carrie Anne. I'm normally very direct.

Carrie Anne Philbin:

Well, I think right now my machine is not very moral at all. I'm not sure that my computer or my phone necessarily has the capability to make decisions and therefore have morals at this point, but I'm guessing some of our guests might argue with me about that. I do see social media giving me recommendations, and you talked about Netflix; some of the services that I use are starting to recommend or make decisions for me. And I definitely do see that as a sort of sliding, stepping-stone scale towards ambiguity, and that makes me very, very nervous. So I can definitely see it from that point of view. But I think really I want to understand more about what questions I need to be asking and why I should care about it. And I guess more importantly, how can I talk to students and young people about this in a way that both makes them interested and excited about it, and also makes them really understand and connect what they're learning in computing with what is happening around them at the moment and in the future?

James Robinson:

And I think something else you mentioned there was that sort of nervousness. I think there are a lot of people out there who maybe aren't as informed about computing and morality and ethics and the importance this plays, and actually some people are fearful about the role that AI might play in the future. That might include some people from older generations, but also some young people will be quite fearful. So I think it's imperative that as educators we inform them, we give them that knowledge, to empower them to feel secure in a future where AI is going to play an ever increasing role.

Carrie Anne Philbin:

I mean, I'm constantly terrified of the computer from 2001: A Space Odyssey becoming a reality and finding myself jettisoned from an airlock. Maybe it's just around the corner, but maybe we're a little bit far away from that at the moment. It is a constant worry, though. Luckily, James, we don't have to worry about this too much, because we've been joined by two amazing guests who are going to help us solve this riddle. We've been joined by Diane Dowling of the Raspberry Pi Foundation, who's currently working on Isaac Computer Science and wrote an article on ethics and computer science in issue 13 of Hello World magazine. Diane, can machines be moral?

Diane Dowling:

Well, Carrie Anne, that is an amazing question. So, yes, I believe they can; that is fundamentally what I believe. But I think there is a huge responsibility on us as computer scientists to make sure that they do make correct moral judgments. And I think the really big thing that we have to do is to really understand what computers are currently good at and what they're not so good at, and the way that computers currently make decisions, because clearly things will change and it's early days for this. My appetite was whetted for the subject some time ago when I went to the Science Museum and there was a little section on artificial intelligence. Part of that display was a project from MIT in the States that was actually collecting data. That in itself was quite interesting, because obviously when you take students there you direct them towards it immediately and encourage them to give over their data. But the project was the one about driverless cars, and you had to say what decisions you would make when faced with a particular dilemma. So on the screen in front of you, it popped up, first of all, with some quite easy decisions: basically you could swipe to the left or swipe to the right. You were always given two choices, and the choice was which one you were going to hit. For example, in the early decisions it was things like, are you going to hit a tree or a wall? And then it started to be real things that you were hitting: real people. So you had a choice: maybe an older person or a younger person, a cat or a dog. And it was remarkably tricky, because clearly as a driver it's possible for me to have to make that choice, but I would never really sit there and think about it beforehand; it would be a split-second decision. But that was really where my interest was piqued, and I started to really think about these moral questions that computers in the future would have to make.

James Robinson:

Thanks, Diane. We're also joined by Marc Scott, who's a teacher and passionate about open education resources and also ethics in computing. So Marc, Diane's sort of gone some way to answering this question already, but what kinds of ethical dilemmas might we be asking machines in the future to solve for us?

Marc Scott:

I think until we get to the situation where we have what's known as strong artificial intelligence, so artificial intelligence that can actually make reasoned decisions on its own, I don't think machines really get to make ethical decisions. At the end of the day, the machines follow programming instructions, and a programmer has typed those instructions into the machine. So when it comes down to it, it's the programmer that's always making the ethical decisions. The machine is going to choose, for instance, in Diane's example, whether to move left or right based on a set of rules that have been provided by the programmer. So the programmer has made the ethical decision about what the machine is going to do, and the machine is just following those instructions. This is happening right now. The Boeing 737 MAX, I think it was, had a series of recent disasters because the machine was making decisions and overriding what the pilot thought the plane should be doing. When the computer on the plane thought that the plane was climbing too steeply, the computer program overrode the pilot's decision to climb and decided instead to basically crash the plane, and the pilot had no ability to override it. And that was simply because of a faulty sensor. But the programmers of that computer program had made the decision that, based on sensor data, the plane would either climb or descend; it was the programmer that was making the ethical decision, not the computer. And hopefully we'll get into the difference between what ethics and morals are a little bit later.
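
To make Marc's point concrete, here is a minimal sketch, in Python, of the kind of hard-coded rule he describes. The function name, the obstacle labels, and the priority ranking are all invented for illustration; no real vehicle works from a table this simple, but it shows where the ethical choice actually lives: in the rules the programmer writes, not in the machine that executes them.

```python
# Hypothetical sketch: a hard-coded "swerve left or right" rule.
# The ethical choice lives entirely in the rules the programmer wrote,
# not in the machine that executes them.

def choose_swerve(left_obstacle: str, right_obstacle: str) -> str:
    """Return 'left' or 'right' based on a fixed priority written by a human."""
    # The programmer decides this ranking; the car merely applies it.
    cost = {"wall": 0, "tree": 1, "animal": 2, "person": 3}
    # Swerve towards whichever obstacle the rules rank as less costly to hit.
    return "left" if cost[left_obstacle] <= cost[right_obstacle] else "right"

print(choose_swerve("tree", "person"))  # -> 'left': the rule, not the car, chose
```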

Carrie Anne Philbin:

So, Diane, listening to Marc, it sounds like these might just be concerns that programmers have right now, or that programmers of the future may have. Why should we, or why should our students, care about this question now?

Diane Dowling:

Well, here in the UK we've had a really great example recently, which is topical for every young person, and I'm sure it's similar in other countries. Because of the global pandemic, formal exams were cancelled, and they've been cancelled again this year. And the decision was taken that an algorithm would determine the final grades the students got. That actually exposed a lot of issues. As Marc said, it was all sort of hyped up: the algorithm has decided this, the algorithm has made this really rubbish decision. But of course, it wasn't the algorithm, it was the programmers that wrote the algorithm. This is something that is really interesting to debate with young people, because as a response to that, and as a response to this sort of hatred almost of the algorithm, it's been decided that this year in the UK teachers will make the grade assessments. And, of course, that's left teachers quaking in their boots; it's a huge responsibility for individual teachers to have to do this. So I think it's a great area for debate, because the fact is that an algorithm is less subjective: once you've programmed it with the rules, so long as the rules have been programmed correctly, hopefully it will make the correct decision. Clearly, it would have been a lot better if it had been well tested beforehand. But to go back to your question, Carrie Anne, this is something that students can understand. They can engage with it; they can really get the idea of a computer making a decision, which is far more relevant to them than something like driverless cars, because most of them won't drive yet.

James Robinson:

And I think that's a really interesting point you make there with that example about the exam results from this year. As Marc has alluded to, somebody has made that decision at some point: somebody has sat down and designed the algorithm, the data model, that is going to help the computer make those decisions and put a student in category one or category two, give them a certain grade or another grade. And I think maybe what's unnerving for people is that decisions are being made en masse by one or two people, or a small group of people, rather than that sort of situation being judged at an individual level by somebody that knows them. So maybe it's sort of compounded by that. I also think it's really interesting, Marc, that you alluded to a difference between ethics and morals. Do you want to delve into that a little bit further? I find this really interesting: these are two words people might think of as being the same thing, but there is a kind of fundamental difference there. What do the two words mean to you?

Marc Scott:

Well, this is just what they mean to me and not necessarily what they mean to everyone, so I'm not going to put words into any philosopher's mouth or anything; this is just what I feel. Morals are whether something is right or wrong, whether an action generally is right or wrong, and there's a huge amount of culture that goes into those decisions. So if you're brought up in a country whose major religion is one of the Abrahamic faiths, for instance, the chances are most of the Ten Commandments probably resonate with you in some way. Thou shalt not steal: that's a moral thing, whether you should steal or not; it's immoral to steal. Yet if you were raised in Sparta, in ancient Greece, as a young boy, you were taught that stealing was an absolutely positive thing and you were encouraged to do it, so stealing was moral. With morality, I see it as a very binary choice: an action is either moral or immoral. On a slightly different side, there's legality as well. Something can be legal or illegal; again, very binary decisions. These are rules written by people in power, and they're not necessarily moral: legality and morality don't necessarily line up, so you can have laws that are immoral, actions that are immoral but legal, and actions that are moral but illegal. Ethics is completely different, though. Ethics is where you're weighing up the pros and cons of multiple decisions. So, again, going back to the Ten Commandments: thou shalt not kill, so killing is immoral. However, killing one person to save the lives of ten people might be immoral, but it might be ethical. When you're making ethical decisions, you're taking everything in the round; you're weighing up lots of moral and immoral decisions, maybe a lot of legal and illegal decisions. You can break the law but perform a very moral act. And this is why computers are going to really struggle with ethics. It's moral to give to charity; everyone would more or less universally agree that it's morally good to give to charity. So if you programmed a computer to take control of your bank account and act morally, it could run amok and just end up giving all your money to charity. That's a moral thing, but then you and your children end up going hungry because you've got no money, so it was an unethical action for the computer to take. That's where ethics becomes really, really tricky. There are so many different inputs into ethics that humans are really good at weighing up; we can instantly make that decision, we can instantly make that judgment call. Me giving a hundred pounds to charity is a good thing; Jeff Bezos giving a hundred pounds to charity is like, "so what?", all right.

Carrie Anne Philbin:

So, clearly, the questions around ethics versus morals and around these sorts of dilemmas are topics and discussions which bring a lot of relevance to computer science for our young people, by the sounds of things. What are the concepts and issues that we need to teach our young people so that they can understand this area of debate? Diane?

Diane Dowling:

I think one of the big things that is lacking at the moment is informed consent for your data. We all have to give consent to hand over our data, we all know that we have to give that consent, and we know that data protection acts in their various forms are there to protect us. But one of the big problems is that when we hand over our data, there's so much stuff that you have to read about how that data is being used, and nobody ever reads it. So, for example, I currently give my data quite happily; I'm happy to hand over my health data. I have a Fitbit device that measures my steps, my heart rate, stuff like that. And at some point, because obviously a lot of people are doing a lot of analysis at the moment on people's well-being, I said, yes, you can have the data from my Fitbit. Now, that data will probably be used in the future as part of one of these kinds of decision-making processes. There will be data in there saying this person of this age is this fit on average, and therefore we can give this treatment or this course of action. Because I love technology, I was happy to give my data, but I think we need to explain far more to people exactly how their data might be used in the future, particularly the data that they're handing over for research purposes. I think we need to make that far more transparent. We need to educate people. Morals, ethics and the law, as Marc said, have been on the curriculum for a long, long time, but it's really only now that you're starting to see some really nitty-gritty decisions that may arise from the data that you're handing over. So I think basically it should be a bigger part of the curriculum. People should be debating it, discussing it, and really thinking about it, particularly when we're teaching computer science. The young people we are teaching are going to be the computer scientists of the future; they will actually be working on the systems that make these decisions.

Marc Scott:

And I think one of the biggest problems is what I said at the beginning, that computers and computer programs don't make ethical decisions; that's down to the programmer. And I think one of the biggest challenges is a programmer's unconscious biases and how they come into the decisions their computer programs make. We've seen this a huge amount in machine learning at the moment. There has been quite a lot of controversy over facial recognition software, which in some cases is informing the police about who is a criminal suspect and who is not. Based on the training data that humans provided to the machine, the underlying algorithm that comes out of the machine learning model ends up inheriting those same biases as the programmer, and that can happen a huge amount. So a lot of it is about training programmers and making sure that programmers are aware of their unconscious biases, so that they don't enter into the work they're producing.
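
As a rough illustration of what Marc describes, here is a toy sketch in Python, with invented groups and numbers, showing how skewed training labels alone can produce a skewed model. The "model" below is nothing more than the per-group rate of positive labels, so nobody has written an explicitly biased rule, yet the bias in the data comes straight back out.

```python
# Hypothetical sketch: a toy "model" that just learns base rates from its
# training labels. If the labels are skewed against one group, the learned
# model reproduces that skew. Groups and counts are invented for illustration.
from collections import defaultdict

# Invented training data: (group, labelled_as_suspect)
training_data = [("A", True)] * 80 + [("A", False)] * 20 \
              + [("B", True)] * 20 + [("B", False)] * 80

def train(data):
    counts = defaultdict(lambda: [0, 0])  # group -> [positive labels, total]
    for group, suspect in data:
        counts[group][0] += int(suspect)
        counts[group][1] += 1
    # The "model" is just the per-group rate of positive labels.
    return {g: s / t for g, (s, t) in counts.items()}

model = train(training_data)
print(model)  # {'A': 0.8, 'B': 0.2} -- the bias in the labels becomes the model
```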

James Robinson:

I think that's really interesting, Marc, because earlier on we were talking about programmers being the ones making decisions, and in the examples you're talking about there, where machine learning is involved, machines are looking at a massive data set and surfacing some insights based on that data set. There isn't necessarily an active decision by the programmer; it might simply be that the data you put in determines the decision, or the sort of learning, that you get out. And so it's about making sure that the data we feed the system, that we train the system on, is as representative and unbiased as possible. I think that's a really great point, and I think there are lots of good examples out there in the media at the moment that teachers can draw upon to discuss with their classes.

Marc Scott:

Yeah, it's not difficult to imagine a situation where a self-driving car in the future, because of the data it's being trained on, ends up always avoiding and saving the lives of a certain demographic but ignoring another demographic, just because of the training data it's been provided. Obviously, the machine is making incorrect decisions that are very, very unethical, but it's the fault of the programmer and the training data that they provided to the machine.

Carrie Anne Philbin:

I'd also argue that not only should we be feeding in the right data and having considerations around that, but that the reason we should talk about this with young people within formal learning contexts is because that is where there is diversity. Really inspiring those young people to want to go into technology and become those programmers of the future will, I think, help ensure that there is diverse representation, particularly across machine learning and AI. So I would argue it's important that we really carve out some time in our curricula, in our school days and weeks, to find opportunities to dive into these topics. So how do we go about doing this? When is a good time to introduce ethics in computing? What's the right age? What's the best way? What tips do you have? What advice do you have for teachers to start driving these conversations?

Diane Dowling:

From my own experience, I've only taught 16- to 19-year-olds, so to some extent when to introduce it may be hard for me to answer. Although I know that students even at junior school, so in the UK that's up to age 11, will do philosophy and ethics, I think, as a subject, maybe as part of another bigger subject. So certainly those conversations are already going on, but maybe not really in the context of computer science. So maybe some of it is making sure that other teachers are aware of what's on the horizon, and maybe tweaking the content so that it is appropriate to computer science as well. One thing that I used to do in terms of introducing the subject of morals to my students had nothing really to do with computer science: just asking them whether they would read their sibling's diary. So you see a diary sitting there, they've left the diary out; would you pick it up and read it? I just got them to do some voting on it, and it's really interesting to then explore the reasons why some of them would not. It's one of those things that is really simple, everybody understands it, but it sort of opens up a discussion about morality and where we get our codes of practice from. It was fun and quite engaging and led to some good arguments in the class. And then you can obviously build on that to tackle some more tricky issues. The one thing I would say, and I think I made this point in the article, is that there are a lot of quite sensitive issues around morality and ethics; the plane example that Marc used earlier is one. You've got to be very conscious of the fact that some people might get quite upset by some of these discussions, and as a teacher you've obviously got to handle these situations skillfully. But this isn't just true of this topic; it's true for many areas of teaching.

James Robinson:

And you mentioned something there, Diane. As a trained maths teacher and computer scientist, I'm very, very used to sort of binary answers, and I think the D word, "discussion", may be something that makes a few teachers of the subject a little bit uncomfortable. So do either of you have any advice about really interesting practices or practical activities that can help teachers bring this subject alive, particularly if they're maybe slightly nervous of engaging with this slightly sensitive, very talky kind of topic? Any thoughts from either of you?

Marc Scott:

I mean, the last time I taught this was at secondary school, and I taught it to year sevens and year eights. The way I introduced it was through the famous trolley problem: with the trolley going down the track, whether you pull the lever to make it avoid somebody, and then it ends up killing another three people. I had an excellent lesson on it, and I wasn't the centre of the discussion. The way I approached it was that I explained the trolley problem to them, and then I had the children use whiteboards to write down what their feelings about it were, what their decision would be. They then discussed it in small groups and eventually reported back, and I just increased the dilemma each time, made it more and more difficult for them to try and make decisions. But I kept it very much so that they were the ones doing the talking; it wasn't necessarily led by me. If you're going to introduce it, I'd say the trolley problem is a little bit hackneyed now and a little bit old, and it almost instantly relates to self-driving cars, which, to be quite honest, the average eight to nine year old doesn't care about at all. But something like a YouTube recommendation system is really relevant to them. Children do not listen to the radio anymore and they do not have Top of the Pops, which is just stunning: they don't have that access to the latest top music. So they're getting their music mainly through YouTube, and my own son has entered a feedback loop where he's now into swing music from the 1920s and 30s because of a YouTube feedback loop. He listens to one track a few times, he likes it, and it feeds into that, and it feeds into that. Children will be really well aware of how cliquey they can become about their music and the shows they like because of the YouTube feedback loop. And then you can instantly flip that over to start talking about some of the more negative aspects of that. So, for example, with the Netflix recommendation system, I watched an incredibly tactful and lovely documentary about flat earthers; at the end of the day it was about how flat earth theory is generally wrong, but they treated it in a really delicate manner. It was really good, but because I watched a documentary about a conspiracy theory, the next recommendation from Netflix was alien autopsies. It's easy to fall into that rabbit hole of going down the conspiracy route through your recommendations. So that's something that children will understand: they understand these recommendation systems, but you can show them how those recommendation systems can lead people down very, very dark paths.
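
A feedback loop like the one Marc describes can be sketched in a few lines of Python. The genres, counts, and "always recommend the most-watched genre" rule below are all invented to keep the illustration simple; real recommendation systems are far more sophisticated, but the snowballing effect is the same idea.

```python
# Hypothetical sketch of a recommendation feedback loop: the system always
# recommends the genre watched most so far, and each watch makes that genre
# even more likely to be recommended next time. Genres and counts are invented.
from collections import Counter

history = Counter({"swing": 2, "pop": 1, "rock": 1})  # one extra swing listen

for _ in range(20):
    recommended = history.most_common(1)[0][0]  # top genre so far
    history[recommended] += 1                   # the viewer watches it

print(history)  # the small initial lead snowballs into near-total dominance
```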

Carrie Anne Philbin:

I mean, I wouldn't look at my Netflix recommendations, because they are all, you know, toddler cartoons that would resonate with a three- to four-year-old audience and not really much else, which means I don't get to watch anything that's cool at the moment. But there you are.

James Robinson:

All that toddler stuff's cool. So we're touching upon pedagogy there, which is really interesting. A lot of IT and computer science teachers may have developed pedagogies that are very specific to those subjects, and I think there are lots of ways we can develop and broaden our teaching practice, either through online courses (there's a great one on FutureLearn at the moment called "How to lead classroom discussions", which is from Raspberry Pi) or by looking at other subjects and the teachers and the practices they use there. Are there any subject areas or teachers that you as practitioners would look at or observe, Marc and Diane?

Marc Scott:

My wife's an English teacher, so I'll go straight to English as a subject. When you're doing any kind of literary analysis, you're always looking at the motivations of characters and things like that, and ethics and morality come into it a lot: look at Macbeth and the ethics and morality of Lady Macbeth and why she did what she did. Or go and see some history lessons, because history is full of morality and ethics and why people make decisions, often very, very difficult ethical decisions where leaders, politicians, and generals are making decisions that are definitely immoral but maybe ethically correct.

Diane Dowling:

And I think, again, post-16, psychology is a great one to go and visit, because any psychology experiment is always going to be guided by some kind of code of ethics, because so many of them have exposed ethical issues. So I found that quite useful and interesting, even debating with the psychologists in the staff room, and also post-16 law: if you have a law department, they can be really helpful. And, going back to something Marc said really early on about the legal issues, it isn't just morality and ethics, it is also the law, and making sure that students understand the difference between the three is really helpful. We've got loads to learn, and I think it's great for computer science classes, which generally tend to be stuck in front of a computer, certainly as students progress through their education, to introduce some of these more lively discussions and debates into the classroom. Drawing on the best practice of your colleagues is really helpful.

Carrie Anne Philbin:

We also took some questions to our audience to find out your thoughts on this topic. We asked: what do your students think about ethics in computing, and how do you approach the subject? @Advanced_ICT on Twitter said, "I discuss news items with the students and create a list of my favorites".

James Robinson:

And there's a lovely comment here from MissMComputing (@Mull_AM), who talks a little bit about, similar to what Diane was saying, starting by posing some general ethical dilemma questions for the class to discuss. They also use MIT's Moral Machine to help link ethical issues with CS and algorithms and to get students debating the issues that arise from it. And then they look at investigating the use of robotics, etc.

Carrie Anne Philbin:

And @MCACompSci says that what they do is check the technology section of whatever breaking news service they use, and then base their classroom discussions on that. If you have a question for us or a comment about our discussion today, then you can email us at contact@helloworld.cc or reach us at @HelloWorld_Edu on Twitter. My thanks to Diane Dowling and Marc Scott from the Raspberry Pi Foundation for joining us for today's discussion.

Which leaves me with just one final question:

James, what did we learn today?

James Robinson:

Well, personally, I found the whole discussion around ethics, morality, and legality really interesting. Having a really firm understanding of those topics, and of how machines will struggle to deal with them, is a really fundamental challenge for our students to understand. So, yeah, that was my takeaway from today. What about yourself, Carrie Anne?

Carrie Anne Philbin:

Well, I've learned that it is, of course, all the algorithm's fault.
