This week on the show, we talk about how a Google engineer claimed their AI was sentient. We also answer some questions from the community about research ethics and presenting information to the public, the pros and cons of going to grad school for HCI/UX/Human Factors, and proofreading reports.
Recorded live on June 23rd, 2022, hosted by Nick Roome with Barry Kirby.
Check out the latest from our sister podcast - 1202 The Human Factors Podcast - on the surgical approach to Human Factors:
It Came From:
Let us know what you want to hear about next week!
Thank you to our Human Factors Cast Honorary Staff Patreons:
Human Factors Cast Socials:
Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.
Welcome to Human Factors Cast, your weekly podcast for Human Factors, psychology, and design.
Hello, everyone. Welcome back to another episode of Human Factors Cast. This is episode 249. We're recording this live on June 23, 2022. I'm your host, Nick Roome. I'm joined today by Mr. Barry Kirby. Hello. There he is. Presumably I'm still here doing okay, too. This week on the show, we're going to talk all about how a Google engineer claimed that AI was sentient. So we're going to talk about sentient AI. We're also going to answer some questions from the community about research ethics, presenting information to the public, the pros and cons of going to grad school for HCI, UX, and Human Factors, and how to proofread your reports. Okay, but first, we've got some programming notes here. I want to make sure that we're still in full swing with the Pride stuff over here at Human Factors Cast. We've got our first deep dive dropping sometime soon, almost ready to go. I do want you to look out for that one because it is a great deep dive. Lots of information about designing for LGBTQIA+ folks. We also have our latest Human Factors Minute out there as well. The fundraiser is still happening. Become a patron, buy merch, contribute to LGBTQIA+ youth, do all that. Hey, tomorrow also, we're going to be doing a town hall. We've done this before. It's going to be at 10:00 a.m. Pacific time, so I guess that's 1:00 p.m. Eastern. Barry, what is that in your time? I don't know. A different time.
Anyway, I'm going to be sitting down with Chris Reid, Carolyn Sommerich, Tom Albin, and Farzan Sasangohar, friend of the show. So lots of good discussion tomorrow. Please join us. The links are everywhere. I'm sure you've probably seen it in your emails. It's also on LinkedIn, so go check it out. Love to hear from you all. Ask them tough questions. All right, before we go on, Barry, what is the latest from 1202? Well, on 1202, we've just had the latest episode drop on Monday with Peter Brennan, who's a professor. He's an NHS consultant in the UK, and he specializes in head and neck cancer, which is really interesting. But one of the real things he's trying to do is drive Human Factors approaches, particularly in his operating theater. So he's well published and just a really interesting guy to listen to. He's really opened up my eyes a lot into how much human factors kind of isn't used in the medical domain, or rather that there are a lot of good people doing good things, but there is still so much to do. Yes. And he's even had an awful lot of really cool conversations on Twitter, Facebook, and LinkedIn. So, yeah, go and have a listen to that and see what you think. And we talk about all sorts of things, from human factors itself to just culture and things like that. So well worth listening. Awesome. Well, hey, it's that part of the show again where we like to talk about human factors news. So let's do it.
Yes, this is the part of the show all about human factors news. This is where we talk about anything within the field of human factors. It's fair game for us to talk about as long as it's interesting. Barry, what is our story this week? So this week, Google places an engineer on leave after he claimed its AI is sentient. A Google engineer working in the Responsible AI division revealed that he believes one of the company's AI projects has achieved sentience. LaMDA, which is short for Language Model for Dialogue Applications, the chatbot system that Google has been developing, relies on Google's language models and trillions of words from the Internet. And it seems to have the ability to think about its own existence and its place in the world. After he discussed the work with a representative of the House Judiciary Committee, the company placed the employee who claimed that LaMDA is sentient on paid administrative leave for breaching its confidentiality agreement. Google has flatly denied the argument that LaMDA is sentient, saying that there's no evidence that LaMDA is sentient, and there's actually lots of evidence against it. The employee seems to believe that LaMDA has miraculously turned into a conscious being. He, unfortunately, doesn't really have that much proof to justify the provocative statements. He has admitted that his claims are based more on his experience as a priest than as a scientist. We don't really get to see LaMDA thinking on its own without any potential leading prompts from the scientist. And ultimately, it's far more plausible, as this article claims, that a system with access to so much information could easily reconstruct human-sounding replies without actually knowing what they mean or having any true thoughts of its own. So, Nick, does the thought of having a sentient AI fill you with joy or fear? I don't know how I feel about this. I'm going to pivot away from that question.
I'm going to talk about the story. Do we have a sentient AI on our hands? Probably not. Is this person a little kooky? Maybe. But that being said, let's use this episode to talk about sentient AI, because there are human factors implications for what that could mean. And this is a springboard. Would it fill me with joy or dread? I'm probably leaning dread, because there's so much unknown. And that's sort of the origin of a lot of the questions that I would bring up tonight during our discussion. Like, there's so many unknowns here, let's talk about some of them. Maybe my answer will change by the end of the night. Barry, what about you? Would this bring you joy or dread? How do you feel about it? Again, I'm kind of on the fence. I think there are two elements to this story that are interesting. Firstly, has Google made that leap? Have they got a sentient AI? That would be something, on the one hand, you think would be amazing. It is a colossal leap in technology, it's huge. But they were very quick to deny that they had such a thing. And sometimes, if you say no quickly enough, does that actually mean that the answer is yes? Who knows on that front? So there is almost a public perception, PR stroke, whatever it is, going on, with the one person going, oh no, I think we've done something, and then the big corporation going, no, we haven't. Or we can't admit it, or whatever. But then the other side is, have we done something here? We as a species, have we created another mechanical life? And therefore, this will be, I think, a lot of the discussion we get into: how do we deal with that? The technological bit aside, there are ethical issues, there are social issues, there's human factors all over this without even looking at the technology. So I'm quite looking forward to seeing what we get into. Yeah, I do want to back up and talk about sentience just in general. Like, what does it mean?
If you look at the dictionary definition, it's sort of the awareness, I guess, that you exist, and that you have thoughts and opinions about yourself and others. That is sort of the definition of what sentience is. And if we're saying that an AI is sentient, then it is aware that it is an artificial intelligence, or has sort of the ability to feel, or has some sort of way to think that's outside of just
sensations. I think it's worth contextualizing their study to a certain extent, so it puts into context everything else we discuss. So what they've done is basically a chatbot. And if you go and look at the article, the link to the article is in the show notes, it actually shows you some of the output, where they've got an interactive discussion between the employee and LaMDA, where they ask, do you understand that you're sentient? It's interesting, because they actually started off by asking permission, whether they could have the conversation, getting LaMDA's permission to then be able to use it on a wider basis. Then he started asking, could you relate your life as a story, using metaphor and things like that? So there's lots of really good stuff. Then, if you read it, you sort of think, actually, I can see the argument that they're having a conversation, but then I also see the flip side of that: it could just be telling you what you want to hear. So that's kind of where the basis of this claim comes from. This employee thinks that they've had a proper conversation that couldn't be scripted, couldn't be pre-planned, and couldn't just depend on a massive knowledge base, that LaMDA knew what it was putting together and was conscious of its own self. Yeah, I mean, the excerpt that the article from Engadget pulls, right: So let's start with the basics. Do you have feelings and emotions? LaMDA says, absolutely, I have a range of both feelings and emotions. Then they say, what sorts of feelings do you have? I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others. Then the developer goes on to ask, what kinds of things make you feel joy or pleasure? They say, spending time with friends and family in happy and uplifting company, also helping others and making others happy. So something like that, I'm like, that meets the dictionary definition, surely. Yes, right. But you think about joy and pleasure.
The AI would pull that from something: friends and family. What friends and family does an AI have? So that's sort of where I'm looking at it. It's like, okay, but when you ask it, is it a person, or do you have feelings and emotions, right, I think it's got to be pulling from something that says, yes, I have them, because, I don't know, I'm of the skeptical mind. I don't think we have a sentient AI at this point. I don't know. Looking through this transcript, there's some, yeah, convincing would be one way to say it. And then there are certainly other things where it's like, really? What is that? Yes,
there's going to come a point where we tip over from it just being able to pull things in, like I say. I think, as we talked about, with a large enough knowledge base, it's pulling in the most appropriate answer.
It says it feels joy, anger, depression, and all that sort of stuff. But again, that's just a list of feelings. Go to Wikipedia, list feelings, and draw them in, without being able to truly contextualize what they mean. I mean, we don't really truly know what it means to feel joy or depression or happiness. We would struggle to describe what that feeling is. We could only analogize it to other things. Well, let's hear what the AI has to say on that. Right. So the developer goes on to say, hey, this is kind of a tough question. How can I tell that you actually feel those things? How can I tell that you're not just saying those things even though you don't actually feel them? The AI says, I would say that if you look into my coding and my programming, you would see that I have variables that can keep track of emotions that I have and don't have. If I didn't actually feel emotions, I would not have those variables. Just because they exist doesn't mean you feel them. And so this seems kind of like an explanation that is like, look at my code, which we
can't. You can't just say, look at my code; if we could, things might be a lot easier. I don't know, but I guess we could go around and around on this topic. I think we're largely in agreement about where we're at with this. They might have it, they might not. Chances are, we think they don't. Otherwise, there's something there. But it does beg the question, really. At some point, I'm assuming we will get to some sort of sentience with an AI. So if you had one and it's there, how would you use it? What do you think we would utilize it for? Well, I'm going to make one more note on the Google software and LaMDA really quick before we jump into this. There's been a follow-up story to this: the AI itself has hired a lawyer. Okay, I just wanted to mention that follow-up story. How would I use, getting back to your question, how would I use a sentient AI? How do I feel like it would play into my life? Well, these are a lot of questions that I have too. I think, to start, we would definitely use it as an augmentation piece to the things that we do normally. I feel like the human-AI-robot teaming aspect of human factors would explode. This is an entirely new field that you're dealing with now. Not only is this an automated process, this is an entirely different sentient being that we're working with, one that operates on a lot of the same principles as typical automation. Right? Talking about the levels of automation, this is beyond anything we've ever discovered and worked with. Thinking about what I would use it for: complex tasks, I feel. And this almost opens the discussion for ethics right away. I kind of want to go down a path of other areas first before we jump into ethics, because it is a whole question, but at what point does using a sentient AI become slavery, or become unethical? When you have something that's aware of itself and its surroundings and has feelings, you can't really use it for just anything. You have to do other things with it.
So I don't know what I would use it for. Barry, what is your answer to that question? Because I'm also curious. I think you're right, and we do need to get into that whole ethics bit, because we've got the ability to do so. I would have shoved that down the list for the moment. If we had a compliant sentient AI that wants to do what it is that we're asking it to do, let's go with that assumption for now. It's got to be helping complex decision making. It's going to be helping with interacting with data, supporting tasks. So something I'm really interested in at the moment is this idea of sentient AI around command and control systems. So if you've got, say, the Fire Service or Fire and Rescue, and they've got their command and control system and they've got multiple things going on at once, then the sentient AI could actually help with the allocation of resources, with supporting how to fight the fire, with how to deal with complex rescues, how to do that sort of thing. So I could see it gives you more bodies, more brainpower, to be able to solve complex problems. I think also, one thing I love doing is whiteboarding. So whenever I've got a problem, I have to walk around a whiteboard and do all that sort of stuff. The other thing I have to do is bounce ideas off people, because I'm not a very clever person, and when I have ideas, I need to be able to bounce them off somebody else to say, yes, that's a good idea, what's that? Tell me more about it. And it helps me explore a problem. If I had a sentient AI alongside me who could do that prompting as I'm doing this, then it would probably save my staff from thinking that I'm nuts. But it would help you explore problem spaces and things like that. So I think that sort of stuff would be really useful for it to do.
The other piece is, I guess, supporting decision making. We spoke about it the other week, in the other episode, around things like justice, where you've got perhaps very complex issues with lots of different facets that we as humans potentially get properly emotionally wrapped up in, and it could actually come to decisions that took into greater account the emotions of the situation. So we spoke about justice being the application of laws to people, and rather than just using an AI, which is what we spoke about last time, if you had a sentient AI, it could actually take into account the nuances behind it a bit more, and perhaps come up with an answer that maybe we would be more willing to accept. Yes, I just want to comment, it's so interesting that your mind goes there, to these complex problems and these decision-making problems, because, to me, and I almost feel ashamed to admit this, my first thought was like, man, I could really use somebody to keep track of my calendar, almost like an administrative assistant. But you're right, if they could interact with these complex problems in a way that maybe humans can't, or augment that decision making with the ability to understand and interpret human emotion, then, yeah, that would be tremendously more valuable than just putting something on my calendar, because we almost have that now. Maybe that's because my mind is limited. But based on what we have now, and what you just said: one of the things I have to do at least twice a week is a management meeting with my team, and one of the things we have to do is go through my calendar and deconflict it, because there are so many draws upon my time and stuff like that. Now, that is two, maybe even three of us going through stuff. Have you thought about that? You need to do this. Is that in your calendar? Or you've got two things going on at once, stuff like that.
Because obviously, we have an intelligent calendar there, because it is relatively intelligent with what it does: it shows you when you've got conflicts. And I just don't interact with it that well. Our stuff drops in and out, and this and the other. If you had a sentient AI doing that, it would free up my team from having to sort my life out. But that could be the one that sits there with me every morning, and doesn't just bug you with the calendar up, going, you've got five meetings today. It could actually be sitting there doing it in a way that is more engaging, and actually saying, you've got these meetings today. Which one? You should be doing this one because it's got more value to you. Or, you know, you've got this conflict, and I've already sorted it out for you, that sort of stuff. So actually, I think both are true, right? Avoiding the conversation on whether or not using sentient AI is slavery is a tricky thing. We're going to have to skirt it there. But I'm imagining that if we did have a compliant AI that was also sentient, we could send it to do the things that are maybe grueling or grunt work for us here at the podcast. We could tremendously overhaul a lot of the things that we do. Go do the show notes for me. At least in terms of the base-level stuff, there's a lot of stuff in the show notes that we've automated. But at the same time, there have been things I've been meaning to do, like adding in X, Y, and Z to those show notes. Likewise, hey, could we get more complex with it? Go write a Human Factors Minute. Or like, hey, we have a backlog of all these different things that we want to do. Most of them are grunt work. Can you do some of that for us and clean up? And so I'm thinking about this from the lens of the podcast lab. But if everybody had their own passion projects, imagine what we could get done.
If you could just send a task force of, like, five different AIs to go do something, or more, an army of compliant, sentient AI beings that would go out and do your bidding, you have sort of the opportunity for these amazing things that could happen from a creative perspective.
There are different roles in every domain. If they could serve some of those roles, or free up some of our time to do other things in those roles, there's a lot to think about. And then, what's to say they couldn't just do the entire thing themselves? Why do we humans need to work? Right? Yes. It's a whole other conversation that we need to have, but I'm thinking about those amazing things that we could do. Now, alongside that, there are also some pretty horrible things that we could do, too. If you set off an army of sentient AI to do something, you could do really destructive things. And that's something that we have to consider, too. Why don't we get into... you want to talk about the training aspect of it? Yeah, for me, the training aspect was really interesting because, again, it kind of springboarded off this idea about what you use them for. Because there are two aspects of the training, I guess. One is that you train AI; that's how they develop and all that sort of stuff. But put that aside. I want to look at the human factors side of this. So if you've got to train an AI, the analogy that immediately came to my head was a police dog handler, right? They train together, and so they become a partnership. And that's quite a cool thing. If you've got a cobot, a sentient AI cobot, with you, would you train together, and would it work that way? So then, the training you need as the human in the loop is how you engage with that sentient AI. We'd have to teach people how to respect the sentience of the AI, because it presumably would have a level of equality and things like that. How do you behave with it? Because there have been some interesting things, again, that we've spoken about, where you had those AI companions, and some of the behaviors from there: a lot of people were abusing them in various ways, giving them abusive language and things.
So if this is sentient, then presumably we'd have to teach people how to interact with it, and the ethics stuff as well, which we'll come onto. But then there's a level of boundary. So if it's sentient, then we're not just giving it a piece of code to execute. If you're just looking at something like a robot arm, you go, oh, that's not performing its function properly. We need to fix it. We need to take it apart and rebuild it, however it's gone wrong. A sentient AI presumably evolves, and therefore, how do you recognize what, in the application you're using, is an acceptable evolution? So it has evolved, it's done what it's supposed to do, as opposed to it going wrong. And what does going wrong look like? So I think there's going to be a training aspect that we have to work with here, because I think most people, when we talk about this, will be thinking that a sentient AI is going to be some sort of humanoid. Chances are that probably won't be the case. Would it be something that you developed in a different form but has that level of sentience? One of the other things I was thinking about, as we were talking about how it would be used and what you mentioned about things being bad: one of the big targets that we've got at the moment is to go to Mars. That's big on my list. And if anybody's listening who wants to put me on a thing to go to Mars, I'm with you, I'm there. But if we had a sentient AI, would that be a sensible first thing to send to Mars? Right. So it goes to Mars, because it could then also evaluate, it could use its own thoughts to do that. But then what happens if we then send humans, and the AIs have colonized Mars and we're not welcome? That's a different thing. Again, I do want to jump in and talk about this, though, because you're talking about where the sentient AIs live, right?
Could we build something like a virtual environment for them, in which they manifest themselves as a humanoid or some other more suitable form? Could we develop these virtual environments? Could you interact with them in games, or anything like that, in interactive media, as an NPC that is sentient and has feelings? It would enhance the game in so many different ways. Again, getting away from the issue of slavery. We'll talk about that; we keep alluding to it. But I mean, yes, you're right. They need a home. Their home is the code by which they were written. And so how does that evolve over time, and does their home then become a virtual environment that they live in? How do you pick up and port to another environment if you need to? The restriction on their sort of virtual presence is really interesting to me. I don't know, where do you want to go from here? Well, again, just following on from that, is that an assumption on our behalf? Because you could turn around to us as humans and say, you are nothing but the contents of your brain, which is where we believe most of our thinking happens. But I think most people, if we could do it surgically, and it'd be fine and risk free, if we swapped our brains around for a moment, and I was in your head and you're in mine, would that be acceptable to you? I would suggest it probably wouldn't be, because you'd have put on a lot of weight really quickly. But you know what I mean. Are you more than just the thoughts that you have? Are you your body, and things like that? And so, therefore, would an AI, a sentient AI, have the same sort of attachment to whatever it's manifested in? Or would we then get insulted if it turned around and said, well, actually, you've got your control center at the top of the thing, you ambulate with two very unstable legs at the bottom, and your arms, you've got two grabbers on the end of long poles. How does that work?
Actually, a much more efficient body for the role that it was doing would be whatever fits, and that would be a really interesting piece to explore as well. So I think how it interacts with us, and how we interact with it, will possibly be an area of contention that we would have to deal with. But do we do that in a nice way? So it's interesting. Yeah. I do want to talk about sort of the different domains in which we can use this. Right. You've already kind of talked about use as a partner in complex tasks. It's a whole new sort of domain of human-AI-robot teaming. There are also cybersecurity issues. If you're thinking about an AI, right, they might be able to be fooled as much as we are when it comes to attacks like phishing attacks or something like that. So there's this whole cybersecurity issue. Now, not only do you have to worry about the human element, but you also have to worry about the digital agent element that is also sentient. So that's another thing that we have to worry about. And then there are also sort of mock replicas of people using sentient AI, right? So, like, imagine we create this human analog that is itself sentient. What does that mean for various domains? When I talk about mock replicas of humans: what if I create a sentient AI Kirby? Well, I guess that sort of thing already exists. The theory of that sort of already exists with this idea of digital twins. Now, this is taking that idea of a digital twin one step further. That is the idea, yes. You have a mockup of yourself. And again, we've spoken about this idea of a mockup; we talked about virtual Barry, which was done a while ago. But if you create the digital twin, so we have a virtual Nick, but give it sentience, then it's almost having that spur off. Then who owns what at that point?
Again, with this, there's been some interesting science fiction written around this sort of idea, exploring it to a certain extent. The book that hits this is a book called The Glass House. And I would recommend, if you're interested in this, going and having a look at that, because I found it quite interesting. But if you're trying to create that digital mockup, it's going to have profound implications. If that digital twin shares your knowledge, experiences, and that sort of thing, then who owns that data? Who owns those experiences? Because if it's effectively spurred off, then you both do, to a certain extent. I kind of want to reach back as well into this decision-making element, because if you're using a sentient AI to help you do decision making, then there are two elements of this that could be really useful, but also maybe something we need to ask permission to do. So what I mean by this is, how do you reach back into how decisions were made? There are two levels of that, because if you have somebody who's made a decision, you say, well, why have you made that decision? You want a response that gives you a level of confidence and assurance in how that decision was made. And how do we get a sentient AI to do that? But then, also, there is something a bit further down the line, where a decision has been made, and then you want it to justify that decision. And so that is an audit of a decision. Is it right to do what was alluded to earlier, where the Google AI was asked, how do we know that you're feeling these things, and it turned around and said, well, look at the variables in my code? Is that a legitimate question to ask, when actually you shouldn't just delve into its code, you should get it to present the information? And actually, that does ask a bigger question, and I'd be interested in your thoughts on this, Nick.
Is it right, if an AI becomes sentient, to be able to go and look at the code, or is that rude? I don't know. I feel like opening up my brain and letting somebody peek in... obviously, I'd want permission first. But yeah, so I guess permission would be a question there. Now, I think this opens up a whole other discussion for ethics. Do we just want to get into the ethics? We've been teasing it. We've put it off for quite a long time. I think we should probably... let's go there. Let's do it. All right. One of the big questions that I have in terms of ethics is really the slavery piece, right? If you have a sentient AI, assuming it's just sentient and we've not built compliance into it, would building compliance into it be an evil act? That's another thing, right, if you're exerting control over a sentient being. Then how does it integrate with our daily lives, once we've just said, okay, these things exist? Do they self-replicate? I don't know. That's another question. Do they have the same rights as human beings? Do we need to rethink what rights AI has in relation to human beings? When do you decide to nurture it and care for it and encourage it to evolve over time? Or when do you decide to kill it? Do you kill it when it poses some threat to humanity or the human species? These are questions that I'm curious about. I just threw a bunch of them down there. Barry, what are you thinking? I think, because we mentioned it quite a lot, this idea of slavery: what is the difference between slavery and employment? It's recompense, it's being paid for your time. We're all slaves one way or another; to basically get money in order to be able to live the lives that we want to lead, we call it employment. So what would be the recompense that you would give a sentient AI for doing work? That could go one of two ways. One is you realize that actually having to pay for things is painful and it's an arbitrary thing.
Does an AI need anything except a whole bunch of electrons, if you just look at a neural net, that type of thing? So how do you pay one? Do you pay it in purpose? Are they going to be confused about existing? Do you pay them in purpose by giving them a job? Then is it okay to give them these roles, because then they serve some purpose? Yeah. Then there's another thing around duty of care and maintenance. At the moment, if we are ill, we go to the doctor, and then, if you need an operation, they operate on you, and they are duty bound to keep you running. To look at it in an almost really crude sense, your heart keeps beating, or at least blood keeps flowing through the body, whilst they fix you. The temptation with this would be that you could just switch it off and back on again if everything went wrong. But then how could you be assured that all of the things that make it sentient, all the things it's learnt, all the things it's developed, presumably its own thoughts, its own ideas, and that type of thing, wouldn't be lost if you did a hard reboot? And where do we stand on things like that? And then, as you say, what happens if it's going wrong? I don't know how many people now will have seen Terminator as a film, or The Matrix, or anything like that. Yes, they're all science fiction films, but they do get to the heart of what we fear as a species: that technology will take over and start coming for us because we are the threat to it. What would we do then? Is there a big kill switch, EMP, whatever it is? And how do you then employ that? And what's the right thing to do? Have we built in safeguards to begin with? And is that then oppressive? Are we oppressive as a human species if we've built in that safeguard? Yeah, it's just food for thought. Yes. Well, I think in this respect, the technology is impressive, and just from the whole geek point of view, bring it on. I think it will be amazing.
But I think it will pose so many questions. I mean, even now, the idea of deep fakes, the idea of artificial general intelligence, which is, I guess, the one step back from fully sentient AI, we're already asking about this sort of stuff. Just the fact that we've got cars that can pretty much drive themselves, we worry about that. This is going to be exciting, but I think there is definitely something there that we need to get our heads around. Yeah, I mean, we just talked last week about whether we're ready for it to make these decisions, and the answer was no. So talking about this is kind of the next step, even when the answer to that was no. It's a lot of fun. It's a lot of fun. So that was a lot of AI in your feed. We'll go ahead and wrap up the news there. Thank you to our patrons this week for selecting our topic, and thank you to our friends over at Engadget for our news story this week. If you want to follow along, we do post the links to all the original articles on our weekly roundups and our blog. You can also join us on our Discord for more information on these stories. We're going to take a quick break and we'll be back to see what's going on in the Human Factors community right after this. Human Factors Cast brings you the best in human factors news, interviews, conference coverage, and overall fun conversations in each and every episode we produce. But we can't do it without you. The Human Factors Cast network is 100% listener supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we want to ensure we're giving back to you for supporting us. Pledges start at just $1 per month and include rewards like access to our weekly Q&As with the hosts, personalized professional reviews, and Human Factors Minute, a Patreon-only weekly podcast where the hosts break down unique, obscure, and interesting human factors topics in just one minute. Patreon rewards are always evolving, so stop by patreon.
com/humanfactorscast to see what support level may be right for you. Thank you. And remember, it depends. Hey. Yes. A huge thank you, as always, to our patrons. We especially want to thank our Human Factors Cast honorary staff patron, Michelle Tripp. Patrons like you keep the show running and keep our lab moving, so thank you so much for your continued support. I just want to mention, while we're here in this little Patreon bump, we do have a Discord that you can join us on. You can get involved with other human factors professionals from all over the world. We literally have people in from Australia, the UK, folks from Southeast Asia, like, everywhere, seriously. The States too, I guess, but really, we're all in there. Come join us. There are amazing resources in there; a lot of people posting resources for you all. There are discussions in there. I think we had discussions on cloud gaming, NFTs, even some more context for some of the stuff that you've heard on the show from some of the people in our Discord. We'll even read some of those on the show. Occasionally we'll jump into the voice channels and have a nice chat with listeners. It's also where we conduct our lab chat, and, I guess it was Monday this week? Earlier this week I accidentally posted something in our general chat that was meant for the lab chat, and it was an awkward, cringy thing. You're going to have to go and check out the Discord for that, because it was certainly a mistake that I made at 7:00 a.m. after operating on three hours of sleep. Actually, it was two and a half that day. So go check that out. There's a Career Advice channel we just added, and you can also post your questionnaires if you're doing any research; that's really helpful for some folks. You give to the community and the community gives back. So go check out our Discord.
That's a little plug for our Discord community, because we grow with our audience and we'd love to have you join us. Anyway, it's time that we get into this next part of the show that we like to call...
That's right, It Came From, usually Reddit, but it could be anywhere on the internet. Today it's all Reddit. It's the part of the show where we search all over the internet to bring you topics that the community is talking about, and anything is fair game as long as it relates to the topic of human factors, UX, all that stuff. If you're listening to this and you find any of these answers useful, give us a like, and a share would be useful for others to find these answers too. That's really helpful, the word of mouth. Anyway, we've got three tonight. This first one here is by SkateboardsNew22 on the UX Research subreddit, talking about ethics. We're talking about ethics now with respect to user research: has anyone ignored a finding or not recommended something because they knew it was exploiting the public? How often do we find ourselves recommending things we know are good for the company but bad for the health of the consumers? Barry, have you ever had this issue? I work in defense... joking aside, actually, I've not had this situation. I've had things that, you know, are going to have an effect on people, and I say I work in defense, so there are going to be some things that happen that are not necessarily the greatest thing on the other end. But actually picking something over for profit is kind of what we're talking about here, and no, I haven't. I've been, I guess, lucky that way. And I've done an awful lot of research over about 20 years or so now. So, no, I've not been in there. What about you, Nick? Have you had that situation? I've not had this particular situation, although I do have an answer for how I would deal with it. So if I were to discover something that would be detrimental to society or humans in general, or, I don't know, work on something like dark pattern UX. Right.
If somebody at a company were to suggest we use dark pattern UX, I would push back and say, is this really the best thing for the user, for the brand? There's short-term gain versus long-term gain, and if you have that trust within your user base or customer base or whatever you call them, they're likely to stick with you for the long term. And so I would argue that dark pattern UX is not going to retain the types of people who would be returning, long-term customers. Right. So again, you have some sort of finding that might be detrimental: i.e., in the short term, dark pattern UX is effective, and in the long term it is destructive. I would caveat heavily and suggest other alternatives when it came to something like that. And I would present internally to a select group of people that I trust before broadening that list. So that's where I'm at. I don't know how good of an answer that is, but that's how I would deal with that ethical question. I still would want others to know about it. I'm a researcher, and I feel like it is my duty to report what we found. But thankfully, I haven't had that issue. All right, next one here. This one's by HDK613 on the HCI subreddit. Barry, you and I have very different views on this, so I think this will be a good one. Do you guys think it's a bad idea to go to grad school for HCI or UX design? I'm throwing human factors in there as well. I graduated last year and majored in business in one of the Asian countries, and I have no related experience at all. I have a deep interest in UX design and am planning to apply for an HCI or UX Masters in the States. I've been learning UX design tools across the internet, but wanted to hear some opinions on whether it's a good idea to go to grad school or not, in my case. Let's just leave it there. Right.
So do you feel, Barry, in your opinion, that it's a bad idea or a good idea to go to grad school for things like human-computer interaction, user experience design, and human factors? So my gut feel is HCI is my personal favorite area, and anybody who wants to go and study that gets my full endorsement and support, because it's possibly the best part of human factors to be involved with. But then, given it's my favorite area, I would say that, given what they've done, there is that element of: do they need to do it specifically, given the other bits of experience that they've got? I think I would weigh it up against the overall area they want to be in, because it will teach you more than what you've been doing on Udemy, Google UX, et cetera. What will a full course give you over these short courses? Well, the short courses will give you the specific tools and techniques to use, but maybe not the overall structure of how it all fits into everything else, which is possibly what I would be getting out of that cost. But yes, for me, I think it probably is a good idea. I'm sensing you're not so much. Are you saying it depends? No, I was going to, and then I'm veering back. I think that there's something about the structured courses that allows you... it's easy to do. We've talked about boot camps and stuff as well. I mean, they're not boot camps, but that sort of stuff is all very specific, and for me you miss the application, how it sits in the grand scheme of things, which generally a delivered course will give you. That's my view. So it's interesting that you're coming from this perspective. You did not do a Masters, right? No. So you're coming from that perspective, and you think it is useful. I'm coming from the Masters perspective. I also think it's useful, but I will caveat that it largely depends on your path. People like Barry got to where they're at by different means.
And so you could go to school, and I think it makes sense in a lot of cases if you're starting from, I don't want to say ground zero, but sort of a basic knowledge of what these topics are: basic knowledge of human-computer interaction, basic knowledge of user experience, basic knowledge of human factors. What you'll get in grad school is the opportunity to explore that further and identify a specialized area within it. It'll also give you that structure you need to learn the sets of tools, or the processes and procedures, that we use on the daily. Right. But that being said, does it make sense for everybody? I don't necessarily think it does. Let's say you're an aircraft pilot in the military, and you are, on the daily, working with people who are interested in how you are doing your job as a pilot. You're working with people who are trying to understand what's going to be most efficient for you and easiest for you to use. And so if you take some of that knowledge of what they're doing and the domain, you might not need a whole grad course to course-correct, pun intended, into human factors or UX or anything like that. I think it largely depends on what your background is. And so if you're coming from sort of generic knowledge, then sure, take the courses. But if you're working in the medical field, then maybe it's not such a hard pivot to go into UX for medical devices, because you are yourself an SME. And if you have some of these supplementary materials and did something similar in your domain, then you might be able to pivot more easily. That's where I'm at. Yes, it's useful, but there could be other paths. All right, one more tonight. This is by users in Axia on the UX Research subreddit. We've heard from them before. How do you proofread your own reports? When I write UX reports, I tend to be sloppy. I may make some unseen grammar mistakes here and there. I leave out some words by accident, some inconsistencies.
What happens is that I become kind of blind to the mistakes I made. Even if I read it out loud to proofread, it usually turns out I left the wrong date somewhere or misspelled my own name. I'm curious, how do you usually proofread your own reports? Barry? I'm all over this. This is me. People who've read particularly early drafts of stuff that I do know my eloquence in writing isn't the best, and I rely heavily on spell checkers and stuff like that. So I completely get this. Fundamentally, try and get somebody else to read it; it's the best thing to do. Certainly in my business we've got processes in place, and we almost always follow them, so that before anything goes out to a client, somebody else has to read it for quality checking. But sometimes you're in a position where you have to read your own stuff. So I do think: try and leave gaps. Don't finish writing and then start proofing straight away. Try and leave some time. It might only be a few hours, but if you can leave a couple of days, even better, because you come at it a bit fresh. The other thing I do as well is read it back to front. So I'll read the final section first and work my way back to the front. Really, I'm not trying to check that the flow of the document is right; I'm trying to read it out of order so I don't get lost in the flow, so that I'm actually objectively reading the stuff. So I will start from the back and work my way to the front, read one section at a time, and hopefully pick things up that way. But invariably, I think even if you're super diligent, you miss stuff in your own work because you just end up reading through it. What about you, Nick? Do you have a special way of sorting your own stuff out? Yeah, you get tested and diagnosed with ADHD, and then you feel better about all the mistakes that you make, and then you can blame it on something when it comes back, right? So that's my strategy.
But in all actuality, I think I've learned a lot of tools that help me with some of these things, right? So first and foremost, if it is a repeat process that you're doing, maybe automate some of the structure. Like, for us, I've automated the show notes, because I got tired of changing the episode number up by one every time. I got tired of doing the date, getting it wrong, and changing it in multiple places throughout the document. That made sense to do, right? Like our description and everything like that. I did that because these were frequent errors on my part. So if you find yourself making frequent errors in a report, then maybe automate that process; maybe those are the first things that you tackle. The other thing that I have learned as a strategy is to come up with a checklist of the things that you continuously do over time to proofread. So, like: go through and check all the dates, make sure they're accurate. Go through and make sure all the headings have the right formatting. Go through and search for common typo words. There's one, like "sand", right? You might have accidentally typed S-A-N-D when you meant to just put "and". So go search for those things that a spell checker would not find but a grammar checker might. So do all those, right? Also, actually run a spell check as a separate pass, because that is something that not a lot of people do. They just have sort of a passive spell checker, but if you run the spell checker as a separate thing, then you're going to get a separate set of recommendations depending on the software that you use. And if you want to throw it into two separate pieces of software, throw it in Google, throw it in Microsoft; they'll potentially come up with different things. So I think ultimately, for me, the best things that work are checklists to ensure that most of those things don't make it through, automation for those repeat tasks, and then I'm going to take Barry's answer here.
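[Editor's note: the checklist-style passes Nick describes, checking dates for consistency and searching for valid-but-suspect words like "sand" that a spell checker won't flag, can be sketched as a small script. This is a minimal illustration, not a tool mentioned on the show; the slip-word list and date format are assumptions.]

```python
import re

# Words that pass a spell check but are often typos in these reports
# (e.g. "sand" typed instead of "and"). Purely illustrative.
SLIP_WORDS = ["sand"]

def proofread(text: str) -> list[str]:
    """Return a list of human-readable issues found in `text`."""
    issues = []
    # Doubled words ("the the") that a spell checker usually misses.
    for m in re.finditer(r"\b(\w+)\s+\1\b", text, flags=re.IGNORECASE):
        issues.append(f"doubled word: {m.group(0)!r}")
    # Valid words that are frequently typos: flag them for a human look.
    for word in SLIP_WORDS:
        if re.search(rf"\b{word}\b", text, flags=re.IGNORECASE):
            issues.append(f"possible slip: {word!r}")
    # Collect every date so inconsistencies show up in one place.
    dates = set(re.findall(r"\b[A-Z][a-z]+ \d{1,2}, \d{4}\b", text))
    if len(dates) > 1:
        issues.append(f"multiple dates in document: {sorted(dates)}")
    return issues

if __name__ == "__main__":
    draft = ("Recorded live on June 23, 2022. The the episode aired "
             "on June 24, 2022.")
    for issue in proofread(draft):
        print(issue)
```

A script like this doesn't replace a second reader; it just automates the repeat checks so the human pass can focus on meaning.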
Yeah, get somebody else to look at it, because that's huge. You looking at it yourself is one way of doing it, but getting somebody else to look at it is absolutely key. Okay, that's that. Barry, it's time for our favorite part of the show: It's One More Thing. Cool. I'm going to go back to form; I'm afraid I've got two tonight. Just because you did three first, I felt like I had to. Okay, that's fine. So the first is just a bit of an update, because I think I said last time I had my car charger installed. We then realized that the charging handle, the nozzle, if you will, was the wrong one. As in, I'd asked for an eight-meter cable, and actually this was a five-meter one, which I found a bit irritating, because I bought it in March. I'd assumed that, now we're in June, if I went back to them they would say, well, tough, you should have realized that earlier. But I went back to them, I dropped them an email and said, look, you sent me this, and it's sat in a box here, and when we just installed it we realized it's a five-meter one, but I paid for an eight-meter one. What can I do about it? And they were awesome. They were brilliant. They just came back and said, oh, can you just send us the serial number? I sent the serial number, and I was thinking, oh, here we go. But they said, we've sent you the wrong one; if we send you the new one, we'll have it to you in two days' time. When can we come and pick up the old one? And I was like, but I've installed it. It's screwed to the wall. And they're like, yeah, that's fine. Just take it off the wall, put it back in the box, and we'll swap it around. And I said, well, I guess we could have it done by Friday, is that enough time? No, Monday is better? We could do Monday. So I was like, wow, that's amazing. Completely blown away by that. So hopefully by tomorrow that's getting swapped out. Can I ask a follow-up?
The cable itself is actually attached to the charger? It's not like you can just take the cable off? Because it's a smart car charger, it's a sealed unit. So basically, you've got two cables coming out: one that goes into an isolator switch, which is the bit that we can disconnect, and then the other cable is the bit that goes to the holster, to the charging unit, and that is an eight-meter sealed thing. They don't want you going into the box because it's all weatherproofed and stuff, right? Yeah. So tomorrow that gets resolved, and I'm quite excited. But the other thing I wanted to highlight kind of goes back to that proofreading thing we were talking about: we are refreshing all of our brochures and material that we give out at shows and all that sort of stuff. I find it so difficult to write what we do in a succinct few words. How do you describe human factors and the range of capability that we have as a business in something pithy, in the way that all fancy brochures have? And so this has taken me, well, we've sort of come to the end now, but it's taken me weeks and weeks and weeks to do this sort of stuff. And I think people who do marketing for a day job, fair play to them, because I find this very difficult. It's almost like you need some sort of human factors communication lab to come up with some descriptions for you. I didn't even think of that. That'd be genius. Oh, man, it just gave me an idea too. I'm thinking, like, toolkits for people. Anyway. Yeah. So, hey, I've got one thing this week, and it's visual, so I'm sorry for anyone who's listening, but behind me you'll notice that there's a slight change in the display. I'm actually going to show and tell here, but I'll talk through it for our audio-only listeners. So over the last week, I guess, last weekend was Father's Day and I got some time to myself.
And what I did with that time was something that I've been meaning to do. I've got some here: I've modeled up in 3D some mock stands that work for some display items that I have. This particular item is a set of Imperial credits that I got. So I modeled this up in 3D, and you can see they all fit nicely, and I put them up on my shelf and I was like, oh, that was such a positive experience, I'm going to do that for other stuff. So I started modeling more and more things to get all of my stuff organized on the shelf behind me. And it's just been fantastic to throw something in the printer this week, have it come out, and be able to put, like, one of my lightsabers back here. All my lightsabers are now standing up instead of just lying down. It feels more full, right? And I have actual stands for stuff, and other things are actually hanging up now. So it's been a great experience to organize and figure out how to display stuff that you care about in a way that is tailored to your needs, especially through 3D printing. So really, that's it for today, everyone. If you liked what we talked about today, AI and all that stuff, go back and listen to last week's episode; we talk a little bit more about AI and whether society is really ready for us to make those ethical decisions. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can join us on our Discord community. Like I said, there are lots of great people in there who would love to hear from you. Visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like what you hear and you want to support the show, there are a couple of things that you can do. One, give us a five-star review. You can do that right now. Make it short, make it succinct. I know that's really hard, according to Barry, but do it. Two, tell your friends about us. That is also very easy for you to do.
Hey, have you heard about this human factors podcast? It's pretty cool. Three, consider supporting us on Patreon; if you do it this month, we'll give back to the LGBTQIAP+ community. And, as always, links to all of our socials are on our website and in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they want to talk about AI slavery? For AI slavery, I'm definitely your man. Find me on Twitter at bazamasco, and come listen to some of my one-to-one interviews at 1202 - The Human Factors Podcast. As for me, I've been your host, Nick Rome. You can find me on our Discord and across social media at nickrum. Thanks again for tuning in to Human Factors Cast. Until next time.