Human Factors Minute is now available to the public as of March 1st, 2023. Find out more information in our Announcement Post!
June 2, 2023

E285 - Tech Placebos: All Hype, No Help

On this week's episode, we discuss the dangers of blind trust in enhancement technologies, and the risks that can come with using them. We also take some time to answer questions submitted by our community, including topics like whether product management is cannibalizing UXR, whether it's worth taking a job on a dysfunctional project, and how pro bono work can impact job searches. Tune in for an insightful discussion!



#BlindTrust #EnhancementTechnologies #ProductManagement #UXResearch #DysfunctionalProjects #ContractorWork #ProBono #JobSearching #CommunityQuestions

Recorded live on June 1st, 2023, hosted by Nick Roome with Barry Kirby and others.

Check out the latest from our sister podcast - 1202 The Human Factors Podcast - on Human Factors Integration - An interview with Trevor Dobbins.



Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!



Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp
  • Neil Ganey 





 

Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.

Transcript

[00:00:00] Nick Roome: Hello, everybody. This is episode 285. We are recording this episode live on June 1st, 2023. This is Human Factors Cast. I'm your host, Nick Roome. I'm joined today by Mr. Barry Kirby. Hi, Nick. How you doing? You know what, Barry, I'm not doing too well, because for the second week in a row we don't have fun names.

 

So come back, whoever you were leaving fun names in the show notes. You just can't get

 

[00:00:22] Barry Kirby: the anonymous stuff nowadays, can you?

 

[00:00:25] Nick Roome: No. Hey, we do have a great show for you all tonight. We're gonna be discussing how blind trust in enhancement technologies, or augmentation technologies, whatever you wanna call it, encourages risk taking, even if the tech is a sham.

 

Later on, we'll be answering some questions from the human factors community, such as: product management, is it cannibalizing UX research? Should you take money for a dysfunctional project? And doing pro bono work while job searching. But first we have some programming notes. You know what, I usually ask for these at the end of the show, but if you enjoy the type of stuff that we do and you want to help support the show, there's a couple things that you can do.

 

Wherever you're at right now, you can leave us a five star review; that really helps other people find the show. You can always tell your friends about the show, so tell your friends around the water cooler about the show and help it grow that way. And three, if you have the financial means and you want to help support the show, there's a Patreon, we have it.

 

You can support us. Just a buck gets you in the door. Anything helps. I usually ask that at the end, but I figured I'd throw it here at the top of the show. Barry, I do have to know though, what's going on with the latest over at 1202? So at 1202,

 

[00:01:28] Barry Kirby: we've still got the interview with Ben Peachy up, and that is doing really well in terms of people really wanting to understand his insights about what drives him and what is inspiring his leadership of the Chartered Institute of Ergonomics and Human Factors in his role as CEO.

 

And then we've got a new one coming out next week, but I shall leave that under wraps until it goes live. Ooh,

 

[00:01:50] Nick Roome: I'm excited. That was quite the tease, Barry. Thank you. All right, let's get into it, shall we?

 

That's right. This is the part of the show all about human factors news. Barry, what do we have this week?

 

[00:02:01] Barry Kirby: So this week we are looking at blind trust in enhancement technologies encouraging risk taking, even if the tech is a sham. A recent study has suggested that people who expect their performance to be enhanced by augmentation technologies such as AI or exoskeletons engage in riskier decision making.

 

The researchers found that strong belief in improvement based on a fake system can alter people's decision making. In the study, participants were led to believe an AI-controlled brain-computer interface would enhance their cognitive abilities whilst playing the Columbia Card Task game, when in fact the augmentation provided no real benefit.

 

Almost all of the participants thought the augmentation helped them do better, leading them to make riskier decisions. The hype surrounding these technologies skews people's expectations and can lead individuals to make dangerous decisions. AI-based technologies that enhance users are common in professions like firefighting and factory work, and could soon be available to knowledge workers.

 

The placebo effect can make users feel overconfident in these technologies without fully understanding their limits and their benefits. To ensure the effectiveness of new technologies beyond the hype, placebo-controlled studies are necessary for accurate evaluation and validation, and to tell the snake oil apart from the real innovation. Profession-wide overconfidence could lead to real consequences.
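To make that last point concrete, here is a minimal sketch of what a placebo-controlled check of an augmentation claim can look like: compare a sham-augmentation group against a control group on a risk measure, and ask how plausibly the difference could be chance. The scores, group sizes, and labels below are entirely hypothetical illustrations, not data or analysis from the study itself.

```python
import random
import statistics

def permutation_test(sham, control, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in mean risk scores
    between a sham-augmentation group and a control group."""
    rng = random.Random(seed)
    observed = statistics.mean(sham) - statistics.mean(control)
    pooled = list(sham) + list(control)
    n = len(sham)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel participants at random
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Hypothetical risk scores (e.g., cards turned in a risk task):
sham = [18, 22, 25, 19, 24, 21, 26, 23, 20, 27]      # told the sham device helps
control = [14, 17, 15, 19, 16, 13, 18, 20, 15, 17]   # given no such claim
diff, p = permutation_test(sham, control)
print(f"mean difference = {diff:.1f} cards, p = {p:.4f}")
```

If the sham group reliably out-risks the control group, the "benefit" people feel is coming from expectation, not from the device.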

 

With thorough studies like these, practitioners in human factors can help ensure that such risks are limited and that technologies are understood for their real benefits and risks. So Nick, what are your thoughts on your AI basically making you feel better than you actually are? Is that giving you more confidence to take more risks in our podcast?

 

[00:03:44] Nick Roome: Yes. Guilty, guilty. I won't say how, but I'm guilty of that. Look, this story is interesting on a couple levels, right? This really highlights the need for communication around the capabilities and limitations to manage some of these user expectations. Like when you think about the general population and what they think about ChatGPT, for example, or some AI automated system, they might think it's AGI, and it's not. It's not artificial general intelligence.

 

It's a large language model that predicts what comes next based on what came before. And so being very clear about what it is and what it isn't is really important. So I think this highlights that need, especially when you start looking beyond AI, when you start looking at things like, we talked about it in the pre-show, exoskeleton technologies, prosthetics, implants, those types of things where it feels like it should make you a superhuman in some ways, but maybe it doesn't.

 

And that risk taking behavior then comes into play. Obviously this is where human factors plays a huge role. Incorporating that user feedback and expectations loop into the design process can help mitigate some of those risks. The other thing is that for me, this is another big call for policy and regulations with AI, and even beyond AI.

 

There can be a lot of consumer or user protections put into place to help with these types of products that are meant to, in some cases, when you mention things like snake oil, deceive the end user into believing that it is supposed to work. So those are my thoughts on it. Barry, where are your thoughts with everything?

 

Have you collected them?

 

[00:05:26] Barry Kirby: Not really. No. Just wanna think. No, I get it, I see where they're coming from. There is a whole lot of stuff here around the way that we are perceiving or using not just AI, but technologies as a whole. They mention things like exoskeletons, and it'd be quite easy to use that as a real practical example: if you use an exoskeleton all of the time to do lifting and things like that,

 

you could easily fool yourself into thinking, actually, I'm at home now, I could easily lift that, because I lift that sort of weight at work all the time. I could lift that at home, and actually you lend yourself to injury. But fundamentally, it's about the amount of trust that you've got in what it is that you are doing.

 

It's about being able to understand where the information is coming from in order to supplement your decision making. But again, when we talk about the example that they've given of using a card task: one of the things that we've said on this show quite a lot already is that one of the advantages of ChatGPT and the other LLMs is the ability to have a third person in the room.

 

The ability to have something to bounce ideas off, to start you off, to get things going. It's like having that second opinion all of the time. And that's where I really value it, because it just allows you to actively reflect on what you're doing. But I've also seen, because I've now joined quite a few ChatGPT Facebook groups and LinkedIn groups and things like that,

 

and what is worrying is the amount of people who think that it's an endpoint solution, that they throw something in it and it gives you a polished answer. And you can see that in two respects. One is people asking, saying, oh, I want it to do this. How do I make it do it? And it's, you can't do it.

 

It is just a large language model. It doesn't create magic. But then the other one is when they get it to trip up, and they're taking screenshots and mocking it, saying, oh, look how stupid it is, it doesn't work. And it's, that's not what it's meant to do.

 

That's not what it's about. So we have this false idea about what it's meant to be. And really this keys into one of the things you said: it's all about the hype, isn't it? And certainly at the moment we have this hype around AI; people have just woken up to it. But now we've gone so far the other way that it's gonna take over the world. I was driving back from the office and listening to the radio, and they were having a phone-in show.

 

And literally it was about a prediction that'd been made that things like ChatGPT were going to cause human extinction, and they were getting people to ring in about it. And they had a professor of AI on, and he was saying what we are saying now, which is take it easy, just chill.

 

It's not gonna kill everybody. It just talks very cleverly. The one thing he did say, which I disagreed with to a certain extent, was that what an LLM can do is put out a lot of false information, a lot of fake news and things like that.

 

And I'm like, we've been doing that for years already. That's not new. Yes, it can probably increase the volume, but humans have been passing out disinformation for eons. So that's not really a new thing. But in terms of, yeah, the end of the world, the apocalypse...

 

[00:08:29] Nick Roome: Yet, yeah, I'm, I wanna jump in and talk about that point though, because at least with that there's at least because it talks at a rather sophisticated level compared to perhaps what some people are used to a.

 

Chat program returning back. I think that is the danger when you pair it with that misinformation, right? If it gives you that misinformation and it's telling you confidently that is information and it's telling you in a way that it is creating an argument for that, then are you more likely to engage in a risky behavior because it told you to ingest rocks?

 

[00:09:04] Barry Kirby: Oh, absolutely. I'm not saying that it can't do that, but people have been selling snake oil for years. It's not necessarily solely, uniquely an AI thing. It is just aping human behavior. So I don't think that means we are going to get to the apocalypse, but

 

let's take you back to the story itself. Let's get away from the end of the world. Yeah. Hopefully the end of the world's not happening, certainly not for the next 50 minutes.

 

[00:09:35] Nick Roome: We could even start with the placebo. You just brought up placebo or snake oil.

 

Placebo and snake oil, I think, go side by side. I think there's this placebo effect that's happening here, and this is what the study is suggesting: that there's this feeling that you are somehow empowered more than you would be when using other tools, with this human augmentation.

 

And to be clear, this large blanket of human augmentation that we're talking about is not just AI. This is something as simple as a prosthetic that helps give back a part of your life. If you were to lose a part of your body and have a replacement prosthetic, it could even happen with that.

 

It could happen with some sort of implant. I think a while ago on the show we had Brian, who was talking about having a person's eye implant turned off on them. Yeah. And even something like that, where you might engage in riskier decisions because you have a visual implant, or even a cochlear implant that augments you in some way, so that now with a cochlear implant you can hear cars as you're about to cross the road.

 

And. Maybe it doesn't have the same level of accu accuracy as I don't know. Maybe you, I don't know. There's use cases here, but there, there's plenty of examples that we can go into. I just wanna talk about they they're using this placebo with respect to human augmentation.

 

It goes beyond AI.

 

[00:11:05] Barry Kirby: Yeah. I think it's interesting, because what they're really suggesting is that whenever we develop a new technology, we need to do placebo-based testing to truly understand what the effect is, the positive effect the technology has given you.

 

And I don't think that's necessarily a bad thing. I'm quite often pro-technology, and no, we don't need all the X, Y, Z and this, but this I think has got quite a lot of merit, in terms of: it's always been difficult to quantify the advantages of whatever technology it is you are using, and actually using a placebo control

 

to be able to quantify that, I think, is interesting, just to look at technology use. But then to focus in on risk taking: risk-taking does take, I guess, many forms. There are different types of risks that we have in life. An example that we talked about briefly in the pre-show is, as you said, it doesn't need to be AI.

 

We take risks now because we are using satellite navigation software. You would normally turn around and say, this journey of however many miles takes me 60, 70 minutes to do, but actually my sat nav says I can do it in 50 minutes. And so I will time my arrival, say, 55 minutes away,

 

because that will get me there just in time. It also means I can fit in a loo break on the way, et cetera. So that is risky decision making, because all it takes is one bad traffic accident, one bit of congestion, or slightly bad weather, and you will be late for whatever it is that you are doing.

 

And that is you having a greater risk appetite because you think you know more. And I think that's what it comes down to: you think that you've got greater ability, you think you've got more information, more knowledge, therefore you can make a better decision. And so therefore you'll make a tighter decision, a riskier decision.

 

Which is really interesting, and I think it's true. I think we do it; you said it straight away in your comments of being, yes, guilty all the time. And we do that with all of this. The fact that we drive cars to get from A to B, or get a taxi or whatever, rather than walking: it's because we're using technology because we want to get somewhere faster.

 

Yeah. And we will make assumptions of that risky. Yeah.

 

[00:13:23] Nick Roome: Yeah, even in your maps example, that actually does give you more information. It gives you a probability window for when you'll arrive based on the time that you leave, based on normal conditions: this is when you're gonna get to that place. And that does give you some level of knowledge.

 

And in some ways that does lower that tolerance for risk when you have that piece of information. And I just think it's fascinating, because this is a measured effect that is based on placebo, right? They basically put a fake BCI on them and said, this will help you make decisions. I'm simplifying here.

 

But it's,

 

[00:14:03] Barry Kirby: that's a really interesting thing, isn't it? Because if they did that to you or I, we'd be like, how? Show me how that's going to work, because that cannot work. Has it got magnets? What's going on? Et cetera, et cetera. So what the researchers found is that the individuals with high expectations of the technology,

 

the people who believed this technology would work, would engage in riskier decision making. I don't know if being more knowledgeable about technology changes that, because if you believed that was a true brain-computer interface, you'd have been like, that's not the way it works.

 

That's just not it. We have enough knowledge to go, we ain't got that; that just cannot happen. So we've got a higher knowledge base to work from. However, if you just willfully go, yep, that's gonna work: possibly that's the same type of people who think that AI's gonna cause the end of the world.

 

Then maybe. Does that point to something else in their character? I don't know. Weirdly, we were talking about Dunning-Kruger; is that the Dunning-Kruger effect in action? Because everybody uses their iPhone now, or their Android, or what other phones are available.

 

You use your phone now without really thinking about the technology, about things like voice recognition. We use voice recognition now without even thinking about how it works and the nuances of accents and things like that. I was giving a presentation a couple of weeks ago about my career, and one of the early things I worked on was early voice recognition.

 

It was at the point where software voice recognition was just starting to come about, and I was doing some evaluation versus hardware. Strong accents: you had to do loads of training for them, et cetera. It basically involved heavy software learning as well.

 

Whereas now we just completely take for granted, we don't even think about it, that the voice recognition will work. We might have to speak a bit more clearly every now and again, but by and large it's going to work. So our expectations of these technologies: yeah, that's a really interesting piece.

 

We don't necessarily understand how it works, therefore we just assume it's gonna work.

 

[00:16:14] Nick Roome: Sorry, the whole voice-to-text tech actually failed us last week on the show. And I won't say exactly what happened, but oh, yes.

 

[00:16:26] Barry Kirby: I saw that was amusing.

 

[00:16:28] Nick Roome: Let's just say, I said "this is Human Factors Cast" really fast, and it came up with a bodily fluid.

 

But anyway, I had to change the transcript on that one. I don't know if I caught it everywhere. But that is a good instance, though, because a lot of the time I just take that transcript and throw it in there and say, oh, good enough. I think the people listening to it or reading it, the hearing impaired who are reading it, they'll get the picture. And that's not the greatest,

 

but for the small team that we have, it's the best we can do, and at least we provide some accessibility there. I spoke about the hearing impaired just briefly there, but here's another example. I brought up cochlear implants earlier, but what about hearing aids as an augmentation tool? It's not full hearing replacement, but it's supposed to help you hear a little better.

 

Imagine you have one of these devices in your ear and you think it significantly improves your hearing, and so you start to maybe engage in riskier behaviors, like going out into noisier environments like traffic, or social settings where perhaps there's a lot going on.

 

It might be information overload with that hearing aid in, and it doesn't actually help you parse through any of that information. It just amplifies it and doesn't do a whole lot of good. There's examples that exist today like that. And like you said, with the maps even. Even with exoskeletons starting to become more commonplace in big box home improvement stores, or on construction sites, or in manufacturing facilities for large aircraft, right?

 

Exoskeletons are starting to become more commonplace in these types of environments. And when you have, like you said, the person who does this type of thing at work and then comes home and says, yeah, I can do that, they misjudge what they're capable of doing because they're so used to having that suit on.

 

But it's even a little bit different from that, because if you think about that example, it's almost trying to translate their knowledge about what the exoskeleton is capable of doing. To me, a clear example of this might be: okay, if this is improving my ability, I might be able to work longer to do this.

 

And that's a riskier option: to work longer hours, or to lift something slightly heavier than maybe the exoskeleton is approved for, or something along those lines. That might be a good example in any of those environments that I just described. I don't know. I brought up policy earlier, and I think that has a lot to do with the way in which we think about this, and that's more geared towards the AI side of things.

 

So we can hold off on talking about that if you had any other places that you want to go, because once we go into the AI bubble, I think we're gonna stay there. I think that,

 

[00:19:07] Barry Kirby: yeah, we invariably do. The one thing I did wanna bring up was around the methodology itself, because this Columbia Card Task that they use to measure decision making and risk behavior, I'd never heard of it before.

 

And so I was quite interested to dive into that. Basically, you've got 32 cards face down in front of you, and you can turn the cards over: do you think you're going to get a good card or a bad card? And you get certain points that make it go either way. So you get a good card, you gain points.

 

If you turn over a bad card, you lose loads of points and therefore lose the round. And you keep on going, and they use that to evaluate your appetite for risk. So that was quite cool, and that sounds or looks like a tool I might use later on, and I'll put that into my

 

little deck of tools I want to try. Because, I dunno whether you do this, but I have tools and methods that are my go-to, almost my go-to toolbox. But in the back of that toolbox, I have a bunch of stuff that actually sounds quite cool and I want to give it a shot, should the opportunity come up.

 

And I have a few of them where it's like, aha. I was writing a proposal the other day that had exactly that, and I was like, ah, I get an opportunity here to play this card effectively and put this in the proposal. Unfortunately, we didn't win the work.

 

But this looks like one of those sorts of tools that could be quite interesting, and also relatively easy to use, something you could do with little training, and therefore something you could pull out for that type of thing. So I thought it was just worth highlighting that as a methodology that might be useful to have a go at.
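For anyone curious to try it, here is a minimal sketch of the "hot" version of that task in Python. The card counts, gain, and loss values below are illustrative placeholders rather than the published task parameters; the point is that the number of cards a player chooses to turn over is the risk measure.

```python
import random

def play_round(n_cards=32, n_loss_cards=3, gain=10, loss=250, cards_to_turn=10):
    """Simulate one round: turn cards one at a time, banking a small gain
    per good card; a loss card deducts a big penalty and ends the round."""
    deck = [True] * n_loss_cards + [False] * (n_cards - n_loss_cards)
    random.shuffle(deck)
    score = 0
    for turned, is_loss in enumerate(deck[:cards_to_turn], start=1):
        if is_loss:
            return score - loss, turned   # hit a loss card: round over
        score += gain
    return score, cards_to_turn           # stopped voluntarily

random.seed(1)
for planned in (5, 15, 25):               # a riskier player plans more turns
    outcome, turned = play_round(cards_to_turn=planned)
    print(f"planned {planned:2d} -> turned {turned:2d}, score {outcome}")
```

Turning more cards raises the expected exposure to the big penalty, which is exactly the appetite-for-risk signal the task is designed to capture.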

 

[00:20:41] Nick Roome: Yeah, I think that's a good callout. The Columbia Card Task is an established tool for measuring decision making and risk taking. So yeah, good point. I almost think about this as a problem where, how do you solve the snake oil problem?

 

And it's a digital snake oil problem in a lot of ways now, because there's AI for everything. And I think that's what the article is hinting at: these AI technologies are going to promise us solutions for some of these things that we're looking to solve, and it's not necessarily going to work the way that we intended it to, and therefore we as a society will engage in riskier decision making.

 

And this is a very cyberpunk theme, but there's this larger discussion around what it means to be human. We've had these types of conversations before. I think even in our round table with Frank a couple weeks back, we talked about augmentation and what makes you human in some ways.

 

Yeah. And so, I know we've had this conversation before: is augmenting yourself with a phone or any of these tools human, or is it a fundamental human trait to use tools to become better? I don't know. Just thinking about what this means for the impact of technology on our ability as a society to make good decisions, it worries me a little bit in some ways when you have lawmakers, decision makers, using some of these tools and they don't understand what's going on behind the scenes.

 

[00:22:27] Barry Kirby: Sort of yes and no. I guess, what is the impact on decision making and risk taking? Because even without technology, we all take risks by getting outta bed in the morning.

 

That's a risk. Crossing the road, that's a risk. Going to work, that's a risk. There's risk across everything that we do. Some of it we just do implicitly, without even thinking about it; our brain, our body does its own type of risk assessment. And we all have different risk appetites.

 

So is this just bringing some people who would maybe not be quite so risky up a level? Or would it make other people who are truly risky worse?

 

[00:23:11] Nick Roome: Okay. Let me rephrase, right. I guess the issue here that I think needs solving is when you have cases where taking risks and decision making are mission critical.

 

Let me put it that way. And I tend to put lawmakers and politicians in that bucket. So let me just bring a couple examples here, and I'll bring up one in healthcare. Imagine you have a patient using some sort of AI-powered monitoring device, or some sort of service that monitors your data, right?

 

And it promises to enhance their health by tipping you off to some things that are happening, like irregular heartbeats or irregular pulses; that technology exists, right? So if these patients believe that these devices are significantly improving their health, are they going to engage in riskier health decisions, like neglecting their checkups or ignoring minor symptoms that might actually be symptomatic of something greater?

 

Because they have these services that say: okay, your weight is this, and your pulse has been this, and your resting heart rate is this, and you've gotten this much activity, and here's other data that we have on you; it's clear that you're healthy, you don't need to go to the doctor. Will they engage in that risky decision to not go to the doctor?

 

That's one example, and that would be mission critical for you as a person to survive. But I think there's even a livelihood mission-criticality when it comes to job performance. Can you imagine employees with access to AI technologies, if they basically look at that productivity tool, okay,

 

and then say, oh, my productivity is enhanced because I have this AI tool? And I'm getting pretty close to the mark here as it impacts me with the podcast. My productivity is enhanced because of this tool. If they believe that these tools are significantly improving their productivity, there might be more and more tasks that are taken on to make the podcast better and bigger. And yet the actual impact of the tools might be minimal. That's what I'm trying to say here.

 

[00:25:33] Barry Kirby: That's true. But again, the one thing that we haven't explored is the type of risk. Because you're quite right, there's personal risk around, do you go and get advice?

 

The other health example is if a medical professional is presented with information that says, should you do an intervention or not. Would I be more risk averse or less risk averse? Less risk averse, if the consequence is actually fairly minimal.

 

So if I'm playing a game, the Columbia Card Task for example, chances are I'd be less risk averse because, again, there's no impact. If I am flying an airplane, or I'm making a critical decision on a nuclear reactor, et cetera, my risk appetite is going to change.

 

I wonder, from this study, whether that goes all the way through: whether the Columbia Card Task outcomes map well onto riskier decisions, or decisions with bigger potential outcomes or bigger potential negative

 

[00:26:42] Nick Roome: outcomes? Yeah, it'd be curious to put pilots in this. Yes. Okay.

 

We're gonna strap you into this BCI, you're flying the airplane, and Nick's gonna enhance your ability to fly and land this airplane. Don't

 

[00:26:54] Barry Kirby: worry. It's fine. We've done the ethics, we've done it. It's fine. That is a fair criticism of the study, isn't it? Just to quickly run through some of them: it was a fairly small sample.

 

You had twenty-some participants, so the ability to generalize the findings is fairly minimal. But it's still interesting, and that's why I quite like some of these studies; it just throws stuff out there for you to play with. And they didn't consider the actual individual differences between people, their personality traits, or any other individual differences that they had.

 

So worth highlighting that. But still, I think we have come to quite a strong conclusion that we do think it's quite an interesting study. Further research is required, of course, and I'm sure we could carry that out.
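To put a rough number on that small-sample point, here is a crude Monte Carlo sketch of statistical power. The assumed effect size, the normal approximation, and the group sizes are illustrative assumptions, not figures from the paper.

```python
import random
import statistics

def power_sim(n, effect=0.5, sims=2000, seed=0):
    """Estimate how often a two-group study with n participants per group
    detects a standardized effect of size `effect`, using a normal
    approximation to a two-sample test."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > 1.96:  # roughly a two-sided 5% criterion
            hits += 1
    return hits / sims

for n in (10, 20, 64):
    print(f"n per group = {n:3d} -> estimated power ~ {power_sim(n):.2f}")
```

Under these assumptions, a study with around 20 participants would catch a medium-sized effect only a minority of the time, which is why generalizing from one small sample is risky.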

 

And we've also found out how much you've expanded your repertoire of tasks just because the AI is there. I've gone the other way: because the AI is available to do it, it means I can get stuff done quicker. But I haven't really expanded what I do, except I have taken on another podcast, apparently.

 

But we just won't talk about that. So you've doubled your work. You've doubled your work. Yeah, that's just ruined my argument, so I'll take that back. But I did it knowing that the tools were there, and I don't use them to the same extent. In fact, do I use AI at all for that second one?

 

I don't think I use it at all for it whatsoever, because I don't do shorts for it yet. Do show notes? Nope, we freestyle. Oh, yes, we do a vague skeleton, so we write down the dates, but that's it. It's pure Ben and Barry.

 

Goodness.

 

[00:28:28] Nick Roome: Yeah, I still think there's a couple other examples out there that we can talk about, and I think you bring up some good criticisms of the study. And to lead, I guess, into some of the solution spaces: I brought up policy earlier. I'm not gonna talk about that a lot, but that AI Bill of Rights, or just some form of consumer protection,

 

is huge. And so being able to advocate for the consumers, being able to advocate for the end users at a policy level, I think is gonna be huge when it comes to this decision making, because there are laws around pilots, and in mission-critical decision making roles, there are laws.

 

And where I think this starts to bleed over, and the importance of policy starts becoming more relevant, is when the outcomes of those risky behaviors start to impact not only yourself but society negatively. Then I think there's going to be a larger call for it. And I think we might be there with some of this tech.

 

I don't know, that's a really cynical look. But

 

[00:29:38] Barry Kirby: In previous times when we've talked about this, I've been very much against doing a policy-driven approach, because I think it would restrict how we could use this technology. However, with some of my more recent explorations into the different types of people who are using this stuff,

 

and some of the examples that we mentioned, where people don't necessarily understand that it's either still in research or in development, that it might not actually be giving you the truth, that it's based on a statistical model, et cetera, and that it's not actually a person hiding in your phone:

 

there's an element here that people need to be saved from themselves, which is why we produce policy. And I'm now coming to the conclusion that I might have to agree with you that yes, I think there is a strong need for policy. Not least of which: you remember the conference that we live streamed from HFES last year, where we had Professor Paul Salmon and we interviewed him, and he was talking about artificial general intelligence.

 

And the thing that chimed with me fairly recently was, A, this idea that we need to think about the policy and how we do that, but B was the example that he gave us: when the artificial general intelligence realizes that if we realize how intelligent it is, we could switch it off, it pretends to be dumb.

 

And therefore, could it also do the same with our policy? If we are developing policy, chances are policy makers will be using ChatGPT to help develop some of their stuff. Therefore, if we start using AI to help craft some of our policy, will the AI be able to look at that and go, hold on a second, you're talking about me?

 

I'm not crafting that sort of policy, and therefore be able to do it. And yeah, maybe a couple of years ago we'd have said that strays into the ridiculous, but is it?

 

[00:31:32] Nick Roome: I think you're right. I also wonder, and this just came up: this was a one-and-done task. They put this thing on, and they were engaging in more risky behavior.

 

And so I'm wondering, what are some of the longer-term effects? Like, I've used ChatGPT now and I know what its output is, and I know what to use it for and what not to use it for. Will that understanding come over time for most people, or is it going to be one of those things where you have to have a certain level of knowledge about the underlying systems, or the way it works, before you start to understand the limitations?

 

Or aligning your expectations with the limitations of whatever tool you're using, right? So I'm wondering if long-term effects go away or are mitigated by learning through use. Let's say in this Columbia Card Task, they engage in more risky behavior, and then ultimately they realized that they were engaging in the risky behavior, and so they dialed that back a little bit.

 

It still doesn't offset the initial risky behavior, but does it normalize over time? And that's another question, right?

 

[00:32:42] Barry Kirby: And we've done exactly that; we'll go back to ChatGPT as the example. Because of the interface that ChatGPT has, it's literally just a chatbot chat interface, and it's taken us a while to work out,

 

and it's taken anybody who picks it up a while to work out, what the capability is. Which you do by trial and error: can you do this? Yes, I can. Here you go. That's not quite what I wanted. Can you rephrase it to do X, Y, Z? Can you give me some input here? And then as people experiment with it, they find it can do different things.

 

And so you build up your own mental model of what it can actually do. The kinds of things that I'm beginning to do are different to the kinds of things that you're beginning to do, which is very different to some of the things that people in the Facebook groups I'm a member of have been trying to do,

 

And then wondering why it wouldn't work. Yeah, I think it's, it, you've, you almost build up your own use case for it to certain extent. And then it's only through discussion thing. You've shared with me some of the things that you've done with it. I've shared with you some things I've done with it, and you're like, oh, can he, oh, will he do that?

 

And you play around with it and go, oh, crikey, yeah, I didn't realize we could go down that route. So how can we make some of that happen? I think it'll be interesting to see how it evolves and, as you say, how we train for it. Because again, human nature is, we find shortcuts.

 

It's the whole premise around human error, isn't it? In this type of thing, we will try and make our lives easier for ourselves, with good intentions, well-meaning, but we'll try and find shortcuts to do the job. We'll find ways of using AI that weren't intended.

 

So yeah, we'll see where we go with that. Yeah.

 

[00:34:22] Nick Roome: All right. Final thoughts here, Barry: tech placebos, AI technology, hype or help? Which one is it?

 

[00:34:31] Barry Kirby: Oh, it's definitely, I think it's the right argument: further research is required. Interesting case, so well done sending that one in.

 

[00:34:38] Nick Roome: All right. Yeah, for me, I think there are lots of interesting examples that we could look to. We talked a little bit about healthcare, we talked a little bit about construction, but I think some of the more interesting things are around education. We didn't quite get to those, and that's especially relevant as you start getting to that misinformation piece that we talked about earlier.

 

I don't know. I think this is a good story. I think this is a good conversation. I'm excited to see a longitudinal follow-up to this, to see, do these behaviors normalize over time? And I think that'll round out our discussion. So thank you to everyone this week, and especially our patrons for selecting our topic.

 

And thank you to our friends over at Aalto University for our news story this week. If you wanna follow along, we do post the links to all the articles on our weekly roundups in our blog. You can also join us on Discord for more discussion on these stories and much more. We're gonna take a quick break, and we'll be back to see what's going on in the Human Factors community right after this.

 

Yes, huge thank you as always to our patrons. We especially wanna thank our Human Factors Cast All-Access patrons, Michelle Tripp and Neil Ganey. We really appreciate all the support you give the show. And today we'd like to talk to you a little bit about Human Factors Minute. You heard it there in the little advertisement for the show.

 

But why listen to a polished advertisement when you can listen to us read a dumb thing here live? Hey there, all you audio aficionados, podcast people, and earbud enthusiasts.

 

Yeah, that was a bit of a stretch. Anyway, are you amped to chat about human factors? I bet you are. And we have just the ticket, or should I say, just the podcast. Now don't click away, don't go away. But hold on, I hear you saying, I've already spent countless hours diving into the world of human factors.

 

I can't spare another minute. My fellow factoid freaks, I promise this is a minute well spent. We have 181 episodes in the bag. That's almost four hours of pure, unadulterated human factors fun. And the best bit? Each episode is quicker than popping a bag of popcorn. So if you're a fan of fun-size facts, you can tune in while you're stuck in that morning coffee queue. Why wait? Join us on Patreon for the complete collection of Human Factors Minute. Did I oversell it? Okay, folks, let's get learning and laughing.

 

That bit's really long, Barry. And remember, it's all about having a hoot and getting smart. Okay, I'm always embarrassed with these. These are dumb. Why did I start doing these?

 

It Came From.

 

Yes. All right, switching gears, getting into It Came From. This is the part of the show where we search all over the internet to bring you topics the community is talking about. If you find any of these answers useful, give us a like to help other people find this content. All right, the first one up tonight here is from the UX Research subreddit.

 

This is by user Choice ADD 968: is product management cannibalizing UX research? They write: I'm a job seeker in Europe. I have noticed a trend where product management roles are taking on more user-focused responsibilities in small and medium companies. Is this trend cannibalizing UX research jobs? What is your opinion on the shift in job responsibilities?

 

Barry, what do you think?

 

[00:37:48] Barry Kirby: So, possibly an unpopular opinion, but I think it's a good thing. When you look at what the product manager is, particularly in the product owner role in Agile as well, they should be engaging with our human factors-y, user research type stuff, because their job should be to advocate for the user to make the product the best it can be.

 

So I see it as part of their job role. But part of it is also learning about their limits, and therefore: where can we pick up? Where do they need to call in the experts? What can they do themselves? Especially in small companies and micro-businesses, you've generally got people who take on many roles.

 

I run my own small company. I've had to learn how to do HR, how to do finance, and you don't see anybody else moaning about me picking up those sorts of jobs. Even in a human factors role, I do pick up other roles. I manage projects: I'm project manager, I'm program manager,

 

could be project finance as well, if I'm having to pick up that bit. So we don't really complain about it the other way around; we're flexible. So that's a very long way around of saying I think it's true, I think it is happening, but I don't necessarily think it's a bad thing.

 

[00:38:57] Nick Roome: Yeah. I see a lot of UX researchers go into product management roles, and it's being more and more seen as a move by a lot of UX researchers. I think there's a few things that happen when they assume those roles. One, you start to see this trend where PMs now suddenly care about the end user more, and that's because you have some of these research roles moving into product management roles.

 

And I think what's happening, at least from that perspective, is they still advocate for the user. But in product management roles, they might not have the full gamut of tools that the UX research team or the UX research role might have. And so they're advocating for the user, but do they have that full dataset?

 

Do they have the full picture of what's actually happening? I'm speaking about this from more of a larger-company, tech-company type structure, where they might have preconceived notions and they're acting on those as if those are well-researched facts about the end user. And so it can be dangerous in that sense.

 

Now, that's not to say that years and years of practice and understanding conventions and standards are not going to go into the development of a product, but I'm saying that there might be some shortcuts being taken without having full access to the research toolkits that are important for some decisions.

 

The other thing that happens with this is that the role shifts from being user focused to being product focused when you're a product manager in that role. And so you have a different mindset, and you inherently think about the user in a different way in relation to your product.

 

And I think this is different for smaller companies and mid-tier companies. It's interesting from that perspective too, because everybody wears a lot of hats, even in startups, right? You see this. And ultimately, does it matter if it's segmented? I don't think so. I think what matters is that the end

 

product keeps the user's needs in check. If that's a product manager doing it, okay. If that's a user researcher doing it, okay. Ultimately, as long as the user's being taken care of. Okay, let's get into this next one here. This one's also on the UX Research subreddit.

 

This one is by AGI 237: dysfunctional project, do I just take the money for now? As a contractor, should I continue working on a dysfunctional project and take the money, even though I'm not enjoying it or seeing any improvement, or should I look for something

 

[00:41:46] Barry Kirby: else?

 

Take the money. Oh, actually, that is my first comment: do you need the money? If they're paying and you're not having to do very much, is that enough for you? By the fact that you're writing the question, I would guess not; otherwise you wouldn't be questioning it.

 

I would caution that the perfect job doesn't exist: the one that completely employs human factors and all that sort of stuff in the right way, from the start all the way through the project. I've never seen one yet. And actually, for me, that's half the fun: fixing some of this stuff.

 

We have a very simple motto in my business, and we evaluate every single job by it. And we don't just do it at the beginning; we do it throughout the project, because if one of these elements stops working at any point, then we need to fix it. Whenever we're deciding whether to bid for something or work on something: is it interesting work,

 

has it got nice people, and does it pay money? It has to have all three of those elements in it for us to actually go and do the job. The money bit sometimes doesn't have to be much, as long as it pays the bills, but it's gotta be interesting work, you've gotta enjoy what you're doing, and you've gotta be working with nice people. Without those things there, you don't wanna get up in the morning; therefore, it's not life fulfilling.

 

So given that they're asking the question, I would say it is time to move on. It is time to go and find something else, because you wouldn't be asking the question otherwise. However, it does take time, effort, and money to do that. Everybody thinks that contractors get paid shed loads of money all of the time, despite some of the realities of life, so

 

make sure that you can afford to take time out to find the next job that you want to do. So that's, yeah, get outta there. Nick,

 

[00:43:30] Nick Roome: what do you think? Yeah, I think there's an interesting calculation here: what's the cost of happiness to you? Can you find another job that

 

pays you? Is the difference between this job and another job the cost of happiness? How much does happiness mean to you? How much does job fulfillment mean to you? Quantify it, put a number to it. That's an interesting approach that you can take to this.

 

Because if it's worth it to start the process of trying to find another job, trying to find more work, is it worth it to you to go through all that and get paid less? At the end of the day, it will matter differently depending on the person, but I'm just saying, do that, because if you can put a number to it, then you can quantify your happiness, at least within a role.

 

Is this fulfilling to you? Is this enhancing your skills, and what is the cost of enhancing your skills over time? I think there are some ways to get around this rut that I would say this person is in. Projects change over time, and so there's a, I wouldn't say strong possibility, but there's a possibility that the project changes and people become more receptive.

 

I think some of the issues that they have are that the research isn't being done quite as frequently, and so things might change; the requirements might change over time. So even though it's not great now, it might be great later. You never know. And then the last piece of advice I'll give: if you're not feeling fulfilled, but it does pay the bills and it's what you need, versus getting paid less,

 

I think there's other ways to stretch your passions: get involved with a passion project, volunteer, do something like that, or build a passion project and work on a set of skills that can help you at least feel good about some of the stuff that you're doing. So there's a couple approaches there.

 

I think this is not a great situation to be in. I feel for anybody in this situation. That kind of sucks, when you're making so much money that you don't wanna leave, but you're unhappy. That's some advice; take it or leave it. All right, this last one here is also from the UX Research subreddit, by tool thru.

 

Has anyone done pro bono work while job searching? What was your experience? They write, I'm job searching and feeling anxious about not being able to use my skills. Can somebody share their experience doing pro bono work while job searching? I miss doing work. I wanna build my portfolio, but I don't know where to start.

 

Barry.

 

[00:46:01] Barry Kirby: So on the one hand, you can say that you've got less spare time. And yeah, the pro bono work is useful to build experience, or build new skills, or keep your skills going. But when you're doing that sort of stuff, it takes your time away from actually job searching, job hunting, doing those skills.

 

Because actually, that's your job at the moment: to get a paying job. Pro bono work in itself is different from doing your own pet project. Your own pet project you can pick up, put down; you're the boss of it, that's fine. But when you're doing pro bono, just because you're giving it for free doesn't mean that the standards go away.

 

Doesn't mean that the requirement goes away. You still have to see it through, because you will use that as part of your portfolio. You're probably gonna want a reference from them, and if it all goes badly, that will still reflect badly on you, whether you've done it for free or not. It doesn't matter.

 

So you've still got standards, and you've still got your own personal pride. So just be careful with pro bono work. And I see pro bono work as slightly different from just pure volunteering work as well. So it can be a double-edged sword.

 

So I would be focusing your time on getting your job searching right. If you've spent a long time job searching, there is something going wrong with your job application methodology, with what you're putting into your CV, how you are doing that sort of stuff, rather than what is in your portfolio, I would suggest.

 

Nick, what do you

 

[00:47:24] Nick Roome: think? Pro bono work is hard. I think what it ultimately comes down to is, what are the requirements of the project? Is this something where you have signed a contract saying that you will do it, or is this just something where you've approached a company and said, hey, I'd love to do this thing for you? Because there's a difference in the way that you approach those types of projects.

 

I think for a company, it's less risky for them to say, okay, yeah, go ahead and do this thing, if you're gonna do it for free. And then when you come back to them, they can either take that feedback or not. Now, do you really have a portfolio piece if they haven't implemented that feedback, or if they haven't implemented that project?

 

I would argue yes, you do, but it's not as strong a case as if they actually take it and run with it. Let's say you're doing a website for a local coffee shop or something, right? You could give them that website. Are they gonna take it and use it? Maybe. If they do, you have a stronger case for a portfolio piece.

 

If they don't use it, then you have a less strong case, and it just looks like you've done this as, like, a school project or something. So it's hard. But on the other side, if you sign a contract for something, then you're absolutely right, Barry: you're taking away time that you would be spending elsewhere, searching for a job.

 

I don't know, it's hard. I would say maybe find a passion project instead, one that you can use to build skills and feel like you're not atrophying. So that's it. All right, let's get into this last part of the show, which we just simply call One More Thing. Barry, what's your one more thing this week?

 

[00:49:07] Barry Kirby: So this week we've had nice weather. It's been really sunny; in fact, too sunny. I'll come back to that. But it's been really warm, and you forget, through the winter months and through the early spring months when it's still wet, still rainy, still windy, just how uplifting getting outside, getting some vitamin D, getting into the sunshine is.

 

So I thoroughly enjoyed it. We had a bank holiday weekend, where normally in England, in the UK, when we have a bank holiday it normally rains, because everyone's off. And it didn't; it was beautiful, it was sunny. We had a barbecue, we did all that. But then there is an underlying moral to this story of really hot sun.

 

When you are sat out in the backyard, maybe taking your t-shirt off whilst you're reading a book: wear sunscreen. Because I didn't, and I regret my life choices. I took a riskier option, non-AI supported. I just forgot. And I am now doing a very good imitation of a lobster.

 

[00:50:08] Nick Roome: You can argue that sunscreen is human augmentation for protecting from radiation poisoning.

 

Which is what sunburns are,

 

[00:50:18] Barry Kirby: Which is true. And I didn't augment myself with my sunscreen and I now do regret my life choices.

 

[00:50:25] Nick Roome: Sometimes it goes the other way for me. I had a weird thing happen where I was losing a lot of drive and a lot of steam and a lot of motivation. I think some of it's probably related to coming back from vacation, even though that happened two, three weeks ago.

 

And then I found my pen. I have a pen on my desk, and this is like a neurodivergent thing, but having a to-do list has just turned me into a productivity machine. It's amazing. Just having a pen and paper; and if it's not in the right place, then it's gone.

 

I've lost it; it's never gonna happen. And then as soon as I find it, I start writing stuff down, and then boom, I have something to cross off and actually go and do. So I don't know, if you've lost your drive, maybe find a pen and paper and put it on your desk. That's all. Alright, that's it for today, everyone.

 

If you liked this episode and enjoyed some of the discussion about human augmentation, I'll encourage you to go listen to episode 273, where we talk about how a third robotic arm might be the future of human augmentation. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can join us on our Discord community, or join us on our pre- or post-shows during our Thursday live broadcast.

 

Like I said, for more in-depth discussion, there's the Discord community. Visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like what you hear and you wanna support the show, as I mentioned at the top of the show, there's a couple things you can do. One, stop what you're doing and leave us a five star review.

 

We love those. Two, you can always tell your friends about us. Let 'em know that these cool guys talk a whole thirty-something minutes about human augmentation and AI and stuff. And then three, if you have the financial means, just a buck gets you in the door with our Patreon. We really appreciate all the support that comes our way, because it goes right back into the production of this show.

 

As always, links to all of our socials and our website are in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they want to talk about, I don't know, what to do with ChatGPT, or less risky options?

 

[00:52:27] Barry Kirby: You can find me all over social media, particularly on Twitter at bamus.

 

Okay. Or if you wanna also listen to interesting interviews with people in and around the human factors community, you can find me on 1202 The Human Factors Podcast, which is at 1202podcast.com.

 

[00:52:40] Nick Roome: As for me, I've been your host, Nick Roome. You can find me on our Discord server and across social media at Nick underscore Rome.

 

Thanks again for tuning into Human Factors Cast. Until next time, it depends!

 

 


Barry Kirby

Managing Director

A human factors practitioner, based in Wales, UK. MD of K Sharp, Fellow of the CIEHF and a bit of a gadget geek.