Dec. 1, 2025

E310 - Congratulations! You are now responsible for whatever AI just did

This week on Human Factors Cast, Nick Roome and Barry Kirby dive into how AI is reshaping human factors and UX roles. First, we discuss seven emerging jobs due to AI, such as AI decision auditor and transparency designer. We debate which roles are actually new versus functions human factors professionals already perform. Then, we explore the implications of AI in creative fields and education. How should we adapt our teaching methods to integrate AI as a tool rather than a substitute? Plus, real-world examples of how AI changes jobs rather than eliminating them. If you're involved in human factors, UX, or system design, this is a must-listen on integrating AI into your work and on the critical importance of AI governance and ethics.

Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here!

Mentioned in this episode:

Listen to Human Factors Minute

Step into the world of Human Factors and UX with the Human Factors Minute podcast! Each episode is like a mini-crash course in all things related to the field, packed with valuable insights and information in just one minute. From organizations and conferences to theories, models, and tools, we've got you covered. Whether you're a practitioner, student or just a curious mind, this podcast is the perfect way to stay ahead of the curve and impress your colleagues with your knowledge. Tune in on the 10th, 20th, and last day of every month for a new and interesting tidbit related to Human Factors. Join us as we explore the field and discover how fun and engaging learning about Human Factors can be! https://www.humanfactorsminute.com https://feeds.captivate.fm/human-factors-minute/

1202 - The Human Factors Podcast

Listen here: https://www.1202podcast.com

Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!

Vote Here

Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp
  • Neil Ganey 


E310 (Audio) - Congratulations! You are now responsible for whatever AI just did

===

[00:00:00]

Nick Roome: Hello everybody, and welcome back to another episode of Human Factors Cast. This is episode 310. We're recording this episode live, November 28th, 2025. I'm your host, Nick Roome, joined today by Mr. Barry Kirby.

Barry Kirby: Hello there.

Nick Roome: "The human factors guy," as you can see. I need to change that. Can we change that? We need to change that back.

Nick Roome: We've got a great show for you tonight. We're gonna be talking about the seven new jobs that are coming with the advent of AI. We always talk about AI taking people's jobs, but this is kinda the flip side.

Nick Roome: I really have nothing else. Barry, what's going on over at 1202?

Barry Kirby: Not a huge amount, really. As we've mentioned before, we're taking a break for the rest of 2025, so we're looking forward to January '26, and we've started curating guests. I have a few lined up already, so we're gonna have to get going.

Barry Kirby: We're in two minds about whether we start recording some in December, but we wanna get going in January. But in the wider ergonomics field, and I think I mentioned this last time, we're looking forward to 2027 [00:01:00] ('cause we'll just ignore 2026 completely), because the IEA triennial is coming to London. That should be a really cool event. I've already seen a lot of the planning happening for it, and we know tickets are starting to go on sale fairly soon. So it'll be a really cool opportunity for anybody who wants to get involved in HFE stuff, wants to come to London, and perhaps meet up and do stuff.

Barry Kirby: Then we can start making dates already. It'd be great.

Nick Roome: That would be awesome. I hope to make it out there for that. That would be super cool. Okay, well, let's not bury the lede. I think we should just get right into it. What do you say?

Barry Kirby: Let's crack on.

Nick Roome: I wouldn't say that's a new record, but that was pretty quick. Barry, what's the news story for this week?

Barry Kirby: So the story this week is about how the real AI crisis isn't job loss; it's governance. We are rolling out high-risk, opaque systems without the human expertise needed to question, monitor, and be accountable for them.

Barry Kirby: Using the metaphor of a cake [00:02:00] baked with salt instead of sugar, the author of tonight's story warns that you can't retrofit transparency, traceability, or human oversight after deployment, especially under regimes like the EU AI Act, and in a world where just 250 poisoned files can quickly compromise large language models.

Barry Kirby: They lay out seven emerging custodian roles, like AI decision auditors, human accountability architects, and AI risk stewards, that sit squarely in the human factors wheelhouse: people who understand sociotechnical systems, who can challenge AI reasoning, and who can design processes that preserve human responsibility.

Barry Kirby: For human factors practitioners, this is basically a call to step into the cockpit of AI, to build the skills in governance, scenario thinking, and data provenance, so that when machines generate outputs, humans still own the outcomes. Nick, what are your thoughts on this?

Nick Roome: Oh, I have plenty of thoughts, but before I [00:03:00] do that: you mentioned a couple of the roles.

Nick Roome: I'm gonna bring up this graphic. Oh my God, we got this set up in the pre-show and then it just messed up entirely. Alright, so we're gonna bring up this graphic here to actually go through the roles before we start. So let's talk about these seven different positions, what the author says they are, and then I'll give my thoughts.

Nick Roome: Cool. Alright. The first one that we have is the AI decision auditor. This is somebody who questions and validates the AI before implementation. I'm just gonna read the baseline stuff; I think there's a lot here. The human accountability architect is number two here, and they design processes that preserve human responsibility

Nick Roome: even as systems become more autonomous. You have the multimodal interaction designer, who orchestrates how multiple AI agents collaborate with human oversight. You have the AI risk steward, who monitors systems for [00:04:00] drift, degradation, and bias emergence. You have the responsible AI implementation strategist, who translates between the technical teams who build and the legal teams who interpret, ensuring compliance is not theater but embedded practice.

Nick Roome: You have number six here, the AI drift and integrity analyst. They track model performance over time, identifying when systems produce unreliable results, before regulatory audits discover the problem. And then the seventh position here is the transparency and explainability designer. They translate AI decision making into understandable explanations.
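[Editor's note: of the seven, the risk steward and the drift and integrity analyst are the easiest to make concrete. Below is a minimal sketch of what "monitoring for drift" can mean in practice, using the population stability index over a model's score distribution. The function, the synthetic data, and the 0.2 threshold are illustrative assumptions, not anything from the article.]

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions. A PSI above roughly 0.2 is a
    common rule-of-thumb flag for meaningful drift (a convention, not
    a standard)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    c_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, clipping to avoid log(0).
    b = np.clip(b_counts / b_counts.sum(), 1e-6, None)
    c = np.clip(c_counts / c_counts.sum(), 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

# Hypothetical usage: model scores logged at deployment vs. this week.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2.0, 5.0, size=5000)  # stand-in for logged scores
current_scores = rng.beta(2.6, 5.0, size=5000)   # the distribution has shifted

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:
    print(f"PSI={psi:.3f}: drift flagged, escalate to human review")
else:
    print(f"PSI={psi:.3f}: within tolerance")
```

The mechanical check is the easy part; the role the article describes is deciding what to measure and who gets told when it trips.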

Nick Roome: Okay, my thoughts. One: seven seems unrealistic. I'm just gonna say that right out of the gate here. I think that a lot of these hats are hats that folks like you and I probably already wear.

Barry Kirby: Mm-hmm.

Nick Roome: [00:05:00] Especially when it comes to designing the process to preserve human responsibility, to collaborate with human oversight, to do some of the risk analysis.

Nick Roome: I think we already sort of embody some of these roles. And I disagree strongly with some of these positions. Take position one, for example: I don't ever see a world in which we have a dedicated role for somebody who is questioning and validating AI before implementation. That would be rolled into one of these other roles.

Nick Roome: I think we already do some of this in some of the roles. And unless you are a company with a lot of resources, a lot of money, that really wants to create roles dedicated to these things, I just don't see it happening in small businesses.

Nick Roome: I don't see it happening in medium-sized businesses, even. These are [00:06:00] hats that we wear. These are not jobs that are being created. I don't think most of these will actually see the light of day as something you would see on LinkedIn, like "we're hiring for the AI risk steward."

Nick Roome: I don't see that. I see that as something that is part of a larger role, and these are hats that you wear within those roles. Yeah, that's my very cynical, very critical lens on this. Barry, what are your thoughts on this?

Barry Kirby: So I guess, I mean, we're already at, what now, the fifth industrial revolution? And one of the big things around that is that as things happen, jobs disappear, but then more jobs are created.

Barry Kirby: So fundamentally, this idea that we are going to see new or different types of roles because of AI, because of autonomy and automation: yes, I think that's absolutely right. I think new jobs are gonna be created, or probably better still, new roles. I think it's more [00:07:00] new functions, very much aligned to what you were just saying.

Barry Kirby: I think a lot of these things that are here, yes, they're new things to be done. They're new tasks, they're new functions. But whether they exist in their own right or they just get subsumed into other bits: I've got some thoughts on them roles that I'll come onto at the end.

Barry Kirby: But I've been talking about AI and HF particularly in the past, I would say, six months. A lot of people have asked for presentations and things on the use of AI in the human factors role and about how we're using it. So this is really topical. It's a really good thing for bringing on more comment, more discussion, because we need it.

Barry Kirby: We need to really understand what we as HF practitioners do in this role. And it's not just good enough to say that with a traditional HF education you can come and just go straight in and do all of this. We need to include, I think, AI and autonomy in the SQEP boundaries of an HF practitioner, the areas where we're suitably qualified and experienced.

Barry Kirby: It's getting [00:08:00] to the point where it cannot be a specialism anymore. It's gotta be part and parcel of what we do, in the same way anthropometry is, in the same way cognitive understanding is. So we need to bring all of that in there. And as we see some of these roles, I mean, large language models: we are now seeing prompt engineering as a discipline, which didn't exist, what, 18 months ago, two years ago?

Barry Kirby: Whereas now we're getting a lot more drive and guidance around prompt engineering. So we are generating these new roles. For me, one of the big things in all of this is around how we test AI. And this is certainly something that, again, I don't think is a new role, but it's certainly a new function.

Barry Kirby: How do we make sure that the output is what we expect it to be? Because AI has given us a whole new view on outputs. With automation, if you give a system a defined set of inputs, it will always give you the same output. We are not [00:09:00] gonna get that with AI.

Barry Kirby: We are gonna have a variety of outputs that are still right, and that's gonna change how we do systems engineering. But I had a bit of a doodle with them seven roles, to work out where I think they fit. So you mentioned the AI decision auditor.

Barry Kirby: Well, isn't that just the assurance role? Human factors integration is all about assurance, and it kind of just fits into that really well. I would then also link that to number six, which was the AI drift and integrity analyst. Again, that's just assurance. So I think them two things are part of a role we already have.

Barry Kirby: The human accountability architect: now, again, that's ethics. So fundamentally, you could say it's part of the integration role. But I could see a separate function of ethical architect, because it's something we haven't really had to deal with up to now. We've always said that human in the [00:10:00] loop piece should always be there.

Barry Kirby: And that policy's getting more and more challenged as technology moves on. So I could see a potential new role, not just for AI but in the wider space, around ethical architectures. And that also brings in responsible AI implementation. Things like the multimodal interaction designer, transparency and explainability:

Barry Kirby: that's just UI design, that's just HCI design, I think. And AI risk is just risk engineering. So out of all of them, I see maybe one new role, and I don't think even that is probably new; there's something very aligned to it already that will probably take it in. But there are certainly new functions that will form big parts of roles.

Barry Kirby: So, yeah, I think it's interesting.
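[Editor's note: Barry's testing point a moment ago, that the same input now yields a variety of acceptable outputs, is often answered with property-style checks: stop asserting one golden answer and instead assert properties that every acceptable answer must satisfy. Below is a minimal sketch under that assumption; generate_summary is a hypothetical stand-in for whatever model call is under test, not a real API.]

```python
import json

def generate_summary(incident_report: str) -> str:
    """Hypothetical stand-in for a non-deterministic model call. In real
    use this would invoke an LLM; here it returns a canned response so
    the harness below runs as-is."""
    return json.dumps({
        "severity": "high",
        "summary": "Hydraulic pressure loss reported on approach.",
        "actions": ["inspect hydraulic line", "review maintenance log"],
    })

def output_property_failures(raw: str) -> list:
    """Check properties every acceptable output must satisfy, rather
    than comparing against a single expected string."""
    try:
        out = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    failures = []
    if out.get("severity") not in {"low", "medium", "high"}:
        failures.append("severity outside the allowed vocabulary")
    if not out.get("actions"):
        failures.append("no recommended actions given")
    if len(out.get("summary", "")) > 500:
        failures.append("summary too long for the display it feeds")
    return failures

# Sample the same input repeatedly; every sampled output must pass.
for _ in range(20):
    problems = output_property_failures(generate_summary("..."))
    assert not problems, problems
```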

Nick Roome: Interesting times. You made a point about how jobs come and go, and if there's not a better example of why this is such a revolutionary time for the way that we think [00:11:00] about work: you brought up the example of prompt engineer, and that's on its way out already.

Nick Roome: Yeah. Because the models are now accommodating for the fact that not everybody is a prompt engineer. You can still do prompt engineering, but the models are getting better and better at understanding user intent. GPT-5.1, right? This is a snapshot in time, by the way; if you're watching this three years in the future, I am so curious how things have changed since November 28th, 2025.

Nick Roome: With prompt engineering right now, the models are pushing back and saying, well, did you actually mean this or that? I don't know if you've gone on to ChatGPT 5.1 lately; that's the current model that we're on. If you ask it something, it will actually come back and ask for clarity.

Nick Roome: It will do deeper thinking to actually reason out what it is that the user is trying to do. So those models are becoming better at understanding user intent. Even that role you mentioned, prompt engineer, is coming and going already, and AI's only been [00:12:00] mainstream for a couple of years at this point.

Nick Roome: And so I'm wondering how many of these problems will go away in the next few years as we start to think about these. I don't think ethics will ever go away. I think ethics is always going to be a discussion, especially around who's responsible for a decision when there are suggestions coming from some of these AI models.

Nick Roome: And when we talk about AI, we're not just talking about LLMs. Those are the quickest ones to point to, but keep in mind that there are other AI models that specialize in things that are not LLMs. So let's take the example of being able to detect cancer in imaging, right?

Nick Roome: Yeah. There are specific models that will look at imaging to detect whether or not cancer is present. There's always a doctor review to make sure that it's there, but this will flag things that maybe were not previously seen. It'll hopefully not flag too many false positives, [00:13:00] but that is another type of model that we could apply these things to.

Nick Roome: And I think that's important to remind ourselves of, to ground ourselves: we're not just talking about LLMs here.

Barry Kirby: Absolutely right. I mean, when you look at the full breadth and depth of what AI is, you've got things like neural networks, et cetera, et cetera.

Barry Kirby: There are lots of other examples of where this comes in. But getting this right now, I think, is quite key, because we are gonna be having artificial general intelligence coming around. We're not there yet; we've done interviews and discussions before where we mentioned the work by Professor Paul Salmon and them sort of people, where they look at this and what the risks of AGIs could be.

Barry Kirby: But if we get these sorts of, not necessarily roles, but SQEP functions, where we need to be suitably qualified and experienced in AI within the HF roles, we need to start thinking about this now in order to make this just business as usual. So I think you're right: ethics is [00:14:00] always gonna be there.

Barry Kirby: Assurance is always gonna be there; we're always gonna have to do some sort of systems assurance of how we do that. We're always gonna have designers, and then there are the UI design or HCI design elements of this. We're already talking now about making users of these systems have better situational awareness, situational understanding, which has always been an HF problem.

Barry Kirby: When you're looking at any sort of interface, any sort of display, any sort of output where decisions have been made that are not immediately obvious, you've gotta be able to dig into these types of things and make sure they're available. I mean, one of the interesting problems I was having a chat about the other day is:

Barry Kirby: what happens when, not at the moment, while we're still questioning everything that AI does, but in about, I dunno, five or ten years' time, we are using some of these tools and we don't interrogate where the knowledge comes from? We don't think that the knowledge base could have been poisoned or compromised in any way.

Barry Kirby: And we just [00:15:00] take it for granted, in the same way as we now use mapping, sort of like Google Maps and them sort of things. We used to, when you put in the destination of where you want to go, it would show up your route, and you would query it: why is it sending me along that route?

Barry Kirby: Why is it doing this? Why is it doing that? We don't do that anymore. You just put in your destination, you press go, and you start following your turn-by-turn directions. Very few people now question why they're going by a certain route when it's given to them by their mapping software. If we're not careful, we're gonna be in the same place with AI, because people will trust that what the machine is putting out at you is absolutely right.

Barry Kirby: And the analogy is there with air crew, with pilots. Across civilian and military aviation there was a huge drive on this, because as you get more and more information within the cockpit, pilots would be looking heads-down at the systems rather than looking heads-out at the airspace [00:16:00] around them.

Barry Kirby: And so more and more accidents were happening because they were heads-down sucking in the data, not actually looking at where they were going. So more and more was being done to make sure that pilots could be heads-out. And we're seeing that in vehicles as well. We've gotta be in the same place with AI and autonomy: if we're not careful, people will rely too much on it without questioning what they're actually seeing.

Nick Roome: Yeah. I want to go back to your exercise of thinking about these roles in their own lanes, because I think you have it pretty close here, right? You've essentially combined the seven down into four.

Nick Roome: And to think about, from a human factors perspective, what our role will be over time: I think that we have, um... is this a pottery term? We have fingers in all these pots. Is that a pottery term?

Barry Kirby: I think it's fingers in all the pies.

Nick Roome: Fingers in all the pies. [00:17:00] Pies! Oh, okay.

Nick Roome: Alright, baking. We have fingers in all these pies. Pottery's on my mind now. Let's see. So, thinking about what we would actually do, right? Let's think about the AI decision auditor. I think about this in two ways. There's the way that the author has described it: they're questioning and validating the AI before implementing.

Nick Roome: But I also wonder if there's some auditing that goes on after, post-AI-decision, right? They cite here that there's human oversight for high-risk systems, but human oversight without the expertise to question is theater, not governance. And so I think, from a decision-making perspective, we will need to provide better messaging when we give those outputs.

Nick Roome: In terms of the human accountability architect, this is an interesting one. I don't know if [00:18:00] we necessarily have that much sway here, but it is an interesting question of who's responsible for a bad decision. And that's a direct result of number one, right? You have a decision auditor who's looking at the decision that maybe an AI made, or maybe the decision of implementing that AI.

Nick Roome: Are they at fault, because they're the ones who recommended the program? Is the person who made the decision at fault, because they're the one that ended up making the call, the human in the loop? So with the accountability architect, I think we need to be really careful with this role. It's difficult to talk about, because if the human doesn't have all the information, are they really at fault?

Nick Roome: Like, this is a huge ethical one here.

Barry Kirby: It's an interesting one, isn't it? Because this leans very heavily into systems-of-systems engineering. So it's that whole SoS piece. If you took out the [00:19:00] term AI and you have multiple systems working together, where does the human responsibility sit within them?

Barry Kirby: If you've got multiple systems, you should work out, as part of your HFI or HSI-type work, who has responsibility where, and make sure that the human-in-the-loop decision making is happening with each individual system,

Barry Kirby: and then create almost a hierarchy, or a throughput of that, for your system of systems. So really all we're saying here is that some of them systems are AI, and so technically the thing already exists, which really sort of kills my argument for the separate ethical architect. Delving into this a bit deeper:

Barry Kirby: is this just the systems engineering role, or is there a systems engineering ethical element to it? I think there is. I still think that in today's systems, given where we're going, there is a role for an ethical architect, or an adjudicator, or something of that sort, which could wrap into your [00:20:00] HFI/HSI

Barry Kirby: manager role. I think it's possibly a bit specialist, 'cause there's a leaning there to a legal, a humanitarian bit as well, which doesn't naturally fall into our training. But my almost go-to test for a lot of this, especially when you're doing, say, task allocation, task analysis, and things like that, is: what happens when you take the term AI out of any sort of phrasing?

Barry Kirby: Take the term AI out of it; what does it sound like? And in this case, for me, this is just SoS, and how that runs.

Nick Roome: Yeah. I'm gonna move down the list here to the next one. I think you're right, Barry. But along with those multiple systems:

Nick Roome: you have AI agents collaborating, and then you have the human working alongside those agents, or overseeing those agents. And again, I see that as just: how does the human integrate with the system? Then you have the AI risk steward, who is looking for drift, degradation, bias emergence.

Nick Roome: This is an evaluation role. This is [00:21:00] somebody who's evaluating the performance of a tool, and how you set up those metrics will be on the person. But right now this is done by folks like cyber, right? This is somebody in cybersecurity looking at whether or not this is an appropriate model. I think there's likely some human factors going on there too, like picking the right tool for the task that's at hand.

Nick Roome: When you look at things like the responsible AI implementation strategist, I think you got it right there: it's what you called the ethical architect, but I think there's also systems engineering in it. The drift and integrity analyst: once again, this is measuring performance, performance evaluation of a model, and seeing if there needs to be an upgrade to the model or a change in the model.

Nick Roome: And then the last one: translating the AI decision making into understandable explanations. That's human factors. That's core human factors. [00:22:00] Now, I will say I feel like I've been quite critical of this author so far, and I want to let up just a little bit, because I think their main message isn't necessarily "these are the seven jobs that are coming."

Nick Roome: I think the main argument the author is trying to make here is: hey, we should be paying attention to these. And they use the analogy of salt in the cake.

Nick Roome: And what does that analogy mean? That analogy means that once you bake a cake with salt, you can't undo that. You have to start over with your cake.

Nick Roome: And so I think what this author is recommending is that we start to think about these things early and often when incorporating AI into our workflows, into our jobs, into the way that we conduct business. And I think that is a very good argument for human factors, and so I'd like to thank this author for doing that.

Nick Roome: You know, we disagree on the roles here, or on the jobs that would be created, but I think ultimately we agree on the [00:23:00] concept that this stuff needs to be considered as we start to move forward with it. They're right: if you don't bake this stuff in early, you can't easily fix it later.

Nick Roome: I would say it's not quite the same as baking a cake with salt, 'cause some of these I think could be applied post hoc. But there are some things that need to be applied as you're thinking about the process as a whole.

Barry Kirby: One other good thing that they bring out (I dunno if you can bring up the graphic again, but scroll all the way down to the second graphic that's there) is where they talk about AI governance and some of the skills that go with it.

Barry Kirby: Because actually, I think this is just good systems engineering again. But it is stuff that we do need to be thinking about. So it's accountability; it's legal versus moral, which goes into the ethical side of things; [00:24:00] it's systems thinking and adversarial thinking.

Barry Kirby: This is something I've talked about twice in the past two weeks: good systems engineering gets you thinking about outside agents. What we always forget is that there are adversarial agents, there are threat actors, and more so now with AI, 'cause there are more places to be able to do that sort of thing.

Barry Kirby: So we do need to bring that back into systems engineering: provenance, and challenging what's going on. So again, what I really like about what the author has done here is really facilitate this discussion. And in reality, as I think I said right at the beginning, we as HF practitioners should really be looking at this and saying: right, what does this mean to us?

Barry Kirby: How is our job changing? What are the skills that we need as part of our basic bread and butter? What do we need in our standard toolbox to be managing and doing HF in an AI world? Because we can't ignore it. It's here. In fact, it was here yesterday. It's gonna be here tomorrow.

Barry Kirby: And we are the ideal [00:25:00] people to be doing this if we take that challenge.

Nick Roome: Yeah. There's so much more that I wanna talk about with this, and hopefully we can fit it into the next 10 minutes or so, so we have some time at the end. If we go a little longer, that's okay. But some of the things that I wanna talk about here: one, what are the jobs that are missing from this?

Nick Roome: I don't know if there are any that are obvious to you, but something that I've been thinking about a lot lately is the way in which, for better or for worse, our jobs will change because of how systems need to be designed. They can't only be designed for humans; they need to be designed for agents and models as well, right?

Nick Roome: For these models to do something well, they need access to the data that a human is requesting, and ultimately the models, and how they perform behind the scenes, are going to have [00:26:00] impacts on the human performance at the other end of those results. And so for me, one thing that is not necessarily here is a designer that is solely focused on how an agent or a model might interact with data.

Nick Roome: And I don't know if that's necessarily our role as human factors practitioners. I would argue that it is strongly not our role, but we would definitely have some say in it, because the way in which that model or agent accesses that information and then relays it to the human at the other end is really important, and we wanna make sure we get that right.

Nick Roome: I think that is more of the systems engineering approach as well, but it's something that I don't see mentioned here. Are there any roles that you see, or that you've been thinking about, that are not represented here?

Barry Kirby: I was playing around with the idea of agent managers.

Barry Kirby: So if we have a good [00:27:00] AI agent for a particular role, how do we manage that in such a way that it's reusable and safe? It goes back to that threat-actor type scenario as well. I don't think it's an HF role, but I think it is a role: how do we look after an AI agent that's out there, either in the wild or however we deal with that?

Barry Kirby: I was also slightly playing with: what about historians? Every time we play with the AI and regenerate agents, regenerate things on the fly, how do we keep the historical record of what's gone before, so we make sure that we're learning from experience and not trying to build new every time?

Barry Kirby: And I've called it "historian" to make you think about the historian in the [00:28:00] wider sense: the people who note down what's happened in history, to try and bring that forward. I just think there's something there. A bit like how we use, say, LLMs at the moment: rather than go back and look at maybe previous chats to do something which, you know, you've done before, you'll probably just chuck it into a new prompt and say, do it again,

Barry Kirby: rather than scrolling through your history, 'cause you dunno whether it's there. We almost need something to be able to easily check, when somebody says, oh, I need something to do this: oh, we've done that before; you can reuse that and work forward. So there's an element of that. And then, if we again have this approach with AI, that it can regenerate, it can learn and grow over time, how do we identify best practice?

Barry Kirby: What becomes best practice with AI? Is that how we generate it, or how do we know what is best practice within the AI itself? I don't quite know what I mean by that, if I'm brutally honest, but I just know that there's an element of wider learning that's not just learning within itself.

Barry Kirby: So, [00:29:00] yeah, that's sort of what I've been playing with.

Nick Roome: Yeah, it's interesting. And beyond the roles themselves, I always like to link it to how we as a society, and how culture, are responding to this. I think there's likely going to be, what's the best way to put this,

Nick Roome: like a shift in trust. Yeah. I think there's already a large subset of people that will take what AI says at face value, and I think this is important for us as human factors practitioners to come in and say: you can't just trust everything. We gotta do some training.

Nick Roome: We have to basically provide them with scenarios where you're showing them how it could get something very wrong. And I wonder, you know, when you sit down and watch the HR training videos, whether there are going to be training videos on AI. There might already be some, but to really drill it into corporate [00:30:00] that you can't take what AI says at face value yet.

Nick Roome: It will get better, but it's not there now. And so I wonder if we're going to be shifted towards a society that trusts less, which I think is dangerous in a lot of ways, but let's not get too much into that. I think there's also, just generally from my perspective, a societal trend towards negative attitudes or feelings towards AI.

Nick Roome: And a lot of this trend, I will say, is sparked by some of the creative tools that AI is focused on right now: the image generation, the video generation, the creative writing, right? Those types of things, I think, people look at and go, ah, so art is dead. I don't think so. I really don't think so, but I can see the concern when somebody can whip up a video in half [00:31:00] the time that it used to take, because now they're just spending their time on prompts and figuring out the right lighting and all that stuff in a prompt, versus actually getting all the resources to make that shot.

Nick Roome: I wonder if we will see that narrative change. The narrative right now is "AI will take your job," and that is so beyond what is actually happening. I think we've said this on the show before: AI's not gonna take jobs as much as it will change your job.

Nick Roome: And I wonder if there will be a cultural shift once AI reaches the worker class. Are there going to be AI built into retail systems where, when you're at the register and somebody has purchased a number of things, there's an LLM on the cashier's till that says, "would you like to also get this," tailored towards that person's interests based on what they [00:32:00] just bought?

Nick Roome: And then they have to recommend it to 'em. And please, corporations, do not take that idea. That's terrible. Do not. But I am wondering: will the narrative change from "AI is taking jobs" to "AI is changing what jobs are"?

Barry Kirby: I think there's definitely something to that, because all the way through history, as technology itself evolves, jobs change.

Barry Kirby: And some jobs do go away. Take coal mining: I live in Wales, and coal mining was a massive industry here. But as machines got more and more involved in that type of thing, fewer and fewer workers were required, because it's dangerous work.

Barry Kirby: So send a machine in. And it's also more efficient; it can work 24 hours, all that sort of stuff. So with some of the tools you highlighted [00:33:00] taking on more of the creative jobs, I think there's definitely a concern there. However, is it realistic? Because with some of the stuff that we use AI for now, say image generation, where we're saying we're taking it away from artists, you've gotta ask yourself the question: would I have done that in the first place?

Barry Kirby: So I'm using some image generation as part of maybe some PowerPoint presentation I'm doing. Would I have gone to an external artist, or would I have just cobbled together a simpler diagram? Turns out I would've cobbled together a simpler diagram, or not bothered to do it in the first place, or just not done it.

Barry Kirby: And also, historically, you would've gone out to a person. Before PowerPoint existed, you'd have gone out to somebody to create your 35mm slides to put through a projector. What we had then was just fewer presentations. A presentation was a really posh thing to have. Whereas now everybody creates a presentation at the drop of a hat.

Barry Kirby: It's really easy. And so AI is going to facilitate us doing new and better things, and doing things quickly, and things like that. But it's still just going to be a tool. It's not going to take over the world. We're gonna find new ways of working, [00:34:00] what we can do with it, and that type of thing going forward.

Barry Kirby: And this is why it's going to be a revolution, and I think more a revolution than an evolution, because we're gonna find new things that we didn't know we could do in the way that we can do them. The point you made earlier around prompt engineering already disappearing: you actually made the point really beautifully

Barry Kirby: in the last show, when you were creating that small bit of video using Sora 2. You went into ChatGPT first to create the prompt, to put it more optimally into Sora 2. So you were using one tool to generate the perfect prompt to go into the other tool, because the other tool was more immature.

Barry Kirby: And we've seen the same with coding. More people now have access to complex coding, because you can ask ChatGPT to do it for you, or you can ask the Microsoft 365 equivalent to do it for you, to put functions together and to put wider examples together.

Barry Kirby: So it's about how the job changes, rather than it being replaced completely. But some of them jobs might get changed very radically. Yeah. [00:35:00]

Nick Roome: Alright, I think we should probably move on. Barry, final thoughts on these seven new roles for AI?

Barry Kirby: I think we've quite conclusively said that these roles, in and of themselves, may not be the ones, but definitely, from a human factors perspective, we need to learn these functions.

Barry Kirby: We need to learn these tasks, and they need to become part of our bread-and-butter, day-to-day way of working as HF practitioners.

Nick Roome: Yeah, I agree. I think some are missing, but this is a good start. And I don't think they're all roles; I think they're hats that we wear.

Nick Roome: And like you said, we need to incorporate those into our day-to-day, and companies and businesses need to incorporate those into their planning and strategy. Okay, we're gonna take a quick break, and we'll be back after this to see what's going on in the human factors community.

Nick Roome: Oh, and thank you to our friends over at UX Collective for our news story this week. If you wanna follow along, we do post the links to all the original articles in our Discord, where you can [00:36:00] join us for more discussion on these stories and much more. We'll be right back.

Nick Roome: Yes, as always, a huge thank you to all of our patrons. You truly keep the lights on over here, and you keep the show going in the background. We use our Patreon funds to pay for a lot of the overhead that is required to host, to broadcast, to do all the things that a podcast might do. So thank you so much.

Nick Roome: This show is truly supported by listeners like you. We don't have any sponsors at this time, so really it's funded by a small group of people that support us on Patreon. If you'd like to as well, head to patreon.com/humanfactorscast; we'd appreciate your patronage. Alright, it's been a while since we've done this, Barry, but let's go ahead and get into it.

Nick Roome: It came from...

Nick Roome: Every time. That's embarrassing. I feel like with AI, I should probably make something better.

Barry Kirby: I like that. I think it's to the point.

Nick Roome: It's so basic. Alright. Okay, look, we didn't get to It Came From last time, because we were talking about all the news that we had missed [00:37:00] over the last year.

Nick Roome: Good news for you: we also missed a lot of It Came Froms over the last year, so we might be pulling from that stuff for quite some time. But this week we have a couple of interesting ones, and a new one from ResearchGate, which is interesting. So let's tackle the one from ResearchGate first.

Nick Roome: This one is from Gassen. The main question here is: how can generative AI reshape creative literacy and redefine the role of design educators in fostering truly augmented creativity? I thought this was a great one to complement our story tonight. There's more to this, but there are two main questions.

Nick Roome: One: how can design teachers foster transliteracy and critical use of AI as a creative partner rather than a substitute? And two: what new forms of co-authorship or pedagogical roles emerge in [00:38:00] this age of intelligent co-creation? Barry, what do you think?

Barry Kirby: It's a really cool question, because I don't think it's restricted to design education either. It's: how do we engage with and use generative AI throughout industry? But fundamentally, and rather depressingly, I've seen a lot of

Barry Kirby: It is, it, how do we engage and use generative ai throughout industry, but fundamentally, I've seen rather depressingly, I've seen a lot of. Education or a lot of educators complaining about the use of AI and saying, you know, as soon as they see any of their students using AI, that they will stop doing it, and things like that.

Barry Kirby: Whereas I've seen some more enlightened ones saying we need to show them how to use it responsibly, how to engage with it. So then, getting to the heart of this: how do we look at how we teach users and teach design, bringing in how we use the tool?

Barry Kirby: Firstly, we've gotta teach the teachers: what can it do? What are the pros? What are the strengths? So we're not using ChatGPT to do calculator work and things like that. [00:39:00] So how do we make that work? But then also, so they can work with the students to say: look, use these tools.

Barry Kirby: Use it together. On the co-authorship idea, my jury's still out. I've seen a few papers, I've seen different things saying, you know, the authors were Bill, Bob, and ChatGPT. I don't know whether I think that ChatGPT has done enough to be an author, given that it's not actually doing any thinking of its own.

Barry Kirby: Right? It's just sorting out information for you. 'Cause why aren't you also then crediting Microsoft Word, which did all the typesetting for you? Why aren't you attributing Microsoft in its entirety, or Apple, because you used their platform to write it on? I might be being slightly facetious, but I think this intelligent co-creation isn't necessarily a thing.

Barry Kirby: I think it's still just a tool. Later on down the line, when we get to AGI-type [00:40:00] stuff, we might be in a different place. But at the moment it's still a tool. Let's use it as a tool, let's teach people how to use the tool properly, and use it to encourage people to be more creative and to have a better creative output.

Barry Kirby: What do you think?

Nick Roome: Yeah. I agree that there are likely going to be shifts in the dynamics of how we use these tools for things, right? I've already found myself using these tools a lot for show prep. It's not that I don't have my own thoughts; I do. I use them as prompts to say: how can I think about this in different ways?

Nick Roome: And I often have it challenge not only the viewpoint of the articles, but my own viewpoint: what are some things that I myself might not consider? I do that as a way to bring more voices to the table. So yes, the way it will change is that we will use [00:41:00] them as tools to augment and offload our current work.

Nick Roome: And I think the interesting way in which we'll use them, and I think you were kind of getting at this, is that the margins where we use them change. You might not want to make a fancy graphic for a PowerPoint presentation, but that might be something that AI generates with just the input of the slide.

Nick Roome: And so, boom, it's there. Does this make for more engaging content over time? Maybe. But it's there, and it wasn't there before. So I think we're seeing these emerging trends of actually finding gaps: the things that we would normally not do, or traditionally didn't have time for but maybe wanted to do, are now becoming easier and easier when we can offload them to an AI tool.

Nick Roome: Now, in terms of education, and I think this is a big conversation in this space, I'm going to point y'all to Nate B. Jones. He has a Substack, and he's a great follow [00:42:00] if you like anything AI. He's on TikTok, he's on YouTube. He's a great AI leader when it comes to thinking about how we interact with AI.

Nick Roome: I can't recommend him enough. He has a recent article about how to talk about AI at Thanksgiving, and there's one thing specifically that he brings up about cheating in education and using these as tools in the classroom. He brings up some specific numbers here

Nick Roome: showing this is not imagined: 43% of college students have used AI tools, with 89% of them using them for homework. UK universities caught nearly 7,000 students cheating with AI in 2023 to 2024, which is triple the prior year. And in one test at the University of Reading, 94% of AI-written submissions went undetected.

Nick Roome: But here's some important context, right? Stanford runs long-running research on academic dishonesty, and 60 to 70% of high [00:43:00] school students reported cheating in some form before ChatGPT. So the tools are changing; the behavior doesn't. And I think acknowledging that these tools are real brings up the whole calculator argument.

Nick Roome: The calculator didn't necessarily change what they taught, but how they teach it. And I think we're probably going to see something similar here. We're still on the early end of this; it's only been a couple of years. We have to adapt, and we will. But I thought those were interesting points.

Nick Roome: Again, Nate B. Jones; he's a great guy to follow, and I echo those points. When we teach about these roles, it has to be baked into our education as well, right? The way that we talk and speak about AI: as with the advent of the calculator, the advent of the word processor, the advent of whatever tool, we're now learning those in the classroom, and we have to.

Nick Roome: All right. Any final thoughts?

Barry Kirby: Just one more. Perhaps we also need to rethink how we look at education. Education [00:44:00] at the moment is so baked around having some input and regurgitating it out onto paper, digital paper or otherwise. The problem now is that the LLMs are making that

Barry Kirby: easier to access, for that type of cheating, shall we say. It's actually about using the right tools for the right job. Maybe educators have been lazy and need to think about different ways of allowing students to prove that they've understood the assignment.

Barry Kirby: So, do better.

Nick Roome: All right. We had two more, but I think we'll skip those and put 'em in for next time, because we're kind of at the tail end of the show here. But I'm glad we're getting back to this It Came From thing. This was a really great question, and one that I thought went really well with tonight's discussion.

Nick Roome: Yes. All right, now time for a segment that requires no introduction: One More Thing. Just talk about one more thing. Barry, what you got?

Barry Kirby: I'm probably gonna repeat the same theme of my one more thing forevermore: we're back to pottery again. As I mentioned last week, I'm really big into this.

Barry Kirby: [00:45:00] Tomorrow is my first ever sale. Over the past six months I've accrued so much pottery that I've made that, quite frankly, people are getting quite sick of it. It's stacked up in places, all that sort of stuff. And so, there's a charitable Christmas fair going on, and I got asked if I would consider selling my stuff there.

Barry Kirby: And so I am. I thought it'd be dead easy, 'cause you know, you just chuck it out there and stick some prices on it. But I suddenly realized it's gonna be the first time, in a completely different field, that I'm gonna get pure judgment, because people are either gonna buy it or they're not. Or they're gonna find it quite easy to turn around and say, well, that's not very good, or it's very heavy, or it's very blah, blah, blah.

Barry Kirby: And there's gonna be a hard measure at the end of it: am I gonna sell anything? Because if I come back with all the bits that I went down with, and I don't sell a thing, I'm gonna be quite sad.

Barry Kirby: So tomorrow evening could be quite a sad time for me. We'll see how we go.

Nick Roome: So this event, is it just for pottery selling, or is it for a multitude of different types of items?

Barry Kirby: It's a multitude of different types of items. It's put on by our local scout group. [00:46:00] And there's lots of different people.

Barry Kirby: So it's an artisan fair, really. There's gonna be lots of different types of things there. I'm probably gonna be the only pottery person there, I think. So yeah, we'll see how that goes.

Nick Roome: Well, that's great. 'cause one, they have no point of comparison, and two, they're not professional judges, so I think you're likely to get some sort of leeway there.

Nick Roome: That I've also priced it really cheap, so

Barry Kirby: it's easy. Christmas presents easy, easy. Yes.

Nick Roome: Well, if you're in the neighborhood, go buy Barry's pottery,

Barry Kirby: because I need shelf room. Nick, what about you? What's your one more thing?

Nick Roome: I've been playing around, okay, the theme today is AI, right?

Nick Roome: So I've been playing around with this idea of... are you familiar with Claude skills?

Barry Kirby: No, I'm not.

Nick Roome: So the way that Claude skills work is you basically bake in instructions for an AI, right? So this would be like a prompt. You give it that prompt, and then you could basically give Claude a prompt that says, hey, [00:47:00] use this skill to do it.

Nick Roome: And then it will go to that skill, pull that prompt, and perform it. And you can chain things together. I'll give you context for this, and the use case that I'm trying to do, right? The way that I've been doing this is that I've been building a custom GPT and uploading it with text documents of prompts.

Nick Roome: For example, in the preparation of show notes, right? You can probably tell we massage the blurb after it comes back, but we have a blurb that comes back in our typical voice. We fed it a bunch of information and had it come back, so that it comes back in our style. I also have a way to generate titles based on our previous titles.

Nick Roome: And I also have a thing that will generate some talking points, right? And these are all different [00:48:00] prompts. I've also created one to look up a royalty-free image on Pexels that is representative of the thing. So these are all different prompts, and they're all in the assistance of creating show notes.

Nick Roome: Now what I've done is I've put them all into a GPT, and what I can do is feed in the article that we're talking about once, and it'll come back with everything. So I can say, give me a title, and it'll go through, read the title script, and then give me three options for the titles, depending on what the content is.

Nick Roome: And then it'll go through and write a blurb for it. And then it'll go through and do the analysis, and then do the Pexels one. They're all different prompts, whereas before, if you wanted to put 'em all in like a mega-prompt, that would be something insane.

Nick Roome: But now it's just calling the reference. It's kind of like RAG, retrieval-augmented generation. Basically it's calling these prompts and pulling them in, doing the thing, dumping it out, [00:49:00] pulling the next one in, doing the thing, dumping it out, and then putting it all together from that one input.

Nick Roome: And it's really cool. It's very cool if you have a lot of things that you use one input for. So if you have, I don't know, a picture that you wanna dissect in different ways: show me the emotional meaning of this picture, then show me the physical components of it, describe this picture.

Nick Roome: You could add those as different prompts, dump them into a GPT, and then have it do its own thing. Now, there are some drawbacks to this, but I just get super excited about it, because it's a new thing that I've been playing around with, and it's a new way to think about the problem solving that AI can do for you.
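[Editor's note: the pattern Nick describes, a set of small named prompts run one at a time over a single input, looks roughly like the sketch below. The skill names mirror his show-notes example; call_llm is a hypothetical stand-in for whichever chat API you use, and none of this is his actual GPT configuration.]

```python
# Each "skill" is just a named, reusable prompt. Nick keeps his as
# uploaded text documents; a dict stands in for that here.
SKILLS = {
    "titles": "Suggest three episode titles in the show's usual style.",
    "blurb": "Write a one-paragraph episode blurb in the show's voice.",
    "talking_points": "List five discussion points for the hosts.",
    "image_query": "Propose a royalty-free stock-photo search query.",
}

def call_llm(system_prompt: str, user_input: str) -> str:
    """Hypothetical stand-in for a chat API call (OpenAI, Anthropic,
    etc.); returns placeholder text so this runs without credentials."""
    return f"[model output for: {system_prompt[:40]}...]"

def run_show_notes_pipeline(article_text: str) -> dict:
    """Feed the article in once, then run each skill over it in turn,
    instead of maintaining one unmanageable mega-prompt."""
    return {name: call_llm(prompt, article_text)
            for name, prompt in SKILLS.items()}

for name, output in run_show_notes_pipeline("<pasted article>").items():
    print(f"--- {name} ---\n{output}\n")
```

The design point is the one he makes on air: each prompt stays small and swappable, and the orchestration, not a single mega-prompt, carries the workflow.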

Nick Roome: Okay, yeah, it's cool; I can show you, too. Alright, well, that's it for today, everyone. If you liked this episode and enjoyed some of the discussion about AI, I encourage you to go listen to any of our other episodes on AI; I promise the conversation will continue there. Comment wherever you're listening on what you think of the story this week. For more in-depth discussion, you can join us on our Discord [00:50:00] community.

Nick Roome: Yes. To stay up to date with all the latest human factors news, visit our official website, all that stuff. If you like what you hear and you wanna support the show, there's a couple things you can do. One, wherever you're at, you can leave us a five-star review; that helps. Two, you could tell your friends about us.

Nick Roome: That helps even more, 'cause they trust you and you trust us, right? Maybe, I don't know. And three, as I mentioned earlier, you can consider supporting us on Patreon. This show is truly supported by listeners like you, and if you wanna help support the show and help us grow, you can do that too. As always, links to all of our socials and our website are in the description of this episode.

Nick Roome: I thank Mr. Barry Kirby for being on the show today. Where can our listeners go and find you if they wanna talk about your pottery spoils?

Barry Kirby: Well, if you wanna find me about pottery, then you're going to go on Instagram, because I'm there, @mrbpkirby.com. If you wanna talk work stuff, then I'm now mostly around LinkedIn and Facebook.

Barry Kirby: And if you wanna hear some of the interviews that we've done up to the end of 2025, then [00:51:00] find me at 1202podcast.com.

Nick Roome: As for me, I've been your host, Nick Roome. You can find me on our Discord and across social media at Nick Roome. If you're watching live, stay tuned for the post-show. Thanks again for tuning in to Human Factors Cast.

Nick Roome: Until next time. It depends.