Human Factors Minute is now available to the public as of March 1st, 2023. Find out more information in our Announcement Post!
June 23, 2023

E286 - What if you could talk to NPCs?

On this week's episode, we dive into NVIDIA's generative AI that allows gamers to engage in conversation with NPCs, and our community's questions about their experience with HF and thoughts on voice agents and user interfaces. Learn about exciting advancements in technology and discover what our listeners are curious about.

#technology #AI #gaming #NPCs #HF #voiceagents #userinterfaces #podcast #discussion

Recorded live on June 22nd, 2023, hosted by Nick Roome with Barry Kirby.

Check out the latest from our sister podcast - 1202 The Human Factors Podcast - on Artificial Intelligence in Hospitals, an interview with Kate Preston:

https://www.1202podcast.com/kate-preston

News:

 

It Came From:

Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!

Vote Here

Follow us:

Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp
  • Neil Ganey 

Support us:

Human Factors Cast Socials:

Reference:

Feedback:

  • Have something you would like to share with us? (Feedback or news):

 

Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.

Transcript

[00:00:00] Nick Roome: Hello, everybody. Welcome back to another episode of Human Factors Cast. We're recording this episode live on June 22nd, 2023. This is episode 286. I'm your host, Nick Roome. I'm joined today by Mr. Barry Kirby. Hello, and it's good to be back after such a long break. It has been a long break and we are still figuring out how to come back to podcasting, so stick with us.

We've got a great show for you. We're gonna be talking about NVIDIA's generative AI, and even beyond that, some AI applications to NPCs, all that fun stuff that allows gamers to converse with NPCs. And we'll talk about some more of the applied human factors stuff. Later on, we'll be taking some questions from the community, including what you've done in human factors and your thoughts on voice agents and voice user interfaces.

But first we have some programming notes. Yes, like Barry said, we took a little unplanned summer hiatus, but we're back. We're back. And I usually do this at the very end of the show, but since we're back for the first time, if you'd like to leave us a review, tell your friends about us or support us on Patreon.

We'd appreciate any and all of those, so maybe consider doing that. But Barry, I have to know: what is the latest, what's been going on over at 1202?

[00:01:11] Barry Kirby: So at 1202, we were talking about artificial intelligence in hospitals with Kate Preston, friend of the show; she's been on this show with us before.

And she's doing her PhD around the use of artificial intelligence in hospitals. We had a really great discussion. I thought we'd get into the nitty gritty of the tech side of things, but actually, funnily enough, there was a load of human factors stuff that came up around organization, and some of the things that were inhibiting the use of AI had nothing to do with the technology itself.

So really fascinating. It's live now, and I'd recommend you go have a listen.

[00:01:45] Nick Roome: Thanks for that, Barry. And just one more thing here at the top: if you haven't been joining us for our pre- and post-show online, we do these live every Thursday. We have a pre-show and a post-show. Tonight's pre-show was especially great because we had one of our lab members, Alex, on with us, and they were bringing up a bunch of great points about the story that we're about to talk about.

And so you're missing like half the conversation if you're not sticking with us on our live shows. So come join us on all the platforms you can find us on, every Thursday at 2:00 PM Pacific Time. That's what, 10:00 PM UK time? Yes. Come do that. Anyway, for now, let's go ahead and get into this news story.

That's right. This is the part of the show all about human factors news. Barry, what is the story this week?

[00:02:31] Barry Kirby: So this week, NVIDIA's generative AI lets gamers converse with NPCs. NVIDIA has revealed Avatar Cloud Engine, or ACE, a new technology that allows gamers to have natural conversations with video game non-playable characters, NPCs, and receive appropriate responses.

ACE can run both in the cloud and locally, using large language models that can be tailored with character backstories and lore, whilst using guardrails to avoid inappropriate conversations. The technology also uses NVIDIA's Riva as a speech recognition and text-to-speech tool, and Omniverse Audio2Face to create facial animations that match any speech track.

NVIDIA partnered with Convai to build a demo called Kairos, starring a non-playable character in a dystopian ramen shop. During the demo, the player has a natural conversation with an NPC named Jin. The dialogue is not perfect, but the idea shows that players can just speak into their headsets and the NPC will reply appropriately.

Though NVIDIA did not announce any games that will use the full technology, S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis will employ Omniverse Audio2Face. Although the demo's visuals are more compelling than the AI dialogue, ACE is an exciting technology that has many implications for the human factors profession, including game design and development, psychology, and ergonomics.
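(Editor's note: the pipeline described here, speech recognition feeding a lore-primed and guardrailed language model, then text-to-speech and facial animation, can be sketched roughly as follows. Every name below is an illustrative stand-in, not NVIDIA's actual ACE, Riva, or Audio2Face API.)

```python
# Illustrative sketch of an ACE-style NPC dialogue loop. Each function is a
# stub standing in for a real component (speech recognition, LLM, TTS,
# facial animation); none of these names come from NVIDIA's actual SDKs.

BLOCKED_TOPICS = {"politics", "personal data"}  # hypothetical guardrail list


def speech_to_text(player_audio: str) -> str:
    # Stand-in for a speech recognition service; here "audio" is already text.
    return player_audio.strip().lower()


def within_guardrails(utterance: str) -> bool:
    # Crude keyword filter standing in for real conversational guardrails.
    return not any(topic in utterance for topic in BLOCKED_TOPICS)


def language_model(utterance: str, backstory: str) -> str:
    # Stand-in for a large language model primed with the character's lore.
    return f"[{backstory}] In response to '{utterance}': try the ramen special."


def npc_turn(player_audio: str, backstory: str) -> str:
    """One conversational turn: hear the player, check guardrails, reply."""
    utterance = speech_to_text(player_audio)
    if not within_guardrails(utterance):
        return "I'd rather not talk about that."  # deflect, stay in character
    reply = language_model(utterance, backstory)
    # A real pipeline would now synthesize speech and drive facial animation
    # from the audio track (the Audio2Face step); we just return the text.
    return reply


print(npc_turn("What's good here?", "Jin, ramen shop owner"))
```

The key structural point is that the guardrail check sits between recognition and generation, so an out-of-bounds prompt never reaches the language model at all.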

Nick, is this going to be the perfect thing so that when we decide to take two weeks off, it'll just take over everything and do a full podcast, NPC-style?

[00:03:58] Nick Roome: Maybe, if we instruct it to act as us. Look, I do want to comment on a couple things here. This reporting is by Engadget, and we are working off of two different articles tonight, although it only links to one of them in the show notes.

And I wanna just level-set here. So we have the NVIDIA news, and that's just a little snippet of some of the things that they're talking about. Engadget did post a follow-up article to this about generative AI bringing tomorrow's NPCs to life, and that article has an overview of where it's been, where it's at today, and where it's going.

And so I just wanted to contextualize: the stuff that we're gonna talk about today is through both lenses. We'll be talking about the tech, we'll be talking about potentially how this is done, and we'll also be talking about the applications not only within video games, but beyond video games. So with that context out of the way: this is cool.

I'm glad you all wanted to hear about this one. I felt earlier like I had nothing to talk about with this, because it's, yes, okay, fine, they're AI NPCs, makes sense, great, cool, move on. But that really just opened up a whole can of worms, as some of these often innocuous stories tend to do.

And to me, the human factors applications of this are immediately apparent when you look at anything involving something like training. It might make training much better, but it also might make it much worse in some instances, and it's interesting to explore those, and we will. But then there's also this risk of creating realistic but maybe misleading or harmful conversations within the context.

And given this idyllic scenario where these guardrails are put in place and everything's fine and you're not gonna go off those guardrails, then it's not an issue. But when you start implementing it, it's the same risk that you have with most generative AI built on large language models.

I am curious where your thoughts on this one are, Barry. 

[00:06:09] Barry Kirby: So for me, I think a lot of it is about how we scope it. What are NPCs actually for? They add some flavor, some extra filler almost, to games. Some of them have a more distinct role in making sure that you get the right prompts and things like that.

But it's very mechanical. They just say what they're meant to say within the game's boundary, and they fill the space. So to get a bit more realism into the way that they talk, into the way that they engage, that's cool. I think that's going to be quite decent.

But I guess for full transparency, I don't really play games. I've played them in the past, but not the sort of games that have NPCs and that kind of engagement. So I was almost a bit like what you said earlier, in terms of this being a tough one for me to talk about. But a lot of the work I have done is within the synthetic training space.

And I do think there is a lot of value here, because particularly if you're gonna do, say, some work around infantry training and things like that, you might interact with the local populace, and you might not do something that is entirely structured. Most games have a structure to the way that they're developed.

The game designers want you to go down a certain way and do a certain thing, whereas when you're using these things for synthetic training, it tends to be a bit broader. If you want to do something slightly more out of the box with NPCs, it would actually be a bit more interactive.

It means you could actually allow that to happen. Rather than it having to be a structured training session, if somebody decides to go and wander up over a hill and see if they could come round to a target in a different way, using this type of technology, you could do that.

You could allow them to do that without having to tap them on the shoulder and say, oh, very well, but could you just go back down the hill, because that's where we want you to go. So there are a lot of things there that I think could be useful. But I guess, to look at not quite negatives, but considerations: if you've got these NPCs doing a bit more stuff, what does that do for the gameplay planning and development and things like that?

Not being involved in game design and development, I don't really know, but it feels like it would almost make the job harder, or it's got the potential for making that job harder. And then look on more of a social side: I know that the younger generations now get criticized for doing less and less face-to-face human interaction,

because a lot more of it is happening online, and a lot of this is happening through games like this. When you're doing warfare games and that type of thing, you are working together as a team; there's a whole lot of people online with headsets and that type of thing. If the non-player characters are doing a really good job of pretending to be human, why do you need to wait for all your mates to be online to have a really good gaming experience?

Could all of your squad actually just be NPCs, and could they do that really well? My argument has always been that as long as you're communicating, whether it's online or face-to-face, it doesn't matter; we need to get over ourselves.

But this could actually ruin a lot of that human-to-human interaction, potentially. Anyway, that was a very long way round.

[00:09:16] Nick Roome: Yeah, not to get down a rabbit hole already, but I wanna comment on that, because what if we craft these interactions to be more enjoyable than human interactions?

What if, seriously? Yeah.

[00:09:30] Barry Kirby: In the grand scheme of things, if you've got somebody on your team who's maybe not playing the way that you want to play, maybe they've been a bit disruptive, maybe they've been slightly annoying or whatever it is, you end up having to row with them: why can't you play the game properly?

Why aren't you taking it seriously like the rest of us? Or maybe you've got two or three people in your squad who play just a different style to you, and you find that a bit irritating. Why do you need to put up with any of that? You could craft your own perfect team

that plays the style that you want. If you wanna be the hero, they're the ones providing you covering fire all the time while you go up front; they're basically looking after you. Or you might style somebody else as the hero type of character, or whatever. Yeah, I think there's lots that on the face of it could be seen as a really good thing, but there's also the potential for it to... we do sometimes stray into the negative when we're reviewing AI-type stuff. But it's so easy to do, because everyone gets bought into the hype of, oh, it's amazing.

It's going to do everything, it's gonna solve every problem, and then we're into human extinction.

[00:10:29] Nick Roome: So let me get some of my criticism out of the way, and then we can talk about the positives. One of my main criticisms here is that we're thinking, or at least the articles that we're referencing are thinking, primarily from the perspective of the developers.

And that's where we come in, right? We tend to think about the end user and the interactions that they have with these NPCs. And so I'm thinking that ultimately there has to be some UX person, or human factors person, at the end of this who is thinking about how a player character is going to approach these NPCs, what the entry point is, and all the potential points of entry.

And it might be that first-generation AI NPCs have some sort of pre-scripted thing, and that is where it starts, and then you can go anywhere within the conversation. But the goal of that NPC will be to guide you to do X quest or whatever it is. And so having some consideration about how the user gets from point A to point Z, with all those different conversation points in the middle, is gonna be an interesting problem to solve. And I just think that when we look at this tech through the lens of these articles, it's mainly focused on, oh, this is gonna make a developer's life easier, because then they can spend more time crafting the story and the narrative. But what if this doesn't work? What if this competes with the narrative in some way, or breaks the narrative?

Is it going to be a stronger experience because you can interact with this NPC in a variety of ways, or is it gonna be a weaker experience because it then goes against the narrative that is trying to be told by this story? So there are gonna be some interesting things, and maybe it's not for every NPC, just for some select ones.

So that's one of the main criticisms I have about looking at these articles through that lens. Then there are all the general AI-related things, and I'm gonna try to stay away from the doomsday stuff. We already did a whole episode on that; if you wanna go listen to it, go listen to it. Many of them, actually.

But I think this is really interesting for a couple different reasons. So, thinking about using technology to create this naturalistic dialogue between humans and digital agents; that's what I'm gonna call them, because we're not just talking about NPCs here, we're talking about digital agents.

When it comes to training exercises, or, I brought this up during the pre-show, chatbots for companies: if you're trying to troubleshoot a problem, you could have a digital agent that is operating off of some knowledge base. I think it's the same thing here.

It's the same concept, where ultimately what this comes down to is talking with AI in a digital space to accomplish some goal. In the form of a video game, that's entertainment, enjoyment. When it comes to using software and trying to troubleshoot that software and talking with a digital agent, then the goal is to fix your problem, whatever you're dealing with there.

But then there's also the whole application of healthcare, which Alex brought up earlier in the pre-show. There are all these different applications, but at the core of it, you're trying to prime this AI agent with a goal, and have a naturalistic conversation between yourself and this digital agent, to the point where it is more fluid than you might have in another setting. So in the case of a video game NPC, it might be stiff because you have this decision tree of how the NPC reacts to various inputs. The same thing might happen with a chatbot, right? You might have some sort of predetermined responses. But this then changes that interaction.

And so I'm excited about it. Like I said, we've done many episodes on AI and the dangers of it. I was alluding to it earlier, but what happens when you start having interactions with NPCs or digital agents that are more enjoyable than interacting with humans? Are we going to prefer that type of interaction?

And then, because of the nature of the technology that these things are built off of, which is large language models, which model how we as humans communicate with each other... which is why, if you talk to any of the large language models out there and you say please and thank you, you're more likely to get better responses, because that's what humans do.

Yeah, we say, would you please do this, and thank you, and you're likely to get better responses. And so what happens when we start crafting these surreal, more enjoyable experiences and we start to prefer those? Do these large language models then re-ingest that information and reinterpret it?

I'm getting off on some wild tangents here, but this is just where my mind is at, because I think, criticisms aside, this is really cool. It can make things really accessible to a lot of different people, especially those who encounter some of these, let's say, social anxieties around saying something stupid to somebody, or saying the wrong thing. I'm sure all of us have ruminated over something that we said in a conversation:

why'd you do that? Why did you do that? And with this, it just doesn't matter, because whatever it is, it's one little thing, and if it's really important to the story or whatever, I'm sure you could go into the history and say, okay, I never said that, delete it. And that's another interesting piece of all this that has just got my mind racing.

I've been going on, though. Barry, where are your thoughts with all this?

[00:16:14] Barry Kirby: this? So something you said earlier, which sparked a bit of a brain thing. So I'm gonna take us on a different tangent, just, okay, let's do it. This might go a little way or a long way, but I don't know. 

[00:16:25] Nick Roome: Get me off this ledge. 

[00:16:26] Barry Kirby: At the Ergonomics conference this year, we had a presentation that was talking about the differences between HF and UX.

Because we'd see on this show, and I normally say quite a lot, HF, UX, all part of the same thing, all part of the same family. But actually this presentation, which was given by Mark Ton and supported by Amanda Wooderson, highlighted a whole bunch of stuff that UX does that HF doesn't, and that HF does that UX doesn't.

It had the whole Venn diagram. And the reason I'm bringing this up is because you said earlier about HF and UX being applied to this type of thing, looking at things like edge cases and making it really engaging, going down the story and all this sort of stuff. And I was wondering: is this a really good example of where HF and UX actually do different things?

Because this is all driven to engage the user, engage the player at the end, and therefore make it delightful. I quite often talk about the difference between HF and UX being almost like a vase. At the top of the vase is UX, doing things to delight your customer, the user at the end of it, and HF is at the bottom, making sure it's done properly.

It's done to the right safety standards, the right standards generally; it's almost a bottom-up approach. But you will always end up with a product in the middle. I see, from the way you described it earlier, that UX would be brilliant here at making sure that the role of the NPC absolutely contributes to the delight of the player, the user at the end of it.

The human factors aspect of it is making sure that the edge cases are looked after, that we don't fall off into some sort of uncanny valley or some sort of dangerous territory. It's almost looking at the guardrails, or handrails, whatever they called them: making sure that they are there and they're appropriate, and that we can onboard and offboard people properly.

They are almost two very different aspects of the same thing. And I'd be interested in your thoughts: is that an example of where HF and UX are different but contribute to the same product?

[00:18:33] Nick Roome: I don't think so. From my perspective, the Venn diagram exists with this too.

I don't think the UXers would shun the guardrails at all. In fact, I think they would embrace those guardrails, but they would build upon the research of human factors practitioners. Exactly. Yes. So they are taking those findings and applying them in a real-world context. Secondary research is a huge thing that UXers do: they go out and do research on what exists today and the fundamentals of it. Or at least they should; at least that's my opinion of it. And so when they do that, they are looking at what guardrails human factors practitioners have come up with. And because they don't need to do the foundational research of setting up those guardrails, they can focus on things like, perhaps, and maybe this is what you're getting at, right?

Where they don't necessarily have to do that research, that's where the research comes in and that's where they apply it. And because it's applied, now they can focus on getting us over that second half, making it enjoyable and entertaining. And I think this would largely depend on the industry.

[00:19:43] Barry Kirby: I guess that's almost what I mean, taking this article in its base meaning, at the moment, where we're talking about gaming and all those bits. And I guess, while shunning my definition, you then went on and embraced it, so clearly I think I'm right. But no, there's definitely something there about making sure that the engagement is enjoyable and the right thing to do, but also making sure it's built on a foundation. Because you say that, yes, the HF practitioner would've built that foundation already,

but in something like this, we're not going to have done that yet. Who's built these guardrails at the moment? It's brand new. I would lay down a gauntlet and say that the guardrails haven't truly been thought through from an HF perspective. I bet they've put in some... I won't call them half-baked, but I bet they're not really thoroughly thought through yet. And that's what I would see as being a really good human factors role right there, for the UX people to then pick up and play with and run with. That would be, I think, a really cool, not delineation between the two, but showing where the two come together.

[00:20:55] Nick Roome: Yeah. And like I said, I think this largely depends on the industry, right? Because the role of a video game is to create enjoyment. And so that is gonna have a very different fundamental purpose than any other app or tool that is meant to do a task for you. And so I think it differs in this context, where that is maybe emphasized more significantly than in other contexts, where the Venn diagram is more of an overlap.

But you're right. Fine, yes, I will concede that point. Sorry, I wanna bring up some other stuff here. There's Ghostwriter, which is another technology that Ubisoft is using, and they're using this in some of their tech coming out here.

And it's basically the same thing, but for Ubisoft. I'm bringing this up because I want to comment on the ability of these things to learn. And this kind of gets at what I was saying earlier about making these enjoyable interactions, maybe perhaps more enjoyable than human interactions.

If these systems can learn what types of things the human is approaching these digital agents with, then they can learn how to better respond to them. For example, in a training scenario, I come up to you, an NPC, Barry, and you say to me, good day, sir, would you like to do A, B, or C?

And I say, I don't wanna do any of those things, I would like to do D. And then you have another piece of data that says, my goal when I come to you is to do D. And then you as an NPC can learn over time, so that you might start offering B, C, and D instead of A, B, and C, because no one ever says A. And so then you can start to react in how you convey that information.
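(Editor's note: the adaptive menu described here, dropping options nobody picks in favor of what players actually ask for, can be illustrated with a few lines of frequency tracking. This is purely a toy sketch, not how any shipping system works.)

```python
from collections import Counter


class AdaptiveNPC:
    """Offer the k dialogue options players actually choose most often."""

    def __init__(self, options, k=3):
        # Seed every scripted option with a count of zero.
        self.counts = Counter({opt: 0 for opt in options})
        self.k = k

    def offer(self):
        # Most-requested options first; ties keep their original order.
        return [opt for opt, _ in self.counts.most_common(self.k)]

    def record(self, choice):
        # Players can ask for something off-menu ("D"); start tracking it.
        self.counts[choice] += 1


npc = AdaptiveNPC(["A", "B", "C"])
for choice in ["D", "B", "D", "C", "D", "B"]:
    npc.record(choice)
print(npc.offer())  # "A" has dropped off the menu: nobody ever asks for it
```

The same counting idea scales up: a cloud-hosted version could aggregate choices across every player, which is exactly the feedback-to-the-manufacturer loop discussed below.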

And I think this is where it gets really interesting: if this technology is cloud-based, if it's based on large language models, you can update the technology behind the scenes, and these things get better over time. So I'm thinking, if you have a scenario where, let's say, you play a game in 2023 with this technology, with AI agents that you interact with, and they say one thing, and then you decide to replay the game in 2028, five years later...

Is it gonna be an entirely different game, because they're going to be reacting to you in different ways? Are the playthroughs gonna be different because they have a new data set? Is it going to be fundamentally a different game, or, because some of the main story beats are the same, is it gonna be the same game, just with different dialogue?

And will that change your perception of it over time? And how does that work for training purposes? How do you evaluate the training efficacy and efficiency of different data sets against each other if they're learning within themselves? Go ahead, I cut you off.

[00:24:03] Barry Kirby: So there would be, I guess from the gaming perspective, an interesting question of whether you allowed the game to evolve in the way that you describe, which, from a gamer's perspective.

From a user's perspective, that sounds great, because it means that the game would always be fresh, wouldn't it? It'll always be new, something to go and engage with all of the time. But then does that mean the games manufacturers are gonna lose revenue, because they're not gonna be able to send out versions 2, 3, 4, 5, 6, 7, all that sort of stuff?

So what they might end up doing is using the NPC feedback, say, I'm always getting asked for option D, and that gets fed back to the manufacturers. And that gives them an automated, effectively, backlog of what the next version of the game needs to look like and what it needs to have in it to be almost guaranteed success.

So it's evolutionary, but almost bringing the capitalist element back into it.

[00:25:03] Nick Roome: Yeah, it does make sense for a live service game to have that, right? The monthly subscription, those types of games. It's more interesting, I think, when you think about the games that are static, if they update it. The whole reason I bring this up is because the NVIDIA stuff can be done in the cloud, so that it doesn't do processing local to the machine. Can it do both? It can do both, yes. Okay. So if it's done local to the machine, then it could potentially learn over time, if you code that in.

But the cloud is gonna be where it's really interesting, because that's where it keeps getting fed data, keeps getting updated with the next language model, could switch language models entirely, or could use its own homebrew solution to it all. Which is just fascinating, it's all fascinating, because that would largely dictate things: games would have infinite replayability, and if it's a service game, then that capitalism comes in and people would just keep playing, just play again.

That's all interesting. When you look at some of this sophisticated technology, I can sense a desire to potentially do this for all NPCs. Is that gonna be the right approach? In a training scenario it might be, because then you have all these different elements that could play together.

Do other digital agents react to something that another digital agent has said? Is that information known? Is there a radius on that, so that an NPC over there doesn't know what this NPC said? Are they siloed? And that whole technology piece is really interesting to me, because then it mimics a real-life scenario a little bit more.

But then, talking about these crafted experiences, would it make more sense in a video game for nothing to be siloed, but to pretend to be siloed, so that the next NPC that you talk to isn't relaying the same information that the first one did? Yeah. So in a training scenario, it makes sense to talk to multiple people and get their perspectives on it.

I'm thinking about, say, training for an eyewitness of some tragedy, right? You go up and you talk to these people about what happened. You try to get the story, and this NPC says one thing, and then this other NPC says something similar but slightly different, but they don't know what the other NPC has said.

And so it's the person's job to interpret everything. I just think about the connected piece behind the scenes: the different types of instances where you would need them to be connected versus disconnected. And that would contribute largely to what the end user's goal is in those cases.
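(Editor's note: this connected-versus-siloed question is essentially a single design switch, whether NPCs share a common memory of what players have told any of them. A small sketch, with entirely invented names, makes that concrete.)

```python
# Toy model of shared vs. siloed NPC knowledge. All class and method names
# are invented for illustration; no real engine works exactly this way.

class KnowledgeHub:
    """Toggle whether NPCs share what players have told any of them."""

    def __init__(self, siloed: bool):
        self.siloed = siloed
        self.shared = []  # facts visible to every NPC when not siloed

    def npc(self, name: str) -> "NPC":
        return NPC(name, self)


class NPC:
    def __init__(self, name: str, hub: KnowledgeHub):
        self.name = name
        self.hub = hub
        self.private = []  # facts only this NPC heard directly

    def hear(self, fact: str) -> None:
        self.private.append(fact)
        if not self.hub.siloed:
            self.hub.shared.append(fact)  # broadcast to the whole hub

    def knows(self) -> list:
        return self.private if self.hub.siloed else list(self.hub.shared)


hub = KnowledgeHub(siloed=True)
a, b = hub.npc("witness_1"), hub.npc("witness_2")
a.hear("the car was red")
print(b.knows())  # prints [] since siloed NPCs don't relay each other's accounts
```

A "radius" variant would sit between the two extremes, sharing a fact only with NPCs within some distance of where it was said.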

[00:27:57] Barry Kirby: Yeah, I think there's a whole bunch of stuff around this that gets deeper and deeper. So if you want to create a new training scenario, for example, then you have the ability to have some basic NPCs with decent AI behavior; they can walk, talk, interact, and do those sorts of things.

That means you could fill up a training scenario really quickly, with everything from a crowded town center all the way through to a village or something like that. But then you could almost have specialties: do you focus your effort on having maybe 30 to a hundred standard NPC AI characters,

but then one or two specialist ones that are key, where you go in and spend a lot of effort developing them to be super good, because actually they could maybe lead some of the others as well? There's maybe some hierarchical capability that you could have at this point, which would get interesting in how they would organize and make that sort of thing work.

Would you also then use this technology elsewhere? There is now work going on in drones, where if you've got manned and unmanned aircraft flying alongside each other, you've got an unmanned aircraft beside you: could you use the same AI engagement for you to engage with the drone, for example?

So using the same NPC sort of technology, but treating it as going the other way, and rather than commanding it, having conversations with it about what it is you want to do, in the same way that you would with a real second plane, for example.

[00:29:32] Nick Roome: Yeah. One thing that we're kind of skirting here is decision making when it comes to chatbots, and that's a whole can of worms that we haven't opened yet.

I don't know if we want to open it this episode.

[00:29:45] Barry Kirby: That's probably quite deliberate, cuz that's huge. But in fact, by the time you get to decision making, it's not an NPC anymore, is it? It's not really a non-player character in the way that we're loosely defining it. Or is it? In the thing I just described around having hierarchies of NPCs, the specialist skills of the higher NPCs mean they actually have a limited amount of decision making in order to command and control other NPCs.

Okay. Yeah. This episode suddenly got twice as long. Yep. 

[00:30:23] Nick Roome: So I wanna bring up some social thoughts here. Alex has written into our show notes here several points that I wanna bring up. I wanna make sure we address them because they're cool points. So the first thing is around these sort of safeguards that we've been bringing up and what Alex writes, what would be the potential safeguards in place?

What would be prioritized? Abusing NPCs is not new or unique. Mods can be added to almost all systems; think Thomas the Tank Engine dragons in Skyrim. Does this give the player more agency to modify, or less? Would limitations be in place for how frequently a player can utilize voice-to-NPC options? I think that point alone is really interesting, about the modding community.

Could you put in a different large language model, and how does that go against the end user intent there? I think there's a lot of room for abuse. If you jailbreak an NPC to say things that the developers didn't intend, could you feed it a series of chats in a training scenario and get it to break?

That's...

[00:31:32] Barry Kirby: Oh, the fun you could have. Yeah. So Alex's point about safeguards is really interesting, because thinking on it a bit more: sometimes safeguards need to be done in a way that nudges you away from bad behavior. If it suddenly comes at you with big flashing lights and 'no, you shall not do this,' it's out of character.

It destroys your immersion, and therefore, whether you're doing training or playing a game, it destroys that immersive relationship you've built. So there's gotta be a fair bit of skill there in order for the NPC to recognize it's come up against, say, an edge case where it needs to redirect things, but for that redirection to be done in such a way that it is within the character of that NPC as it's been developed. And if each NPC has got a different type of character, then that's quite a lot of work.

But again, I guess as with anything we do in this domain, particularly in the human factors domain, we spend more time on the edge cases, because if you get them right, then everything else just flows really well. But the ability to get that right here is going to be very difficult.
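The in-character redirection Barry describes (recognizing an edge case and deflecting without breaking immersion) might be sketched like this. All character names, topics, and the keyword classifier are hypothetical stand-ins; a real system would use a moderation model rather than keyword matching.

```python
# Hypothetical topic labels; a production system would classify with a
# trained moderation model, not substring checks.
BLOCKED_TOPICS = {"real_world_violence", "developer_secrets"}

# Per-character deflection lines keep the refusal inside the fiction,
# instead of a fourth-wall-breaking error message.
IN_CHARACTER_DEFLECTIONS = {
    "blacksmith": "Hmph. Talk like that is bad for business. Need a sword or not?",
    "innkeeper": "Strange question, traveler. Ask me about rooms or ale.",
}

def classify_topic(player_line: str) -> str:
    """Toy stand-in for a moderation classifier."""
    if "cheat code" in player_line.lower():
        return "developer_secrets"
    return "ok"

def npc_reply(character: str, player_line: str, generate) -> str:
    """Route blocked topics to an in-character deflection; otherwise
    defer to the normal dialogue generator."""
    if classify_topic(player_line) in BLOCKED_TOPICS:
        return IN_CHARACTER_DEFLECTIONS[character]
    return generate(player_line)

reply = npc_reply("innkeeper", "Tell me the cheat codes", lambda p: "...")
print(reply)  # Strange question, traveler. Ask me about rooms or ale.
```

The per-character deflection table is where the "quite a lot of work" lands: every NPC needs edge-case responses written in its own voice.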

[00:32:45] Nick Roome: I think so. We're butting up against time here, but I do want to open some cans of worms that we can talk about in the post-show. I really wanna talk about the different domains here, because we've been focusing on gaming and training, but there are other areas this could be applied to.

So I mentioned the chatbot technology: customer service, that type of thing, which could handle some of the more complex customer interactions you might get in a case where there's a bunch of different variables. You might see something more like that. And you brought up the co-pilot situation with a drone.

When you have these simulations, we talked about it in the context of maybe an emergency response or a witness program, but I think there's another piece to this with flight and space simulations, where perhaps you're on a deep space mission.

You want to see how somebody might react in a certain way. In defense, similar to aerospace, you have a bunch of military simulations: enemy behaviors, all those different things. Education, too; AI in gaming could spill over into education. We had a whole episode about AI teachers, so see that conversation for more on that.

And we talked a little bit about healthcare before in the pre-show, so there are a lot of other applications we can look at. I just didn't want to limit us with that.

[00:34:11] Barry Kirby: That's fair. No, I think almost anywhere you can imagine a chatbot being involved, this NPC technology will clearly have an ability to enrich, I guess, each sector.

So having gone from the beginning of this, where we were like, is this gonna be a long and interesting topic? Clearly the usage of this could be quite good. There is a broad range of application if we think outside the box a bit more. But there is a lot of human factors and UX influence needed to make sure that it's safe, but also really effective.

Yeah.

[00:34:48] Nick Roome: Hey, did you just do a 'safe and effective' plug?

[00:34:51] Barry Kirby: Oh, I inadvertently did. That was clever.

[00:34:54] Nick Roome: That was clever. That was good. Check out Safe and Effective. I think that's gonna be it for this story here. We'll wrap up and talk a little bit more about it in the post-show. Thank you to our patrons this week, and everyone, for selecting our topic.

And thank you to our friends over at Engadget for the news story. If you wanna follow along, we post links to all the original articles in our weekly roundups on our blog. You can also join us on Discord for more discussion on these stories and much more. We're gonna take a quick break, and we'll be back to see what's going on in the human factors community right after this.

Are you tired of boring lectures and textbooks on human factors and UX? Grab your headphones and get ready for a wild ride with the Human Factors Minute podcast. Each minute is like a mini crash course, packed with valuable insights and information on various organizations, conferences, usability methods, theories, models, certifications, tools, and much more.

We'll take you on a journey through the fascinating world of human factors, from ancient history to the latest trends and developments. Listen in as we explore the field and discover new ways to enhance the user experience. From the think-aloud protocol to the critical incident technique, focus groups to iterative design, we'll make sure that you're the smartest person in the room.

Tune in on the 10th, the 20th, and the last day of every month for a new and interesting tidbit related to human factors. Don't miss out on the Human Factors Minute podcast, your ultimate source for all things human factors. Human Factors Cast brings you the best in human factors: news, interviews, conference coverage, and overall fun conversations in each and every episode we produce.

But we can't do it without you. The Human Factors Cast network is 100% listener supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we wanna ensure we're giving back to you for supporting us. Pledges start at just $1 per month and include rewards like access to our monthly Q&As with the hosts,

personalized professional reviews, and access to the full library of Human Factors Minute, a podcast where the hosts break down unique, obscure, and interesting human factors topics in just one minute. Patreon rewards are always evolving, so stop by patreon.com/humanfactorscast to see what support level may be right for you.

Thank you, and remember, it depends.

That's right. Huge thank you, as always, to our patrons. We especially wanna thank our Human Factors Cast All-Access patrons, Michelle Tripp and Neil Ganey. Patrons like you truly help the show keep going. And I guess we're gonna promote a merch store tonight, so we have that. Hey there, listeners, do you wanna be the coolest kid on the block?

Do you wanna show off your support for our amazing podcast while also sporting an outfit that would make your crush blush? Do I have news for you. Did you know we have a merch store? Yeah, that's right, we're that cool. If you wanna be part of the in crowd, head over to our merch store, where you can find some of the most stylish designs,

including our coveted 'it depends' shirts. I'm actually wearing a faded version of that now; I like it that much. Perfect for when people ask you questions you can't answer. We also have merchandise with our show logo on it, so you too can look just like a human factors celebrity podcaster.

If that's not enough, we've got plenty of other cool designs based on human factors culture. So why not support the show and look good doing it? Get yourself some Human Factors Cast merch today and show the world how hip and with it you are. Don't be left out in the cold. Be a part of the cool kids club.

Alright, there's that. I don't know. Are those a waste of time? I don't know. We have 'em.

[00:38:32] Barry Kirby: They do things for me. You have no idea. They're brilliant.

[00:38:35] Nick Roome: Okay, great. We have a merch store. Go check it out. Alright.

It came from.

Yeah. Switching gears to something that is less embarrassing than that, let's talk about It Came From. This is where we search all over the internet to bring you topics the community is talking about. If you find any of these answers useful, give us a like wherever you're watching or listening to help other people find this stuff.

Likes are dumb, but they work. Alright, let's talk about this first one here. This is by Luella on TikTok; yeah, we got a TikTok one tonight, check that out. Difference between HFE and human systems engineering: what is the difference between human factors engineering and human systems engineering?

Barry, what are your thoughts on this?

[00:39:16] Barry Kirby: For me? Potato, potato. I largely think that they're both roughly the same thing. To put it into context, I think it could almost be a UK-US thing to a certain extent, in that there'll be slightly different definitions, such as when we talk about HFI and HSI, human factors integration and human systems integration.

We're talking about the same thing. So I think they're largely the same, but I'd be interested in your thoughts, Nick. Do you think they're different?

[00:39:45] Nick Roome: I've heard 'em used interchangeably, and I think that's true for a lot of the terms that we use here. If you were to break apart some of the differences at the very specific levels, human factors engineering might be broader: it can encompass things like products, services, systems, et cetera, where human systems engineering might be more focused on systems. That is legitimately the only difference I can think of. When you start to break down those two terms, they're used pretty interchangeably.

And just a side note, when I saw your notes here, I read 'potato, potato.'

[00:40:27] Barry Kirby: That's the point.

[00:40:29] Nick Roome: Okay. Yep, that's the point. All right, let's get into this next one here: what do you do, or have you done, in human factors? This is by cloudkill37 from the human factors subreddit. As someone interested in human factors, what's a typical day for a human factors engineer?

And what are some common tasks or projects that they work on? How do they collaborate with other professionals in their organization? Barry, typical day?

[00:40:53] Barry Kirby: A typical day, what is that? I guess in large handfuls it's things like early human factors analysis; it's things like doing task analysis to find out what's going on,

user engagement, interface design type work. We don't tend to have typical days, and it's one of the reasons I love the job. Yes, in large handfuls you end up doing some of the same things, so I will do a task analysis on a variety of different jobs. Throwing a task analysis out there: for me, it's the basic building block of almost every project we do.

Because otherwise, how do you know how you're gonna change it? But that's not everybody's cup of tea; not everybody does it the same way. And then you'll generally go through some sort of design-y, development-y, research-y type phase, where you'll find out what it is you're actually meant to do and then design something to go and meet those needs.

And then you'll do some assessment on it, somewhat iteratively. And then at some point, hopefully, you might turn and give it to the customer, say 'there you go,' run away, and go and start the next project. But you'll probably be doing two or three different projects at once if you've got that sort of stuff on the go.

So yeah, throw in a bit of project management there just for fun, and bits and bobs like that. I don't have a typical day; it just doesn't exist. But in large handfuls, I guess that's kind of what I do. What about you, Nick?

[00:42:08] Nick Roome: Yeah, no day is the same. There are elements that are similar, and your overarching goal is to figure out how people do things and to solve some of the problems that they're experiencing.

I would say that embodies what we do, and you do that through various methods; the method that you used yesterday might not be the same method today. Some common things you get maybe every day: checking emails, having conversations, and making decisions.

And I would say those are the three common elements that you experience every day on the job. But it's going to be different depending on the phase that you are in, different depending on the project that you're working on, and different based on what your end users are experiencing.

So I think ultimately no day is the same. How do you collaborate with other professionals within the organization? Communicate.

[00:43:04] Barry Kirby: That's an interesting one, isn't it? Cause either you talk very nicely to them, or you start throwing things over walls and sending really tersely worded emails, depending on your relationship with them at any one time on any given project, and how helpful you're being towards each other.

Yeah. One of the cool things about being human factors practitioners in this is you will collaborate with other specialists, either HF professionals or others. Why? Because, we've said it before, HF does tend to be the glue that holds projects together. Therefore we do get the opportunity to go and talk to everyone; a lot of other disciplines will be more siloed than we are.

[00:43:39] Nick Roome: Yeah. It's funny, I used to work in a building full of human factors people; I'll tell you that story offline. All right, let's get into the last one here: thoughts on voice agents and voice user interfaces. This is by VE NT on the UX Research subreddit. What do you think about voice assistants like Siri and Alexa?

Do you use them often and ask creative questions? Is their output relevant, or just for fun? I'm working on a thesis project about how human behavior affects communication with voice assistants and need some perspectives on it. Personally, I find it harder to talk and think than type and think. So, Barry, I brought this one in for some reasons that will become apparent in just a minute, but they have ties to what we talked about during the show tonight.

So what are your thoughts on voice assistants?

[00:44:28] Barry Kirby: I think they're a nice novelty. From my own perspective, they're a nice novelty, but I don't use them anywhere near the extent that you could. I keep forgetting that they're there, short of switching the lights on and off, maybe setting alarms when you're cooking; we got them integrated into our doorbell system and stuff like that,

and for being able to play music through speakers and stuff. I don't engage with them in a creative manner, even though I've got some of the apps downloaded to do different bits, because I find them difficult: you don't know what you can and can't ask them, because you've got no structure around it.

It's basically the equivalent of just the blinking dot on a screen. You can ask them loads of stuff; some of it they'll know about, some of it they won't. And there's also less ease to explore that. When ChatGPT first landed, we spent time going, oh, what can this do?

You write a prompt out, then you go and copy it again, fiddle with it in a certain way, and deploy it again. But you can't really do that with voice in the same way. You want to be able to think about stuff and then make it happen, and I just don't find Siri and Alexa give you the ability to engage in the same way.

I just don't think it works for me. So I think it's got potential, but it's not something I want to play with, I don't think. Nick, what about you?

[00:45:58] Nick Roome: I used to use them all the time because of the novelty, and we made a big move a couple years ago, and in that move I packed everything away and haven't pulled it out since.

And I'm thinking, along the same lines as the conversations that we had tonight, there are some interesting pieces you can pull from this that would make these things infinitely more useful. What if, instead of having a wake word, it was listening to everything that you said? And yeah, sure, there are some privacy concerns, but really, everything's listening all the time.

So instead, here are some examples of how something might actually help. Imagine you wrote a prompt behind the scenes that says: listen to the user's words, then interpret them and rephrase them as commands to a voice assistant. You have that middle piece of technology there, so that if you were to say something along the lines of, 'man, I'm hungry,' the voice assistant comes in and is actually like an assistant and says, 'would you like me to order dinner for you?'

So you're not telling Siri or Alexa, 'order dinner'; you're saying, 'I'm hungry,' and it is interpreting the action you want from what you've said to suggest some potential options. I think this is where that technology is gonna be much better over time, when we start to have those interactions.

Imagine another scenario where something keeps happening and you say, 'I can't believe I keep missing this thing,' and the voice assistant comes in and says, 'got it, I'll set a reminder for you for this in the future so you don't forget it.' That would be extremely useful, right?

Like, how many times do you say something to yourself, or to your significant other, or to your children throughout the day, where you're like, 'I wish I had done this differently'? You could have that voice assistant coming in and patching that with actually truly useful things: 'oh, it's dark in here,' and it turns on the lights. That's really cool. Yes, that's the future. And that's why I brought this up: when you have that additional layer of large language models that can interpret what you're saying and then relay it as a command, that'll be cool.
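The middle layer Nick describes (interpreting overheard speech and rephrasing it as assistant commands) would be prompt-driven with an LLM; as a minimal, deterministic sketch of the same idea, here is a toy rule-based stand-in. All intents and phrasings are made up for illustration.

```python
# Toy stand-in for the LLM interpretation layer: map overheard remarks
# to (command, spoken response) pairs. All rules here are hypothetical.
INTENT_RULES = [
    ("hungry",       ("order_food",   "Would you like me to order dinner?")),
    ("keep missing", ("set_reminder", "Got it. Shall I set a reminder?")),
    ("dark in here", ("lights_on",    "Turning on the lights.")),
]

def interpret(utterance: str):
    """Return (command, response) for an actionable remark, or None.

    Returning None models the assistant staying silent rather than
    interrupting every sentence it overhears."""
    text = utterance.lower()
    for trigger, action in INTENT_RULES:
        if trigger in text:
            return action
    return None

print(interpret("Man, I'm hungry"))
print(interpret("I can't believe I keep missing this thing"))
print(interpret("Nice weather today"))
```

An LLM version would replace the keyword table with a prompt like the one Nick sketches, but the shape of the pipeline (free speech in, command plus confirmation out, silence by default) is the same.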

[00:48:12] Barry Kirby: It will. That'd be quite exciting.

[00:48:14] Nick Roome: That would be. All right, any other thoughts on that, Barry? Are we good for one more thing?

[00:48:19] Barry Kirby: No, I guess the last thing with that is it's also about having them in the spaces where they can listen, and how you differentiate between the places where you've got them and where you haven't.

Given what I do in my day job, there are a lot of places I can't have them. If I walk into my office and say, 'why is it so dark in here?', it's gonna remain dark, because I'm not allowed a speaker in there. You've always gotta remember where you can and can't engage with them.

[00:48:47] Nick Roome: Yeah, that would be an interesting problem. You become so reliant on voicing your thoughts that when they don't manifest into actions...

[00:48:56] Barry Kirby: it's dark in here. It's dark in here. 

[00:48:59] Nick Roome: It's really dark in here

[00:49:01] Barry Kirby: anyway. 

[00:49:01] Nick Roome: Yes. All right, let's get into this last part of the show. It's One More Thing; it needs no introduction. Barry, it's been three weeks. What has been going on with you?

[00:49:11] Barry Kirby: I've still got my neck and shoulder pain that I've been moaning about for months now. But one of the other reasons I've been away last week was I went to stay at a family bungalow, my grandparents' bungalow, for the week, and the idea was that I was gonna have a working holiday week: a bit of a change in environment, just to give me a different feel to what I was doing. So I'd work in the mornings and early afternoons and then go and spend time with the family in the late afternoons and evenings, that type of thing.

Get my work stuff done early doors while they're still asleep, then get some chill-out time in a different environment. It sort of worked, trying to do both at once, but not really. What I need to do is take two weeks off, probably in the next month or couple of months, and actually take some solid time out.

Not take the laptop, not take the notebooks, and just let the team get on with the stuff. As an experiment in working from somewhere else, it was quite nice, quite novel. It did allow me to do a bit more of a different sort of free thinking, so that was quite cool. But it certainly wasn't a holiday. It was more relaxed, I was more chilled, but it still wasn't a replacement for a good holiday.

It was more relaxed. I was more chilled. But it still wasn't a replacement for a good holiday. 

[00:50:26] Nick Roome: Yeah, I feel that. I've done that before too, where you work while you're trying to get away, and it just doesn't work. So, I have so many different things that I could talk about from the last couple weeks,

but I think I'm gonna talk about this very specific thing that happened. The algorithm somehow knows that I have a Dyson vacuum, it somehow knows that my Dyson vacuum had a dead battery, and it somehow knows that I have Milwaukee tools. This came together in a perfect confluence, because I saw a TikTok video of some guy saying, okay, you might have one of these Dyson vacuums,

and the battery may have died on it; they sell attachments for it so that you can put your tool batteries into it. And I did that, and it's brought life back into this very expensive handheld vacuum, which is good for what it is, that has been dead... oh God, it's been dead for a year and a half, and I've just been holding onto it.

The old battery drains in like three seconds. But now I can swap attachments onto a tool battery, charge it, and just replace the battery, so I can do larger vacuum jobs. It's something that I didn't know I needed, and then the algorithm found me, and it's made my life better in some ways.

So that's one thing. But then the algorithm just this week was nothing but submarine stuff. So it hits in two different ways, and it's just interesting how it works. That's just an observation; that's all I have to say about that. And that's it for today, everyone.

If you liked the conversation around AI, maybe I'll encourage you to go listen to episode 275, Ain't No Stopping Us Now, where we talk about the dangers of AI. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can always join us on our Discord community.

Visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like the show and you wanna support us in some way, shape, or form, there are a couple things you can do. One: wherever you're at right now, you can stop what you're doing and leave us a five-star review.

That helps us out a lot. Two: you can tell your friends about us; that helps the show grow a lot, you have no idea how much. And three: if you have the financial means and you wanna support us and keep the show going, you can consider supporting us on Patreon. Just a buck gets you in the door and access to a bunch of cool stuff.

As always, links to all of our socials and our website are in the description of this episode. Barry, where can our listeners go and find you if they wanna talk about NPC Barry? 

[00:53:01] Barry Kirby: Hopefully NPC Barry will be everywhere and be ubiquitous. But anyway, if you wanna go and talk to me about that, I'm on social media, particularly Twitter, or you can find me engaging with other human factors professionals and like-minded individuals for one-to-one interviews on 1202 - The Human Factors Podcast, which is at 1202podcast.com.

[00:53:19] Nick Roome: As for me, I've been your host, Nick Roome. You can find me on our Discord and across social media at Nick underscore Roome. Thanks again for tuning into Human Factors Cast. Until next time...

[00:53:29] Barry Kirby: It depends.

 


Barry Kirby

Managing Director

A human factors practitioner, based in Wales, UK. MD of K Sharp, Fellow of the CIEHF and a bit of a gadget geek.