Human Factors Minute is now available on Spotify: Check it out here!
Aug. 5, 2022

E253 - User Perspective on Categorizing Chatbots and Voice Assistants

This week on the show, we talk about how to categorize conversational interfaces. We also answer some questions from the community about human factors engineering being considered design, how to run concept testing that isn’t worthless, and we cover some insights from the UXPA Salary report.


Check out the latest from our sister podcast - 1202 The Human Factors Podcast -on Human Factors in Iarnród Éireann - An interview with Nora Balfe:

 

Check out our full coverage from this year’s Human Factors in Health Care Symposium:

 

News:

 

It Came From:

Let us know what you want to hear about next week!

Follow us:

Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp

Support us:

Human Factors Cast Socials:

Reference:

Feedback:

  • Have something you would like to share with us? (Feedback or news):

 

Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.

Transcript

 

Welcome to Human Factors Cast, your weekly podcast for human factors, psychology, and design.

 

 

Hi, everyone. It's episode 253. We're recording this live, online, August 4. How is it August already? 2022. This is Human Factors Cast. I'm your host, Nick Roome. I'm joined today by Mr. Barry Kirby. Hey there. Good evening. What an intro. We've got a great show for you tonight. We're going to be talking about how to categorize conversational interfaces. Riveting. We also answer some questions from the community about human factors engineering being considered design, how to run concept testing that isn't worthless, and we'll cover some insights from the UXPA salary report. But first, I've got a couple of programming notes for you all. I don't know if you all noticed, but last week we posted my interview with Joe Keebler, who is the chair of the International Symposium on Human Factors and Ergonomics in Health Care. In addition to that coverage, we also wrapped up a couple of extra things for you in a nice little bow. We did a Trend Alert on evaluating and improving complex medical device systems, as well as four themes that are emerging in human factors in healthcare from the symposium this year. I've also posted a roundup on our blog. All this stuff is available to you; it'll be in the show notes of this episode. Huge thank you to our Digital Media Lab for going out and covering the event. Huge thank you to Joe Keebler for sitting down with me and taking the time, and also thank you to HFES for providing access. Barry, you've been busy over at 1202. You want to talk about the latest over there? Yeah, absolutely. So over on 1202, we've been looking at the rail sector. I've had an interview with Nora Balfe, who is the human factors lead at Irish Rail, and Irish Rail has its own share of human factors challenges. And really, Nora gave

 

 

us a real insight into not only the breadth of what she gets up to, but really the depth. She talks a lot about the real nitpicky things that make up her day-to-day role, what she really deals with. So really interesting and really good; I encourage everybody to go and listen to that. And then we've got some really interesting topics coming up as well, to do with actually publishing in the HF domain, which is going to be fantastic, as well as a whole range of other things. So I've been busy, I've been doing some recording, and I've actually got about a month and a half ahead in terms of content, which is rare for me, because normally I'm a week-by-week type of guy. So, yeah, come on down and listen to what we've got. All right, thank you for that overview, Barry. Let's go ahead and get into the news.

 

 

That's right. This is the part of the show all about human factors news. Barry, what's our story this week? So our story this week is all about how we categorize conversational interfaces. The main drive of this article is to develop a conversational interface taxonomy from an interaction perspective. We've been having conversations with technology on an ever-increasing basis, from chatbots to smart speakers. But have you ever stopped to think about the different ways that conversation can be constructed, and how that defines your relationship with the technology you're using? As in every new industry, there's a lot of discussion in the conversation ecosystem about the right terms we should be using to talk about this work. That discussion will probably keep evolving for ages, about when to use assistant versus agent, and it will probably go back and forth with very little progress. However, establishing categories that help us differentiate interfaces is not that problematic. As part of their Introduction to Conversational Interface Design course, the author has documented the taxonomy they use to categorize and differentiate the different interfaces we find in the conversational world. This approach is based on the things that have an impact on the user-agent interaction, and the author also adds some examples to make sure that we're all on the same page. So Nick, what are your thoughts on how we converse? Are you an agent, or am I your assistant? We're both agents. We're both assistants. How about that? We'll see how the audio does tonight. Apparently there are some live audio issues. But for me, I think this story is interesting because it takes a different approach. It's looking at it from the user perspective, in terms of the way in which we are interacting with these digital agents. And we've done a lot of skirting around digital agents over the last couple of episodes.
There's been some talk about robots in old folks' homes, and we're going to bring up that story. It's a good tie-over. I think this gives us some good foundational ways to think about it. Barry, what are your thoughts? Yes, I absolutely agree. I think this is one of them stories we sort of have occasionally, isn't it? It's almost a back-to-basics thing. With the evolution of smart speakers, chatbots, all that type of thing, I think we now largely take for granted the sort of conversations, the interactions, that we have with them. And as they naturally progress to be more naturalistic, we sort of just assume that they understand everything we're talking about, whereas actually we do need to get back to basics a bit more. Having been playing around with chatbots and things for a while, you kind of forget that a lot of them are based on actually very simplistic decision trees and things like that. And with the implementation of AI, and moving more towards artificial general intelligence, this is going to be some fundamental stuff to help us work with it. But what I do like, and the way you described it as well, is that this feels like one of the first where the user has been put at the front of the conversation. It's actually been a user-focused development, which we're all fans of here in the human factors world. Yes, we all like it when the human is at the center of things. So I think a good way to go about this might be to talk about each category that the author lays out here, and then talk about the human factors application to that category. So maybe we can just trade off here. The first one is communicative freedom, and this is basically the capability for users to use their own words when interacting with an agent. Some examples of this might be natural language processing, like, I don't know, speaking to a digital agent when you say the wake word.
I don't want to say it and activate anybody's devices. But there are certain Amazon devices and Google devices in everybody's homes that, if you say certain key terms, wake up, and then you can ask them things and they will try to process the way you are asking through your natural language. There's also that approach from a text-based angle, where you're typing in "I want to do X," and it is parsing your sentence; that is natural language processing. The other half of that piece is choosing a response to a prompt that they give you, and this is more like: click a button. What would you like to do? The options are A, B, and C, and you press one of those options. It then populates the conversation with that option. It understands it because it is a direct input that is exactly what it was looking for, and therefore it can move forward with that conversation.
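The two input styles described here, free-form natural language versus picking from offered options, can be caricatured in a few lines of code. This is a deliberately toy sketch (the intents, keywords, and function names are all invented for illustration), not how any real assistant's language understanding works:

```python
# Two ways a chatbot can accept user input: free text matched against
# intents, or a constrained menu of offered options. Toy sketch only.

INTENTS = {
    "order_food": ["sandwich", "order", "delivery", "hungry"],
    "check_weather": ["weather", "rain", "forecast"],
}

def parse_free_text(utterance):
    """Crude 'natural language' matching: keyword overlap per intent."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & set(kws)) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None  # may misunderstand or fail

def parse_menu_choice(options, choice):
    """Button-style input: the reply is guaranteed to be one we offered."""
    return options[choice] if 0 <= choice < len(options) else None

print(parse_free_text("I want a sandwich delivered"))         # order_food
print(parse_menu_choice(["order_food", "check_weather"], 0))  # order_food
```

The contrast is the point: `parse_free_text` can fail or misfire on anything outside its keyword bank, while `parse_menu_choice` can only ever return something the system already knows how to handle, which is exactly the recognition-versus-recall trade-off discussed next.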

 

 

Just to jump in on that one. We naturally think we want to go down that natural language processing route because, even if you're using some sort of wake word to get its attention, we don't want to have to train ourselves on how we interact with this technology. And this falls into that human factors paradigm of training: if you have to teach people how to use this type of technology, then you're almost getting away from the effectiveness of the technology. So when we've talked about the different people who might use this, the elderly, for example, in the care domain, we want this sort of thing to be able to sit out there and for them to just talk with the device, which links back to one of the previous episodes we've done. We don't want them to have to worry about how they put their syntax together, structuring it in a certain way in order to get a result. So we want to go down that route, but there are elements where we actually want quite tight control over how that syntax is put together. If you're delivering commands in, say, a fighter jet, or something else that has voice controls, you don't necessarily want too much natural language processing. You want to deliver commands and have them carried out in a certain way, because you might say other things that trigger an accidental activation, like we nearly just did with saying the wake word. You don't want that going off at the wrong time, do you? Yeah, they must be intentional. And I'll add to that, too. There are benefits to having those prompts listed out for you, and I think the biggest thing for me is this recognition-versus-recall piece of Human Factors 101.
You have the recognition of the prompts that come through the written language or the button prompts, and then with natural language processing you have to recall the command in a lot of cases, unless you ask the computer to give you a list of commands so that you know what your options are. And when you're looking at the recall nature of it, you're also looking at the possibility of not knowing everything you're able to do from a user perspective, versus knowing exactly what you're able to do when the options are finite in the click-a-button perspective. Anything else on this one, Barry? Yeah, I guess just to highlight an example of where that is really prevalent in everybody's use at the moment. If you have one of these smart speakers and you put skills onto it, it's quite often you can forget what skills you put onto it, because it's not obvious; you don't have a prompt, particularly with some of the more fun skills you can get. So I know for a fact, on ours, you've got the ability to do the automatic self-destruct like you have on Star Trek ships. If you give it the right command and a certain code, it'll count down the 30 seconds, 20 seconds, and do a whole explosion thing, which is really cool, firing one photon torpedo after the other and all that sort of stuff, which is really great. But I keep forgetting about it, and unless you know it's there, you don't use it. That's a silly, frivolous example, but if you've got key functions that you forget you can do just because you're not prompted to do so, then that shows you do need a bit more alongside that. Yeah, take us through the next one here. So the next one really looks at the type of interaction that you have. How do you physically interact with the medium on which that's taking place?
We've already talked about smart speakers, which are largely voice activated, so you can have pure voice interfaces; text-based interfaces, so chatbots and things like that, where you're typing in the answers; and then you can get multimodal ones that use mixtures of both, and other bits as well. It's not only smart speakers that use voice interfaces now. It's very popular, when you ring, say, some sort of service supplier, that you'll have a bot at the end of the line asking you questions to try to get you to a resolution without actually having to go to a live person. Equally, on your text-based chat, you can either have quite an open chat, or a structured approach that tries to get you to some sort of resolution. Really, with a lot of these, what they're trying to do is avoid using live people to come up with a resolution, so let's see if they can get there with what is known by the system. But I can't think of an example where you use both voice and text at the same time, brilliantly, because I can't think of one. So there are, in fact, some of these in-home smart devices that use those wake words and have screens attached to them as well. A lot of the time, if you ask for something, it will respond with "What would you like to do?" and show the available options as potential prompts on the screen. You could still ask for something else, but it shows those prompts on the screen and you can interact with it in that area. Basically, what we're talking about here is redundancy of information, redundancy of communication, from the user perspective, because you are getting it in multiple modes. You're looking at the response and hearing it at the same time, and there's less ambiguity about what the computer said in that moment.
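That multimodal redundancy, the same response delivered as both speech and on-screen prompts, might be sketched like this. This is an illustrative structure only; the function and field names are invented, not any real smart-display API:

```python
# A response rendered redundantly on two channels: spoken audio and a
# screen with tappable prompt buttons. Illustrative sketch only.

def build_response(answer, follow_up_options):
    """Return the same content once per output modality the device has."""
    return {
        "speech": f"{answer} Would you like to do anything else?",
        "screen": {
            "text": answer,                # user can read what was said
            "buttons": follow_up_options,  # recognition, not recall
        },
    }

resp = build_response(
    "Your order is confirmed.",
    ["Track delivery", "Change order", "Cancel order"],
)
print(resp["speech"])
print(resp["screen"]["buttons"])
```

The design point is that both channels are generated from the one answer, so the user can cross-check what they heard against what they see, and the on-screen buttons turn the next turn into a recognition task rather than a recall task.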
That agent, that chatbot, whatever you want to call it, there's less ambiguity because you're able to read its response in addition to hearing it. When you have those paired together, there's much higher accuracy. And I don't know if you have readouts on HUDs in fighter jets, but that might be another application: if the computer is reciting something and that information is also available visually, that would be another example of multimodal. Really, what we're talking about here is just the ability to interact that way, and what that looks like is literally a button on the screen when those response options are presented. For me, this is one of the basic levels: how are you interacting with it? Because there could be other potential ways, not listed here, that you could interact with something. Take, for example, someone who uses ASL. Are there gesture-based approaches to this type of interaction? Maybe not a lot of them today, but some day it could be a reality. And there's going to be this whole human factors communication issue around the ambiguity of some of those signs and the intent behind them, because I know a lot of them can mean different things based on context: where they are located in relation to your body, to your face, all those things. Will we eventually get to the point where systems are complex enough to understand those? That is another type of interface, where you have these gesture-based responses. And I'd imagine, at least, that's reading the user's input; I don't know how the system would communicate back. It might just provide some text on a screen. There might also be a robot doing that. But then, what does that mean? Right, go ahead. You could have the robot doing it, or just an animated set of hands.
In theory, I don't think it should be that difficult to do, but it's how lifelike you could make it, so it looks natural to have that conversation. That could be fascinating to see, because where you normally have someone doing live translation of speeches and things like that,

 

 

that could be almost an automated service. In fact, thankfully nobody will be listening right now, so we could just quickly go away and patent it, get the design done, and nobody need worry about it. There you go. You're right. I do wonder, though, because a lot of the conversations you have now are very much transactional, and we'll get on to that in a bit, but how do you deal with different people's colloquialisms and things like that? Some people from different areas have such local words. How do you deal with the errors alongside that localization, dialect, that type of thing? It's going to need such a large knowledge bank to make that happen. But I guess, again, that's just the evolution of technology, isn't it? Yeah. I'm going to jump into this next one, unless you have any other points on the type of interaction. Go for it. So let's get into this next category here, which is domain of knowledge. This is really getting at what the assistant can or cannot do in relation to everything. To break this down further, you have agents with a specialty, and then you have ones that are more general. The difference can be illustrated like this: if you use a dedicated customer service app for a service, it's going to know finite information about that service. It's going to give you information that is relevant to that service. It's not going to have "What's the weather like in my part of the world today?" That is more of a generalist perspective. A generalist's scope can be unlimited, although sometimes the information returned is maybe of a lesser fidelity, let's say, than a specialist agent's might be. So you have a specialist who knows every single thing about a topic, say customer service for Human Factors
Cast: our customer service bot knows every single thing you need to know about the podcast, where to find all our blog posts, where to find all our episodes, what episodes we've done. But if you were to ask a general one, like you might have in your home right now, about AI or chatbots or something like that, it might pull something incorrectly. And so that's the difference. Barry, you want to jump into any of the human factors implications here? Yeah, so there are some interesting bits around understanding the scope of what a conversation is about. Like I said, the specialist piece gives you defined boundaries. You know what you're going to talk about; in theory, there's nothing that can be taken in the abstract sense. It should all be focused around that subject, which would work for, as you say, a whole host of applications. We're even talking about things like artificial mentors, AI mentors, and that type of thing; it's all going to be focused around that area. So that would work, but it won't work in every scenario. We've talked on previous episodes about using artificial intelligence to provide companionship to elderly people, and we've had our AI girlfriends, to provide that general level of chat and discussion, and the AI involved there will take a broad variety of input, be able to recognize the context, and even start conversations. But then, from a safety perspective, we've got to be able to define the scope of what the consequences are. Ideally, do you want a generalist discussion agent, or assistant, to be able to carry out safety-critical actions based on a general conversation? Or would you want it nice and tightly scoped around a specialist approach? I'd suggest the latter rather than the former.
But if you just want a bit of general conversation (what's the weather like, how am I doing, when's the bus coming, et cetera), that's more generalist, because the consequences are not so dramatic if it picks up on the wrong keywords. Yeah. Or other casual conversations like "Are you sentient? Do you have feelings?", hinting at another story that we did. So yes, you're right, there are applications for both. It really is an "it depends" situation, when you need a specialist versus a generalist. I think in a lot of cases we will know which one we're talking to based on the context in which we are operating. However, there are some circumstances where you might not be aware that you're talking to a specialist versus a generalist, and so there needs to be some sort of indication to the end user of what you are talking to. If you ask a specialist what the weather is like today, it'll say "Sorry, I don't understand that input," in so many words. That would be the context for you to say, okay, maybe it's not a generalist; maybe it knows more specific knowledge. And I think a lot of the time you do have that context. It's explained to you right up front: "Hi, my name is Clippy, I'm here to help you with Microsoft Word." Clippy's not going to know the weather; he's going to know what formatting things to help you with. So it's just a consideration. I think for the most part we're good on that; design kind of knows what it's doing in communicating to the user which one we're talking to. Just a consideration. Do you want to get into this next one, unless you have any other thoughts on Clippy? No, I do miss Clippy. Clippy was amazing, with a certain degree of sarcasm. So, to get down and dirty with the engineering of this to a certain extent: where is this assistant hosted?
So, the ownership of the platform. The best way to describe this is: when we're using the smart speaker that we won't mention, because we don't want to set it off, most of the data that you provide on a day-to-day basis is hosted by that company, within its servers, et cetera. Until you start using skills; then there are certain skills that are hosted or developed by third parties, and the data is then owned by that third party. From an interface perspective you might think, so what, who cares, why should I be bothered? But then you start looking at your privacy concerns. Everybody is concerned now about how their data is being used, potentially abused, but also how that data might be mined for in-depth knowledge about yourself and the type of people you live with, and how that can be used for people to make money. We do get concerned about that, and then you get into the whole privacy concern that you're sharing potentially quite sensitive personal data: spending, the conversations going on in the background, the questions you're asking it. And then, on the cybersecurity side, you're picking up the issues around whether your devices could be hacked and whether your information could be leaked, depending on where the smart devices are as well. So while it seems a very boring thing compared to some of the other things we've talked about, there is some specific human factors engineering there. We need to make users aware of where that data is being hosted and how it is being hosted, and make sure they're buying into the potential risks, because I think that's possibly a real weakness we have now, at the current stage of evolution. When you're using skills, you can't see the differences; it's not as obvious as when it's written down.
Generally, on the web, you can see one page take you to another page, and then it will bring up another modal. There is more of an onus to say: when you input this data, this data is going to be shared. So in the EU we have GDPR rules, and there is an onus there to tell you that this information is being shared with third parties, where this data is being stored, et cetera. But with the audio versions of this, it's not easy to just put the small print on there, is it? It's a single channel of communication; you can't just bury it, right? Well, with the audio version, how do you communicate that to people? I mean, you could do all that stuff at setup. You could do the GDPR sort of thing: "This system will provide access to X, Y, and Z. Do you agree?" Yes. And so you could do that at the beginning, but if there's somebody else in your home giving it commands who wasn't around at setup, like, I don't know, your wife, your child, your husband, your friend who comes over, there are a lot of scenarios in which they may not be aware of what's being collected on them, and for this to work responsibly, you can't necessarily do that. You'd have to do it for each person, and that's not practical; no one would use them if you had to do that every time. Yes, I agree. And so, to get around that, some of these companies have implemented standards, best practices, and reviews that skills go through in order to get onto their platforms. We're talking about the Amazon one in this case, and Google as well; they have standards that skills must meet beforehand. But you don't have some of that if you do a homebrew one. If you made your own skill and launched it on your own device at home, there's no review for that, because you're testing it in your own home, and so it's up to you.
It's up to you, the person, to communicate that information to whoever else might be using it, and that is where it gets a little tricky. Maybe you build a skill to collect information on the people you live with, or you build a skill that, I don't know; I built a skill that would search through our episodes at one point, but there are no real privacy concerns there. There is the understanding that you need to have that information communicated, and I don't know where else we go from here, other than: this is a huge cybersecurity issue, data is really important, so let's keep having that conversation and build into law ways to protect us from that type of abuse. I'm going to get into this next one here. This is the conversation initiative, which is really looking at who is playing the different roles when we're taking turns in a conversation. If you think about this, it's the reactive versus proactive type of assistant. You have an assistant that reacts to something you have said: you say the wake word, "I would like to have a sandwich delivered to my house," and it will say, "Okay, from where? What would you like on it?" It is reactive to your command. Then you have a proactive one, where it's giving you options ahead of time based on context, but you are not initiating that conversation. This is very much like when you visit a website and it says, "Hi, my name is whatever, I'm a bot here to help you out. Is there anything that we can help you out with?" That is one of those proactive situations, and there are very different instances in which you might want to use each of those. A proactive one would be a very good thing to have when you don't know that something is present. If you can't see that there is a device collecting your information, it might want to say, "Hey, did you know that you could do this?" So that might be one instance.
The reactive is also an interesting one: if you just spout something out and you didn't know something was listening, and it reacts, that's a very different situation. Barry, what are your thoughts on this conversation initiative? This, for me, almost gets to the heart of where this conversation technology is driving, because at the moment we are very set on wake words and then delivering commands. But actually, and we have spoken about this before, there are going to be environments where we want a bit of proactiveness, a bit of nudging, a bit of understanding about where we go. So in the healthcare domain, particularly, say, with elderly people, we talk about loneliness. At certain times of the day, we want the speaker, the system, to just pipe up and say, "Well, how are you? What are you doing? How are you feeling today? Have you taken your meds?" That sort of thing, helping provide prompts and an understanding of what's going on. And if we get it to the point where it's much better in a contextual sense: we now get doorbells with cameras in them, so it could actually give a really good warning that somebody is about to come to your door. It looks like a grocery delivery; you might want to start getting up now, because if you're infirm, it may take you a bit of time, or it could put a warning out that you are coming to the door. Starting to put some of these technologies together and making them really work, it's going to be quite interesting to see how this plays out. Because this idea about whether it is an assistant or an agent, something that is there to be told what to do or something collaborative, really gets to the heart of what the initial article was trying to get at.
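The reactive-versus-proactive split could be caricatured as two different trigger paths into the same system, one entered only when the user speaks, the other entered by context. A toy sketch, with entirely invented function names and context fields:

```python
# Reactive: the user initiates (wake word + command). Proactive: the
# system initiates based on context (a doorbell event, time of day...).
# Toy sketch; no real assistant works exactly like this.

def handle_user_command(command):
    """Reactive path: respond only because the user asked."""
    return f"OK, doing this: {command}"

def maybe_initiate(context):
    """Proactive path: decide whether the system should speak first."""
    if context.get("doorbell") == "grocery delivery":
        return "It looks like your groceries are here. You may want to get up now."
    if context.get("hour") == 12 and context.get("meds_taken") is False:
        return "Good afternoon! Have you taken your meds today?"
    return None  # stay quiet: speaking up at the wrong time is intrusive

print(handle_user_command("turn on the lights"))
print(maybe_initiate({"hour": 12, "meds_taken": False}))
```

The interesting human factors question lives in that final `return None`: the proactive path needs an explicit policy for staying silent, because an assistant that initiates at the wrong moments quickly becomes the accidental-activation problem discussed earlier.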
Is there anything else you've got to say around this, or should we jump into the next one? If this is the heart, then no, I don't have anything. Well, I do have a couple of points that can be tied into some of the other stuff we're going to be talking about, and maybe we skip over a lot of the detail of the next couple just for time. But the one point I'll make on conversation initiative: you mentioned the healthcare application, the loneliness, suggesting people get care when perhaps they need it. I'm not going to rehash that whole argument; go listen to that episode if you like. But that is one application we can think about, and it applies as it relates to goals as well, which is the next category: the focus of the conversation. When you think about goals, you're really looking at transactional experiences or relational experiences. An example of transactional might be: the goal is to get something done. Let's get you concert tickets, let's get you a sandwich delivered to your house, let's get you X, Y, and Z. Versus a relational goal, which is very much that agent side of things, where you're looking at those AI girlfriends. They actually bring this up in the article: Replika is the software that story was about, and that is the relational interface, the relational goal. With that, the conversation itself is the goal. That really gets at the last point as well. But any points on goals before we get into this last one? No, I think you've wrapped that up really nicely. This last point is very similar; it's around the depth of conversation that you're expecting to have.
Are you expecting to have something deep and meaningful, or is it just simple, adding a bit of direction? They classify a single turn as: you say "switch the lights on"; okay, lights go on; brilliant, job done. Or are you having to take multiple steps to derive enough information in order for something to happen? To use the example you mentioned earlier: I want to order the shopping. What do you want on your shopping list? I want this on the shopping list. When do you want it delivered? I want it delivered then. Okay, great, it's ordered, thank you very much. So you're going through that multi-step approach. I think a lot of these things now will have an element of multi-turn, particularly going towards that conversational approach, but it's still an interesting thing to work out for the engineering of it, really. Yeah. I will say, with that multi-turn, there are going to be some human factors issues around remembering what you've communicated to the system, and what the system knows about you, in order to complete that task or goal, whatever it is. So that's something we might have to communicate: a reminder, "You've already told me X, Y, and Z." That's another consideration. There's a bunch more in our notes, Barry, but once again, we've done the thing where we thought we wouldn't have much to talk about, and here we are at almost time. Any other closing thoughts on this before we get out of here? So I guess the thing I was going to say, and I'm very boring, but what about things like failure cases? How is it going to be communicated when it's gone out of scope for the conversation that you're expecting to have? More likely in a generalist sense, where the system thinks it's in one part of a decision tree and you're talking about something completely different. How's it going to deal with that?
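The multi-turn shopping exchange described above, where the system keeps asking until it has everything it needs and remembers what you've already said, is essentially what dialogue designers call slot filling. A minimal sketch with invented slot names, not any real framework's API:

```python
# Multi-turn slot filling: keep prompting until every slot the task
# needs is filled, remembering earlier answers. Minimal sketch only.

REQUIRED_SLOTS = ["item", "delivery_time"]
PROMPTS = {
    "item": "What would you like on your shopping list?",
    "delivery_time": "When do you want it delivered?",
}

def next_turn(state):
    """Return the next prompt, or a confirmation once all slots are filled."""
    for slot in REQUIRED_SLOTS:
        if slot not in state:
            return PROMPTS[slot]
    return f"Okay, great: {state['item']}, arriving {state['delivery_time']}. It's ordered."

state = {}
print(next_turn(state))                      # asks for the item
state["item"] = "a sandwich"
print(next_turn(state))                      # asks for a delivery time, not the item again
state["delivery_time"] = "tomorrow at noon"
print(next_turn(state))                      # confirms; nothing is re-asked
```

A single-turn command ("switch the lights on") is just the degenerate case with no required slots, and the human factors point in the discussion maps onto `state`: because the system carries it between turns, it never re-asks for what the user already said, but it should also be able to play that state back as a reminder.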
But I think we do cover that on some other episodes, so go and listen to some of them and hear some of the thoughts on there. Nick, what about yourself? You got any final things? I do. There's one more point that I want to bring up that is a key concern when you think about these text-based chatbots, and that's: how do you make sure that someone knows they're talking to a chatbot and not a human? As these systems get more advanced, there's going to be more natural language used in them, and they're going to be able to understand natural language a lot more efficiently than they do now. And so there's going to be a need to communicate to the person interacting with it that you are not, indeed, talking to customer service; you're talking to a chatbot. And so that's just one last point that we'll bring up. There's a bunch of others in here that we can talk about in the post show, but we'll get to that when we get to that. Huge thank you, as always, to our patrons for selecting our topic, and huge thank you to our friends over at UX Collective for our news story this week. If you want to follow along, we do post the links to the original articles on our weekly roundups and our blog. You can also join us on Discord for more discussion about these stories and much more. We're going to take a quick break, and we'll be back to see what's going on in the Human Factors community right after this. Human Factors Cast brings you the best in Human Factors news, interviews, conference coverage, and overall fun conversations in each and every episode we produce. But we can't do it without you. The Human Factors Cast network is 100% listener-supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we want to ensure we're giving back to you for supporting us.
Pledges start at just $1 per month and include rewards like access to our weekly Q&As with the hosts, personalized professional reviews, and Human Factors Minute, a Patreon-only weekly podcast where the hosts break down unique, obscure, and interesting Human Factors topics in just 1 minute. Patreon rewards are always evolving, so stop by Patreon.com/humanfactorscast to see what support level may be right for you. Thank you. And remember, it depends. Yes. Huge thank you, as always, to our patrons. We especially want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. We have a couple of extra notes here. Did you know that we have a merch store? This is written by our treasurer. Did you know that we have a merch store? Some neat designs over there that include It Depends shirts and the show logo on hoodies, like the one I'm not wearing tonight. It's hot, it's summer. Why would you write that? Other cool designs, too. There's merch based on human factors culture, like, I'm going to Human Factors the beep out of this. If you want to support the show and look good doing it, we do have a merch store, and you can always do things over there. Our Pride logos are on there too, which is actually really cool. And forever, all proceeds from those will go towards the Trevor Project. So let's get into this last part of the show that we like to call It Came From.

 

 

Let's switch gears and get to It Came From. This is the part of the show where we search all over the Internet to bring you topics that the community is talking about. Anything is fair game. If you find any of these answers useful, give us a like wherever you're watching or listening to help other people find this stuff. We got three tonight. First one up here is by Emma Anders. Emma Andrea? I'm so sorry, butchered that. They say: is Human Factors engineering design? I currently work in UX and love the design aspect, but I'm interested in getting away from purely digital experiences. Is Human Factors Engineering about just evaluation and research, or do Human Factors engineers design as well? P.S. I'm still early on in my UX career. Forgive me for any lack of understanding. Barry, in your opinion, is Human Factors Engineering design, or do you do design in Human Factors engineering? Oh, we do it all. So, yes, the Human Factors Engineering side is very much the physical component as much as the digital component. It isn't all just UI and fancy groupings of nice bits of text. It's about the hardware. Where is it? Can you reach it? Is it too heavy? Is it too light? Is it big enough? Does it have the right hand grips? Is it a two-person lift? Is it situated in the right place for the anthropometrics of the user? That's easy for everyone else to say. Yeah, you get involved in literally everything that you possibly want to. And then it's not just the physical aspects. Product management is a thing that you get involved in as well, because it's how you schedule, how you engage with project teams. It's how you encourage software engineers, hardware engineers, and every other type of engineer that the users of our thing should be considered in all aspects of what they're doing. So human factors engineering is the coolest place to be. None of this UX nonsense. Did I sell it well enough? Was that good? You sold it. You sold it, Barry.
Yes, it is. However, I can see where there's sort of a disconnect in some places. They will separate the research from the design, and that's okay; that's how those companies operate. I will say, in those instances, you're still at least providing recommendations that are based in the research, and those would ultimately lead to decisions in design. So even if you're not sitting there pushing pixels, you are still providing recommendations that ultimately will make it into the design. That is, I think, where this separation is coming from. But, yes, there's a whole subfield. In my mind, ergonomics and Human Factors live at the same level. It's all the same thing; you're just doing it in different domains. Ergonomics is all about the physical aspect of it. You're right, you could do user research, but then you also need to provide those recommendations for the physical aspect of things as well. Barry just listed off a bunch of them: hand grips, one-person lift, two-person lift. I'm not going to go through that list again. But you see where even these recommendations are going to provide some baseline for how a thing is going to be used. And I like to think about human factors engineers as jack of all, master of none, but what's the full saying? I forget. Anyway, you are using a multitude of skills to get to some outcome that ultimately is in the user's best interest. That is sort of the baseline. And if that means that you can communicate your thoughts and your recommendations through a design, then go for it. I think it's all part of the same thing. I don't know. Did I sell it? I think we just raised a really good point that we should maybe pick up on the post show or something like that. A lot of people think now, because of the evolution of digital interfaces, etc.,
that HF and UX are all just about websites and apps, things like phone apps. And it's not; it's about everything, not just that. And we should maybe get into that. Yeah. I will just say the full quote: a jack of all trades is a master of none, but oftentimes better than a master of one. Okay, that's it. Let's get into this next one here. This one is by choicead9680 on the UX Research subreddit. They say: how to run concept testing that isn't worthless. Hey, all. For a while now, I've been running frequent concept tests for my company. Our goal is usually to evaluate the desirability of a concept. A session tends to look like this: brief interview to gather information about the problem space, followed by me showing sketches or storyboards, narrating the concept, sometimes even prototypes. Then, after they're acquainted with the concept, I ask them things like, what are your first impressions? What do you think of this? Rinse and repeat with a couple of concepts until the next session. However, I'm painfully aware that opinions are not strong evidence. The participant saying I love this isn't necessarily indicative of what they would actually like to use, buy, or recommend. Plus, we know that participants usually struggle to be completely honest if they believe their answers might hurt the rapport with the researcher. So any advice or resources you could point me to? How do you run concept tests?

 

 

So for the type of engagement they're describing, actually, yes, if you read the literature, that type of approach will be panned, and it is regularly. But for the type of goal that they've outlined, what they're doing is actually what I would probably do. You're asking general questions, so what are you going to get? Some general answers. If you're just wanting that warm fuzzy feel, or call it the aura of the product, you ask some general questions and you get some stuff back. What you're missing is the fact that you don't know what it is you're asking and what it is you're trying to pin down. The main reason to conduct engagement testing and things like that is that you either want affirmation or confirmation that what you're doing is the right way. So you should have an idea about the type of elements that you're concerned about. Home in on them. Make sure you get some answers about them. Ask more pointed questions, but spend some time making sure the questions are the right ones. On the other end of the scale, rather than just asking generic questions, get some measures involved. How are you measuring it? How are you measuring success? You might want to go down some mental workload testing, depending on the type of thing that you're trying to do. But again, if you're using, say, mental workload testing, that means you've got a concern or an issue, or you want to confirm that the amount of mental workload isn't becoming too much for the user. You need to focus on what you're doing. You need to know why you're asking the question in the first place, and then you'll be able to get better answers. What do you think, Nick? Yeah, beyond what you said, I think the questions that you're asking are, to me, the baseline questions that you should have asked before you came here.
You should know what they will think of this, because you've analyzed their workflow and have identified the gaps in that workflow, and therefore this concept that you are presenting should patch some gap or fix some workflow that needs work. That's how I view it, at least, when I look at concept testing. You could also think of it as an entirely new sector of a domain, something that you generally don't have any knowledge of. And if that's the case, then you shouldn't be bringing concepts, the prototypes, sketches, and storyboards they mentioned; you shouldn't be bringing any of that. It should be more general in nature, to figure out if this is even something that you should move forward with into sketches or storyboards to begin with. And so to me, yes, it's more broad in nature when you're looking at a new sector. But if you have done some preliminary research, like that broad investigation, you should know generally what they will think of it, and then you're more looking at a usability test or something.

 

 

The other point I'll make is that, yes, Barry, you mentioned that there are measurements involved a lot of the time. You should bring some measurements. I think that is a good way to get some objective feedback. Send it out to a lot of people, see what people are thinking in terms of where to go in a sector. There's a bunch of different ways to go about it, and I unfortunately don't think that this is the right way. I think you get the workflow first before you start asking for feedback on concepts, because ideally it should patch that workflow. Any other closing thoughts? I feel like I was a little harsh there. Yes and no. I mean, it is tempting to go down the concept phase early and think that you need to get really deep into it. But as you say, you should be presenting more than one concept. I do this a lot if I'm in a new domain, if I'm uncomfortable, because sometimes just going in and doing a discovery where you say, tell me about everything, generally doesn't work, in my experience. I do a discovery phase, and then I'll come back with some concepts. Like I said, to patch some holes. I know the concept is going to patch some holes, because I'm, quite frankly, good at what I do. But what I'm trying to look at is to say, right, this is the way I patch that hole, but how does that patch fit in the wider ecosystem? What problems does that now throw up? And I use it as a tool for me to better understand the domain. So it's a tool. I almost don't care about the concept. I know my concepts are good. They might not be appropriate given this next step of conversation.
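For readers wondering what "get some measures involved" can look like in practice, one common subjective workload measure Barry alludes to is NASA-TLX. In its unweighted "Raw TLX" variant, participants rate six standard subscales from 0 to 100 and the score is simply their average. The subscale names below are the standard NASA-TLX ones; the code itself is just an illustrative sketch, not any official tool:

```python
# Raw (unweighted) NASA-TLX: average of six 0-100 subscale ratings.
# A quick subjective workload measure usable in concept or usability sessions.

TLX_SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings):
    """Compute the Raw TLX score (0-100) from a dict of subscale ratings."""
    missing = [s for s in TLX_SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

ratings = {
    "mental_demand": 70, "physical_demand": 10, "temporal_demand": 55,
    "performance": 30, "effort": 60, "frustration": 45,
}
print(raw_tlx(ratings))  # 45.0
```

Comparing Raw TLX scores across concepts, or against a benchmark task, gives the kind of measurable signal that "what are your first impressions?" cannot.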

 

 

Even though I sort of said you can use measures if you want to get something more specific, I still wouldn't be using heavy measures at a concept phase as such. I'd still use them as exploration. But anyway. Okay, the big one. This one is a question that we get often, I would say, behind the scenes, but we don't necessarily answer it enough on the show, I feel. Somebody has posted a TLDR on the user experience subreddit for the UXPA salary insights. This is by WGX zero. So again, this is looking at UXPA salary insights. They say: I thought I'd pull out some interesting details from the PDF so you don't have to. Let's just go through these one by one. Median salary of everybody, all respondents: $109,000. This is all in US dollars; maybe Barry can do the conversion. We will talk about UK numbers here in a second. Median salary for US only is $127,000. When you're looking at the median salary for the UK, we're looking at $78,000 US, or £64,000. When you are looking at entry-level roles, we're looking at about $75,000. When you're looking at mid-level roles, you're looking at about $128,000. Senior-level roles, about $140,000. The bachelor's degree versus master's degree difference in median salary is about $5,000. Okay, there's a lot of numbers. Like I said, we'll have the link to this TLDR in the show notes, so you can go and check out the original post for the exact numbers. Hopefully, if you were listening, you were picking up on where you fit in all that and seeing where you're at. And I think this is really important to communicate, because if you know what other people are getting paid and you know what your worth is, then you are more likely to request that at your next venture, or even at your current venture, to help equalize things, especially for folks who don't get paid as much as the white male dollar. We'll just say that. Barry, what are your thoughts on this? Is this fairly accurate from the UK perspective?
Are you moving to the US anytime soon? Well, if I could move out for just the salary but come back to the UK for my healthcare and the things that we get that are quite nice, I could live with that. It's interesting, because obviously you said that. I have no idea about the US; that's just sort of play money for me, it's dollar money. But the median salary for the UK: it's interesting which average, which different types of average, you use, because I've just done a quick search around average salaries, and it depends, I guess, to a certain extent, on where human factors starts and ends. I pulled up one figure, for example, that the average for human factors engineers and ergonomists in the UK is £85,000, but the range it says here goes from £27,000 all the way to £156,000, which is fine. So that's one view. But then another view is that the average salary for a human factors specialist is just under £45,000. So there are clearly lots of numbers being bandied around. I think the problem with this in the UK is that the number of human factors practitioners who class themselves as human factors practitioners is relatively small. We are a small pool of excellent people, whereas you compare that to other types of engineers with much bigger pools. There's also the question of whether you might still consider yourself a human factors practitioner all the way through your career. If you stay in that human factors domain and you go all the way up and stay within that technical craft, then you finish as a senior manager or whatever and stay in that. But what is quite common is for a lot of people to be human factors trained but then skew off into another domain. So they're not necessarily doing human factors work; they've got a different title. Technically, they're not human factors practitioners anymore, but they're just really good.
They've transferred those skills really well, because they do transfer really well, so maybe that affects it as well. So I have a problem with the median salary figure. What I think are really good figures are the entry-level, mid-level, and senior-level ones, because they're useful numbers. When you're going in and trying to pitch for a job, at least your expectation is reasonable. I've got no problem with people coming in and saying, I want a job and I want 140K, even though I've only just graduated. With these numbers out there, at least you can try that. I'll say no, but at least you've given it a shot. What is possibly worse is when you've got some really good skills and you come in and say, well, actually, I want a salary of 78K, but you're probably worth 140 at that level. It's about getting it right. So we do need to talk about it more. Talking about salary is treated as a dirty discussion, and it shouldn't be taboo. Yeah. I will say the other piece of this puzzle that is sort of muddied, I would say, is that you have User Researcher being 61% of the respondents, and then below that you have User Experience Designer at 45%, which means those numbers are already not adding up. So I don't know.

 

 

Oh, they could select multiple titles. Okay. User researcher, user experience designer, interaction designer. And so I think when you even look at the difference between research and design, there's quite a difference in some cases. Anyway, we should move on, because we've got One More Thing. Barry, this needs no introduction. What is your One More Thing this week? So my One More Thing is, I was on holiday last week. We were on holiday last week, and it was great. But a weird thing happened for me on our first full day of camping. Normally, if I have a nap during the day, I can't sleep at night. It's like a law: if I fall asleep on the sofa, that's my night's sleep ruined. On the Wednesday we went camping, I woke up in the morning later than I usually do, which was fantastic. Then I felt a bit tired midday, so about 11:00 I had an hour's sleep and woke up for lunch. Then about 02:00 I fell back asleep again until five, woke up, had food, and I was like, oh, I'm clearly not going to sleep tonight, because I've not only napped once, I've napped twice. By 09:00, I was back in bed asleep and slept through to the next day. I haven't had that since I was probably about five years old. It was amazing. I had a lot of sleep that day and that evening, and the next morning I was buzzing. It was brilliant. I wish I could have your energy. Let's see, my One More Thing this week: I got a PS5 a couple of weeks back. It's been out for about a year and a half, and it's just been really difficult to find one. Finally found one, and it's cool, I guess. I don't know, I'm getting it for the exclusives. I think there's some other technology out there that's probably better than it, but I'm in the PlayStation ecosystem. It's fine. It's fine. You're not missing much, I guess, for those who don't have it yet. Anyway, I have a PS5 now. I don't know, talk to me on Discord. That's it for today, everyone.
If you liked this episode and enjoyed some of the conversation around AI that can chat with you, there are a couple of related episodes you can check out. There's episode 249, where we talked about Google's sentient AI story, and there's also the one where we talked about putting robots into old people's homes; that was 251. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can always join us on our Discord community. Follow our official website, sign up for our newsletter, and stay up to date with all the latest Human Factors news. If you like what you hear and want to support the show, there's a couple of things you can do. One, you can leave a five-star review; that's free for you to do. Two, you can tell your friends about us; that is also free for you to do. If you want to throw money at us, consider supporting us on Patreon. That will get you access to Human Factors Minute, which is something we put a lot of time and effort into. As always, links to all of our socials and our website are in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they want to talk to you about that Digital Barry Kirby you brought up a couple of months ago? If you want to go and talk to Digital Barry, then you can find me on Twitter and across other social media. If you want to hear some of the interviews we mentioned at the top of the show, then find 1202 - The Human Factors Podcast. As for me, I've been your host, Nick Rome. You can find me on our Discord and across social media at nick_rome. Thanks again for tuning in to Human Factors Cast. Until next time: it depends.

 


Barry Kirby

Managing Director

A human factors practitioner, based in Wales, UK. MD of K Sharp, Fellow of the CIEHF and a bit of a gadget geek.