Nov. 4, 2022

E263 - Talking to dead people is about to get real

This week on the show, we talk about technology that lets us “speak” to our dead relatives. We also answer some questions from the community about stepping on a Product Manager's toes, the practical difference between UX, HF, and HCI in the workforce, and advice on pushing back against dark patterns.

Recorded live on November 3rd, 2022, hosted by Nick Roome with Heidi Mehrzad.

Check out the latest from our sister podcast, 1202 The Human Factors Podcast, on Proactive Learning: An interview with Dr. Marcin Nazaruk:

News:

It Came From:

Follow Heidi:


Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!

Vote Here

Follow us:

Thank you to our Human Factors Cast Honorary Staff Patrons: 

  • Michelle Tripp
  • Neil Ganey 

Support us:

Human Factors Cast Socials:

Reference:

Feedback:

  • Have something you would like to share with us? (Feedback or news):


Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.

Transcript

Welcome to Human Factors Cast, your weekly podcast for human factors psychology and design.


Hello, everybody. Welcome back to another episode of Human Factors Cast. This is episode 263. We're recording this episode live on November 3, 2022. I'm your host, Nick Roome, and I'm joined today by Heidi Mehrzad. Welcome back to the show; we had you on last week. Couldn't stay away. Hello. We've got a great show for you tonight. We're going to be talking about technology that lets us speak to our dead relatives, and we'll also answer some questions from the community about stepping on a product manager's toes, the practical difference between UX, human factors, and human-computer interaction in the workforce, and advice on pushing back against dark patterns. But first, some programming notes and community updates. If you're in the United States, Election Day is next week. Go vote. First and foremost, go do that. That's really important. Also, next week there's not going to be a show; we're going to do a short hiatus. On the 10th, no show. I will be in a cabin in the woods. That's part of my one more thing, but we'll talk about it then. If you're unaware, we did a ton of stuff out of the Human Factors and Ergonomics Society annual meeting, and we've now compiled a blog post recap of everything that we've done. It's out there on our blog. Please go check it out if you haven't already. And we also have our October news roundup for anyone who wants to follow up with the news. Barry is not here with us today. He's out on holiday. But if he were here, I'd ask him how 1202 is going. He's still got the latest from Marcin Nazaruk on proactive learning. His interview with him is still up on his channel. Go check that out. He'd tell you all about it over there. But good stuff. I listened to the interview earlier today, so I know what I'm talking about. All right, well, with that, why don't we get into the part of the show we like to call


Human Factors News? This is the part of the show where we talk about some interesting things human factors related. Heidi, what's the story this week? This week, the story is: technology that lets us speak to our dead relatives has arrived. Are we ready for it? So this week, our story is all about talking to dead people. Well, sort of. Technology like this, which lets you talk to people who've died, has been a mainstay of science fiction for decades. But now it's becoming a reality, and an increasingly accessible one, thanks to advances in AI and voice technology. MIT Technology Review has published a piece on talking to voice assistants modeled on dead relatives, constructed by the California-based company HereAfter AI and powered by more than four hours of conversations they each had with an interviewer about their lives and memories. The company's goal is to let the living communicate with the dead. It's not hard to see the appeal. People might turn to digital replicas for comfort, or to mark special milestones like anniversaries. At the same time, the technology and the world it's enabling are, unsurprisingly, imperfect, and the ethics of creating a virtual version of someone are complex, especially if that person hasn't been able to provide consent. For some, this tech may even be alarming or downright creepy. But there's something deeply human about the desire to remember the people we love who passed away. We urge our loved ones to write down their memories before it's too late. And after they're gone, we put up their photos on our walls, we visit their graves on their birthdays, and we speak to them as if they were here. But the conversation has always been one-sided. If technology might help us hang on to the people we love, is it so wrong to try? So, Nick, what are your thoughts on this?


I'm all over the place with this one. My initial thought is this reminds me of a Black Mirror episode that we've actually talked a lot about on the show, which they mentioned in the story. Blake and I, back in the day, actually did a full commentary on it for our patrons; we did a sort of watch-along for this. But at first glance, this technology seems like an awesome idea, right? Help others preserve the memory of loved ones. At the base level, that sounds great, but as a human factors practitioner, my mind immediately starts trying to solve for some of these worst-case scenarios, many of which we will talk about. I just think this thing is riddled with maybe more problems that we need to solve than it has solutions for, and I'm wondering if it's worth it. But Heidi, what are your thoughts on this one? So, like I said in the pre-show, this is a very personal one for me. I suffered a lot of grief over the last couple of years, a lot of loss. I look at it from a pure human factors psychology point; psychology is more important for me in this aspect. And I simply have a very strong and firm belief that you should not be delaying the human reaction, which is grief, which is dealing with the pain, which is dealing with certain things. Because what I see with this is people in the very initial stages of grief using this as


not a replacement, because you can't replace, right? But as a hold-on moment, a thing that delays grief. It furthers isolation, because they will eventually be in a situation where they want to have that, but you can't take it somewhere, right? Like, you can't take this in your pocket and go somewhere, right? Or can you? Well, there you go. These are the things, right? And social communication, the number one thing you need during this time, is other people around you. Right. You need support, and that support will decrease if you keep focusing on this. Right. And then, worst-case scenario, dissociation from reality, which reminds me of the Black Mirror episode as well, right? The urge to constantly do more and do more and bring this thing to life eventually, right? But at the end of the day, it's not real, so there's no replacement. So eventually you will get to a threshold where it will come crashing down on you. And I do wonder, with all this technology, are we not asking for more issues in our psychology when we already know that social media itself has already destroyed the minds of an entire generation? Right? What are we trying to do, just destroy everybody? And then there's a creepy part, but that's for somebody else to speak about, because to me, just the theory is already creepy. I'm one of those people who literally doesn't talk to their phone and doesn't have an Alexa and doesn't have a Google, and I will never, ever give commands in my house to a robot if I can prevent it. I don't know. What do you think about the psychology part, or do you have any other thoughts about it? I mean, I think the psychology part is kind of key to this. Right. You're right. There's a lot going on in a person's head in those vulnerable states when they've just lost someone, when a lot of things are changing for them very quickly. And this, I think, can be useful to some but harmful to others. And maybe that's a good place to start the discussion: what types of people will benefit from this, and what types of people will be hurt by this? That's actually one of the points that someone in our lab, Katie, brought up in our private lab chat earlier today, but I'll elevate it. Right. It'll be helpful for some, hurtful to other people. And how do we screen for that? There's sort of this screening process, I think, that needs to happen to understand who is ready for this technology and who isn't. How can we tell who is going to benefit from this technology, and who is going to suffer even more because of it? And really, that comes down to the question of: are we ready for this in society? I don't know. What types of things do you think people are looking for when it comes to this technology? Or, I guess, what should we be looking for in people who are looking to use this technology? Like, what aspects, attributes? Well, with Katie's point, right. For me, we as a society currently, just taking a snapshot: pandemic, politics, division, social injustices everywhere. Right. I don't see our society being in a mental health place where we should bring this upon society right now. There is so much grief around the world because of the pandemic. I don't see this as something that would be influentially positive. Now, I do see... I'm not sure if you want me to expand on the point Katie made. Of course. I brought up a movie in this, right? A movie reference: P.S. I Love You, which is a movie with, if I remember right, Gerard Butler and Hilary Swank.
It was a cute movie in the early 2000s, I think. In the movie, her husband dies, and she receives letters. He had scheduled letters because he knew he was dying, so she started to get letters after he passed away. And in the movie, they make it look as if these are the things that are helping her to heal, when it really is just the actions that come from the letters. So that's the mystique around it for me, because, let's remember, it's a Hollywood movie, right? A lot of it is not based in true reality, in a reflection of the world. So, yeah, in the movie, she heals through the letters, quote, unquote. But I think, having gone through grief myself pretty recently and still going through it, I think there's an element of: should we consider psychological evaluations for this? Yeah, that was one thing I thought about. I think you need to be evaluated in order to be allowed to even have something like that. And even then, it can cause other things. It's not a guarantee, but I don't think anybody and everybody should have access to something like this. No. And then that opens up a whole other can of worms, too. What happens if you do get evaluated, or you do go through the screening process, and whoever it is on the other side of that deems that you are unstable, or not unstable, but unable to process this in a way that's going to be healthy for you? How do we let them down gently, or communicate to them that this is going to be harmful for them? It might feel like we're trying to keep them away from their loved ones. That in itself could be an entire process that is damaging, and so that process then needs to be looked at from a human factors perspective as well. And it's just like, where does this end? Where does Pandora's box end? Right? Exactly. There are just so many different things that we could open here. But I think overall, there definitely needs to be some sort of regulation around this. And I will bring up another point that Alex in our lab talked about here, about potentially one of the uses here when it comes to health care, and specifically memory care settings. Right. Alex says: especially as people around them age and pass away in ways they cannot understand. I've been there firsthand at the moment someone with dementia is told that their life partner passed 15 years prior. It's horribly traumatizing for such a short moment that they won't recall later, but they will still experience the effects of the trauma. It obviously gets into deep ethics questions, but as a tool to maintain safety and comfort, it will be invaluable. So there are applications in which something like this could ease those moments where maybe memory is not permanent or semi-permanent; there are some memory issues where those moment-to-moment interactions will provide a little bit more relief. Right. This is where the ethics question comes in: do you implement this for those moments alone? Is it ethical to not necessarily communicate that their partner is no longer alive, or anything like that? But will it help ease that moment if they're able to communicate with them? I don't know. These are questions I don't have the answers to. This is a lot of really deep stuff we're jumping into right away. Yeah, I know. Well, that's what the topic is, right? It is so all-encompassing, because I could now bring up: what about data privacy?
What about the person never being able to give consent to this? I mean, not to be super dark humor here, but I am German and Iranian, after all. My mom would slap me across the face if I made something like this of her. Right. I know she would never approve of this. Right. So that's not fair. And I feel like that also gets almost into a category of being disrespectful towards a human who now has no choice. We're giving them no choice because they're gone, and we're reviving them in a, who knows, creepy way. I mean, I'm going to go ahead and guess the initial versions of this aren't going to be perfect, so there are going to be glitches in it. Let's remember how Siri used to be and how the first Alexa was, and it gets into that can of worms for me. And then, as far as the dementia and the Alzheimer's go, I'm not sure it would help, honestly, because trauma triggers a physiological response, and that physiological response is not preventable, even if you don't mean to trigger it, or even if they have moments where they realize that the person is not there anymore and that the things that they believe are not true. Whether or not we can AI some of those moments away, I don't even know how you would do that, because what does that mean? You show them a picture of themselves, or you show them a loved one who passed away previously, but then you still have to tell them that that's a robot and that the person has passed away. Right. So again, this brings me back to the delaying of everything. You're delaying the process. You're delaying grief. You're delaying confronting it. You're delaying dealing with it. Right?


Yeah. Alex brings up a great point here: something frequently occurring in memory care is stuffed animals with voice boxes of their loved ones or deceased persons. Okay? So I think the idea here is that this might take the place of one of those, right? This could be an AI voice box. But I mean, you're right, that does open up a whole bunch of ethical questions and things that we're not going to answer on the show today, because we don't have the time and, frankly, the credentials to. No, I was going to say, nor am I an expert in this field. I do want to jump back to consent, though, because you bring up a good point, and with this story specifically, they are using the consent of people who are alive, right? The person who wrote this article had their parents interviewed for four hours, meaning that they gave consent, and everything that they told them was reviewed, and that was used as the data set. I think consent becomes an issue when you have that Black Mirror scenario. And really, this is almost like required reading or watching for this episode: it's the "Be Right Back" episode with Domhnall Gleeson and Hayley Atwell, where basically someone dies and they digitally recreate this person using existing communications, like text messages between them, or videos that they took, or pictures that they took, using that as the data set. Maybe that's where the consent issue comes in: if you're using it without their permission. However, if it's done while they're alive, I think that becomes a lot easier. It could almost be like a preparation, like a will type of thing, where, you know, I give permission to this company to create a digital version of me for these specific people to use after I am gone. And that could be interesting too. That also opens up another question that I'll get to in just a moment, but when it comes to consent and all that stuff, how do you monitor violations of that consent? What happens if somebody else interacts with it? Say this person, this person, and this person are able to interact with my digital self, but somebody else is over and they say, hey, whatever voice assistant, let's talk to Grandma, and somebody else is in the room with them. How do you deal with that scenario? So there's the consent question, but then there's also the interview piece I was mentioning, right? This is with people who are alive. When do you interview somebody? Do you interview them in their prime, when that is what a lot of core memories of that person will be about? When you think back on experiences with that person, you'll likely think of the good times, the good memories, them in their prime. Or do you interview them at the end of life, when maybe some cognitive function has broken down? At that point, their memory of certain events will be skewed because of the time between those events and when they are doing this interview. It really is a question of when we do these interviews and when we are hoping to capture what a person is like. And then also, if you do it too early, you don't get the history with that person. Right. Or the wisdom. Or the wisdom. If I were to do an interview today, I would have three years of history with my son, and I wouldn't be able to talk to him about memories that we have when he's 5, 7, 10. But he would get me in my prime. I think I'm peaking right now. Maybe, I don't know.
But I think there's that to deal with, too: when do you talk to somebody, right? So that question of when you interview people is another thing that I was contemplating. I think there's probably a prime time to interview somebody. Well, see, that's interesting, because I had a thought; let me see if I can connect it, because it connects a little bit to that. So I had a thought about how, with social media, we've already created a global society that lives with this FOMO thing, right? So is this yet another thing that we're putting upon ourselves, forcing upon ourselves, to make sure you get this info in before you die? I'm not sure if I'm going to bring it back to my point, but it's this thing about not doing something and then missing out. But the thing that we're doing is for an experience or a thing that is not even here yet. And in this case, you're dead. So why aren't you enjoying your present instead of focusing on these things that come after? Right? So, yes, my end thought to that is: yes, it sucks when your loved ones pass away and they didn't leave a lot of videos, a lot of voicemails, a lot of texts, a lot of emails, not even a lot of pictures, because that's just not who they were, right? And then I hear stories of, I don't know if you ever heard of this thing, some parents have started writing their children an email every day or every week they're alive. And then when they're 18, they give them the password to the email inbox, and then they have this long history of what happened in their life and what their parents said and all that stuff. And that sounds wonderful and sweet, and yes, if you have the time to do that, great. And if you can think about that every day, great. But this fear of missing out, that if you don't have this data before this person or you die, then you don't have the opportunity to do this: I think that creates, for me, another pressure that I don't need from technology. That's fair. I mean, I'm anticipating that people will think of this more like a will. It's something that you could do and are maybe advised to do, like a will. But whether this is advisable, that's TBD. It's something that you build in for ensuring that your loved ones are taken care of postmortem, right? Sure. This is obviously something we haven't even touched on right now: the cultural values associated with a technology like this. Some people are going to get really offended by this technology. In some cultures, this is completely taboo. But we're here, we're talking about it today, so I just want to mention that there are situations where this will never even come up. The world is changing. Yeah, we can see. I mean, there are a couple of extra things that I want to talk about here, and there's really no good way to jump into it, so I'm just going to jump into it. There was mention of nefarious purposes. I'm going to bring up another comment that Katie made in our Discord earlier today: my biggest comment about it is, as cool as it sounds to record hours of your loved ones talking about their lives, one of the things I love with people I'm close with is hearing their opinions about things like current events, problems in our lives, or whatever comes up. A lot of that changes depending on what else is going on, and all their experiences regarding whatever we are talking about.
Having stories is a fun concept; I just wonder if the ability of the tech to share opinions and give advice might come across as manufactured, or could even be manipulated. Calling back to some of your cybersecurity concerns last week, Heidi: what happens when somebody hacks a dead person's artificial intelligence to manipulate those AIs into giving bad advice, like investing advice, or writing somebody else into their will or something like that? Or into giving up PII that they could read from some of the input that somebody has given? There are a lot of PII and cybersecurity issues going on with this whole concept. We touched on some security issues, but this is a whole can of worms that we could open. I just wanted to open it and kind of put it on the shelf, because there are a lot of directions we could go with that one. Yeah, the deep fake. Absolutely. The deep fake that already exists. Not to make fun of it at all, but truly, I personally already don't like that there's facial recording everywhere, right? So


if people think they're not already recorded somewhere, they live in a delusional world. You already exist in some database with your face, with your facial recognition, connected to your credit card, connected to your name, connected to your GPS points. You're already there. Right. So for me, the danger in this is, again, the deep fake. You mentioned investment advice. You mentioned giving wrong things. But I could even take it further, and I don't want to make the episode too dark, but I could take it further. When people are grieving, there's a lot of cloudy darkness in their minds, right? So one wrong word from this bot, or whatever you want to call it, can trigger not just traumatizing, but critical, life-threatening situations. Right. So that alone, with a cybersecurity issue. Gosh. With all this technology, I feel like we're just asking for it. We're just asking for the, what was the robot's name in that movie with Will Smith? Like, what was that? Ann? Was it Anne? I'm drawing a blank. Yeah, I know what you're talking about. But we're just asking for it. I'm just waiting for it at this point, to be honest. So I don't know that we can control cybersecurity. I mean, we can't even get data privacy laws in place. We could bring it into a political corner, but because our senators are so old that they don't even know what a password is in some instances, we have these people arguing in Congress about data privacy laws on social media platforms and data harvesting and all those things. How are they going to regulate this? I don't know. Great tie back to politics. Go vote. You brought up a couple of really good points. Right. People are deeply vulnerable in the stages after somebody important in their lives has passed. And so there must be some sort of other regulation or control over what messages are being sent out and the use of these things. Right. Because, yeah, you're right. One wrong word and they could go off the deep end themselves, into an even deeper depression, or something triggers a memory within them, or you name it. Right. You also brought up the robots. Right. That opens up the question of how far is too far? Where's the line? Right, exactly. In that episode that we keep referencing, they kind of go through the stages. Right. You have a chatbot, and then from a chatbot, you go to a voice that's deepfaked, and then, I don't know if they ever go to video, like a video call, but then the next step is a sort of automaton that looks like the person, that is infused with the deepfake voice and the personality. So, yeah, how far is too far? I think both of us are in agreement that maybe the first step is too far, the chatbot stage. Right. Are you bringing back anybody? I think there are better ways to honor the dead and to bring back some of those memories. I think a video interview would probably be better, because then it's factual, it's a record. Somebody is just relaying their thoughts and opinions to somebody else as they are, not some artificial intelligence trying to interact with you. It's just a record, rather than that. Do you have any other last thoughts on this? I could talk about this all day, but I think we have to wrap it here. My last thoughts would be two things. The video you bring up was one of my thoughts too.
If this is something that people desire, I think companies should look at creating pretty little digital portfolios that have a bunch of videos in them, categorized by topic. You can go in there, you can remember, like, a funny thing you had with your mom, your parents, your husband, your daughter, whoever; they can tell you a story that brings joy to your face. Right. That I could see really being something stimulating in a positive sense. It's just like you said: just telling stories or telling memories, and them being joyful memories, not interacting with you and answering questions you might ask. That is dangerous. The other thing that I have with that is: when, as a society, are we going to stop and recognize that we need to stop developing technology in this space at this pace without regulating it? When are we going to catch up? We are still so behind. I mean, we're not even on Europe's level yet. At least they have a data privacy consent form on their websites. I don't know if you, listeners, have ever traveled to Europe, but if you Google something, the first thing that comes up is that you have to give consent. You can't go to the website without giving consent. It's now a normal thing over there, right? So we're not even there yet. We're lagging behind that, and we want to create chatbots with our dead loved ones in them. That would be my last word on that. As with social media, I always bring it back to that, but it really is a thing. Look at all the data harvesting that went on over the last 20 years. Nobody gave consent. I don't remember at any time as a 16-year-old giving consent for my data to be stored somewhere. But it was, and now we can't get out of it, because there were no laws in place back then, and there are no laws in place now. We don't protect users; we only protect tech companies with the laws. So that, for me... if we could figure out how to protect users, maybe. Yes. No, I'm right there with you. Go vote. The last couple of things I'll bring up here: there's so much that we didn't touch, right? We didn't touch the UI of this. How do you interact with it? I'm just going to open up a bunch of different cans of worms. You could do a voice assistant, you could do a chatbot, you could do other sorts of physical manifestations of this thing. How does the AI handle any novel input when it doesn't understand what to do in a situation? How does it respond? There are some other questions that I have around putting the AI into different modes with different goals. So, you know, getting somebody out of a grieving stage could be one mode. Providing someone with knowledge about that person's life could be another mode. Or even companionship. We talked about AI companions on the show a while back, but can you turn a dead person into a companion? And should you? That's a whole other question. And then, can we turn Grandma into a voice assistant? I think that might be kind of novel. But those are a bunch of different cans of worms that we don't have time to talk about. If you want to keep talking about this stuff, please go join our Discord, where we can continue this discussion. Thank you to our patrons this week for selecting our topic, and thank you to our friends over at MIT Tech Review for our news story this week. Like I said, if you want to follow along with all the stories that we post, we do post the links to the original articles on our weekly roundups on our blog, and join us in Discord for more discussion on these stories. We're going to take a quick break.
We'll be back to see what's going on in the Human Factors community right after this. Yes, a huge thank you, as always, to our patrons. We especially want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. Patrons like you keep the show running. Seriously, the studio looks like this without you. This doesn't work for audio, but there you go: it looks like that without you, all lights on. The joke is getting old. I need a new one. But especially, I want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. All right, this is the part of the show where we usually bring up some other stuff that we have going on. Treasure rang in and said, did you know that we have a merch store? They wrote this. Some neat designs over there include the pin shirts with the show logo, like the hoodie I'm wearing. I'm not wearing it, sorry. You need to tell me that I need to wear these things. There are other cool designs based on human factors culture, like "I'm going to human factors the shit out of this," I think, is one of those shirts. "There's no such thing as human error." And of course, our favorite five-star review. Not a five-star we've ever got. If you want to support the show and look good doing it, we do have a merch store. Go check it out. All right. There you go, Treasure. Are you happy? I'd like to get into It Came From now. Let's do it. It Came From... It Came From... that's right. This is the part of the show where we search all over the Internet to bring you topics the community is talking about. If you find any of these answers useful, no matter where you're watching or listening, give us a like to help other people find this content. It's all about those algorithms, folks. So let's get into the topics tonight. We have a couple here. This first one is from Stay Super Clean on the UX Research subreddit. The title is, quote, "Don't make concrete recommendations. You're stepping on PM's toes." All right: I had an interesting conversation with a PM earlier, and I'd love for folks to weigh in. I created a detailed document with research methods, approach, and recommendations for a large project that just wrapped up. Most of the recommendations in the report were UI related, since users encountered serious issues with usability. I met with the product manager today to talk through this. He said my work was high quality, but I should avoid making concrete recommendations, since that's his domain. What the hell? Is this normal? Is he power-tripping? Any advice? I'll also mention that I'm new to UX research; I came from an adjacent field where researchers are empowered to make concrete recommendations. Heidi, is this PM tripping? What's going on? Oh, I have a lot of comments on this. I'm going to completely sidestep the gender thing and the age thing on the authority thing, because this is something a PM would say to a new person, to a junior person. They would not say this to somebody like me. And I think that is where the crux actually starts. I think this experience is, yes, very typical for PMs. I do understand the whole situation around it; we could speak about that. But first and foremost, I want to bring this to the forefront, because the first thing I reacted to was: is this a man telling a woman? A woman telling a man? Is it an older person and a younger person, a junior person and a senior person? All these things weigh in. What's the dynamic? Yes, the dynamic. Because if a PM said something like that to me today, I would be like, all right, cool.
Next time you create one of your little project things and timetables, I'll point out that I can do that better than you. One thing that I would say to this is: there need to be established boundaries, right? There need to be clear roles of who does what. And that is not the PM's decision. Despite what everybody thinks, it is not the PM's decision; it is the whole project team's decision. Right. What is the goal of the project? Who actually ordered the project? Who are the project leaders? Who are the people who set the goals for that project, the objectives for that project? It always comes, oddly, it's just how this world works, from above somewhere; it was ordered. So I would revert back to that situation and seek out your managers or your leadership team that is in the realm of that project team, maybe even somebody who is outside of that project, to get advice, to have a conversation, and to understand clearly what this PM might mean. Because it clearly cannot be that he or she is making design recommendations as a product manager, because that's not their role. So to bring it back, Nick: yeah, he tripping. That's what I had, too. I think my simple thought is that product managers make decisions, not recommendations. They take recommendations and make decisions based on those recommendations. And if they don't take your recommendation, that's one thing. But if they say not to make recommendations, that is encroaching. They're stepping on your toes in that case, because that is your job as a UX researcher, as a researcher in general. I think it's pretty cut and dry to me: PMs make decisions, not recommendations. Researchers make recommendations, not decisions. And although we can certainly influence those decisions greatly, I think you're absolutely right: the whole power dynamic piece is a really important part of this. He tripping. All right, let's get to this next one. This is by The Quantum Lady on the UX Research subreddit: intersections between UX, human factors, and human-computer interaction. What is the practical difference in the workforce? All right, they go on to write: The last several months, I've been on my own personal career journey, trying to figure out my next step, so it's got me thinking about a lot of the titles we apply to our work. I've been looking more deeply into UX research because I've been doing what I consider UX-relevant work for the last several years, but don't have formal education or knowledge in that area. The work I do is a combination of hardware and software, specifically in VR. Someone recently suggested that my work may be more human factors rather than UX, or possibly even HCI, since VR headsets are essentially wearable computers. I've dipped my toes into a lot of different disciplines throughout my career, various physical sciences, and enjoy interdisciplinary work rather than becoming an expert on any particular topic. My question is: what's the practical difference between UX, human factors, and HCI in the workforce? I know there's definitely some overlap. Can anyone shed light on this? Heidi, what is the difference between these three? My favorite topic. First thoughts, initial thoughts: human factors. Let's be very clear about titles and roles, right? Because what is the difference in these? We can establish that, but the problem is also that these titles and roles are used interchangeably everywhere, right?
So you might be called a human factors engineer, but you might be working more as a UX researcher on that particular team. Or you are maybe more involved in the HCI component, because you are working within a company whose software is focused on the interface, so now your role is to be more HCI-focused in your work, right? So it depends a little bit on the role within the company, what the product is you're working on, or the service, whatever it is: whether you're in consumer products, whether you are in medical products, and all these things. From a human factors point, I apologize, I only have the medical part and the aviation part, which both were regulated industries. From that point, I think human factors is a more technical role, in the sense that we are there to apply the science of human factors, right? Because that's ultimately what it is: the science. UX is a part of it, usability is a part of it, HCI is a part of it; they're all components of it. But human factors is the science covering everything, and you are there to apply that science. With that in mind, human factors in general, yeah, focuses on the interaction between a human and something, right? A machine, a product, an interface, whatever it is. But at the same time, you're also trying to achieve certain goals, and that's where these underlying components come in, right? You're trying to achieve a user experience, you're trying to achieve usability, you're trying to achieve a smooth human-computer interaction touchpoint. So those are all those things. That's what I think is the difference: you are more focused on one part of the science in those particular roles. If you work in human factors per se, my job as a human factors expert is to look at something and ensure that it is safe and effective, right? Because that is one of the goals of human factors. Right? So my job actually involves risk management, right? My job actually involves doing analyses, assessing harms and hazards and all these things, whereas an HCI person who's more focused on the interface might not be so enthralled with the risk management side of something when you're developing a product. Right? So for me, that's more the difference in who does what and in what role specifically. And with that, the human factors part is more the umbrella. You could end up doing anything and everything under that umbrella, even ergonomics in a sense, right, the physiological part, where if you work in a specific field, you are more focused on that specific component. At least that's how I see it. And I think I'm pretty spot on with it, but I'm not sure. Nick, am I wrong? No, I think you're right. And I think we only differ in a couple of places, but not even to the point where it's worth arguing about. I think generally human factors is that umbrella, and then you have sort of UX and human-computer interaction that fall underneath that umbrella, right? Human factors and ergonomics is designing for humans. That is kind of the high level, right? That includes safety, that includes interfaces, that includes ergonomics, that includes everything, every aspect of designing a product, policy, or procedure for a human. Underneath that umbrella you have UX, which is taking user needs into account to design products for humans. But there's little to no standardization when it comes to that piece. There are not a whole lot of peer-reviewed journals out there. It's scrappy.
It's kind of the scrappy arm of human factors, where people are just trying to do something to get feedback from users. And you can do it with more or less rigor, depending on the requirements. But UX is often at the mercy of scientific research to push the field forward; they're just kind of using the methods and techniques to find out some information about the users that are using their product and then making recommendations based on an understanding of those users. It's kind of tech-oriented, although there are other companies that have UX in the title that don't necessarily do interfaces. Now, human-computer interaction is kind of focused on computer-based systems, right? It's a subset of human factors and ergonomics, but they're looking at computers. Honestly, in this case, for this person, I think they kind of do a little bit of everything, but I think mostly it's probably human factors work, since you're looking at kind of a VR environment, and, in some context that I left out of the overall post, they're looking at safety and things outside of the environment, too. And I think when you come outside of the interface itself, that is when we're talking about human factors applications. All right, we've got one more here. This one's by Adventurous Key 5423, that's fun to say, on the User Experience subreddit. They say: advice on pushing back against dark-ish patterns. They write: Some of my team stakeholders want to improve our site search by removing the search bar. Their idea means that stakeholders literally don't want users to enter keywords. They prefer users to dig through our heavy navigation and possibly irrelevant category-generated links. Our stakeholders defend this idea by saying users don't like to input text and would rather click through links and navigation to where they want to go, despite this being very contrary to our historical data. In my opinion, this is a misguided and possibly gray or dark pattern. Not having text input to improve search results doesn't make sense if one accounts for how users typically use on-site search to find products. I'd appreciate any feedback on this to help me through it. Is it worth pulling my hair out? Heidi, what do you think? Yeah, no, that's my response. Do you know that movie? No? I think with this, you're just asking for it. I mean, at this point, we have a certain behavior. We have developed and evolved into a certain behavior. We have customized certain things, yes, but we have also stuck with certain things that people have now become almost, and they don't like the word so much because they think it's misused too much, intuitive with, right? There's a lot of intuitiveness behind this, right? We are now used to this. Yes, I could argue the other way, too, that change is positive and change is not always bad. I eventually got used to my iPhone now having no home button; let me tell you, that was a two-year struggle. But that in itself, for me, is just asking for the user to be dissatisfied, first of all, with your product. So you're going to have to have a huge plus on the other side of the mountain for the user to accept this and climb up the mountain, if that makes sense. That's how I see it. I don't see users automatically preferring this to something that they're already used to. And for that, I mean, things now come to you naturally, almost intuitively, right? Like, when I first saw this, I thought about when I drive a car, right?
I don't have to think about using the blinker. I just do it, right? The same, as funny as it sounds, with the search bar: I don't even bother to correct my typos in the search bar anymore, because I know Google will correct everything for me. And then, if I need to adjust, I go back in and I change a word, and boom, I have other results. So I don't know, some things are good. We're not trying to reinvent the wheel here. I don't know, Nick, you want to reinvent the wheel? No. To me, this is a big yikes. Look, there's only so much control you have in your role as a UX researcher. Product managers, we mentioned, make decisions; you make recommendations. If they make the decision to go forward with no search bar on this thing, depending on the domain, it could be really damaging. But if you truly have no control, you can always let them shoot themselves in the foot. The thing I will advise is: if you do let them shoot themselves in the foot, put up a big argument for it beforehand, take metrics that back up your case beforehand, and then take metrics after the change has been implemented. That way you can say, hey, you remember this thing that we did? It's not working. Here, look, here's the before and after. That will be really valuable to them, it will sort of increase your value within the company, and you can show them: look, I accounted for this. It's a way to get ahead of that stuff. So those are my thoughts on it. Yeah, go ahead. I have one more thing, but this is business related, this is career related. This is the best advice I ever got. Well, there are two, but I can't mention the other; there's too much cursing in it. Let's just say that. And this one isn't clean either, so I'm just going to say it; you can bleep me out. The best advice I got was: write or document something along the lines of CMA. This is what my boss told me. Whenever you have something like this, put it in writing. Create a CMA document, whether it's an email, whether it is an official letter, whether it is some kind of record, that this is the advice you gave and these are the reasons why they disagreed with you, or why they rejected your advice, or even ignored it. I would explain the circumstances around it, and I would put in there, again, your reasoning why you think this is a bad decision, and then send it to everybody on the team. Literally everybody. This is what my old boss used to call the CMA folder. You have a folder in your email account; it's your CMA inbox, and that's it. And just in case you didn't know what CMA stands for, it's "cover my ass." There you go. Good advice. Speaking of one more thing, that's the part of the show where we talk about, that's exactly it, one more thing. Heidi, what's your one more thing this week? Hard to choose. I had to struggle between facial recording without consent at every store, now at the checkout lane, and the wonderful interfaces of mortgage websites. Both bug me, but I'm going to go ahead, keeping in line with the topics we had today, with the facial recording. So here's the backstory on that. I noticed about a year ago, I think maybe it's already two years ago because I haven't been back to the store, that Walmart put up these cameras over each checkout lane, and it just starts recording you. No consent, no opt-out option, nothing. There's no sign anywhere in the store. There's nothing.
You don't get advised that this is happening. And let me be clear, I'm not naive. I understand that there are security cameras everywhere, but this is a close-up facial recording of you that can be used, again, go back to data harvesting, to render facial recognition on your person without your consent. And I think we are going too far at this point. Corporate companies have too much power, because at the end of the day, they're forcing us into the situation; we can't get around it. We have to go to certain stores, right? So I don't know. That's my icky topic at this point. I can't even go to most stores anymore. I've stopped going to Walmart. I'm in the process of stopping going to Target, which is going to break my heart, but I'm going back to ordering everything online, and even then they're collecting data on you in other ways, too, right? But if I can keep my face out of it, at least just for maybe a couple of years, just a couple of years more. But Heidi, to be fair, I did send you a consent form for this. You read it and gave us a release. So I'm just saying, I did. But there's the thing, right? I had an option. I had an option; I was given a choice. And in these situations, you're not given a choice. And if you walk into a store with sunglasses on and your hat pulled down, now you're, like, suspicious. So, I don't know, for me, it's a no. Yes, it's scary. As for me, I have been freaking out over something else. Longtime listeners know my sort of election night ritual is to, I guess, stay up really late, look at weird towns and counties in flyover country, and say, oh, how many votes does this county need in order for this candidate to win? And I do that across all 50 states to see kind of how things are coming in live. Like, I start at 3:00 p.m. Pacific when the polls close, and I go until like 3:00 a.m. when the last dumps come through, right? It is a habit for me. It is ritualistic. I love it. It's part of the process. I have three different news sources up at any one given time, I'm refreshing on my phone; it is a whole thing. So the election is next week. Next week also happens to be my wife's birthday, and what she wanted to do this year is to go out to a cabin in the woods with no WiFi on Election Day. Good Lord. I will be out in the middle of the woods without WiFi, without cell service, during Election Day. Thankfully, I've already voted; we've already voted. But it's going to be rough. There's apparently spotty, patchy data out there. So it's going to be an interesting exercise to see how self-restrained I am, and how, I guess, some neuropsychological conditions that I'm dealing with deal with this sort of situation where I don't necessarily have control over being able to check the results. We'll see. No show next week. That's it for today, everyone. If you liked this episode and enjoyed some of the discussion about creepy AI, maybe I'll encourage you to go listen to episode 249, where we talk about Google's sentient AI. It isn't really sentient. Go check it out. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can always join us on our Discord community. You can visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like what you hear and you want to support the show, there are a couple of ways you can do that.
One, wherever you're watching or listening right now, you can leave us a five-star review; that helps other people find the show. Two, speaking of helping other people find the show, you can always let your friends know about us. Word of mouth is really how we grow. And three, if you have the financial means to do so, consider supporting us on Patreon. We do give back. Everyone on Patreon receives Human Factors Minute; we give back to everyone who supports the show financially. As always, links to all of our socials are in the description of this episode. I want to thank Heidi Mehrzad for being on the show again today. Where can our listeners go and find you if they want to talk about transitioning to a digital afterlife? Better not with me, first of all. But please find us on LinkedIn. Human factors is my love topic to talk about. So find me on LinkedIn, either Heidi Mehrzad, you can DM me directly, or HFUX Research, medical human factors. We are on LinkedIn, and we have an Insta, HFUX Research, and we have a Twitter, HFUX Research. So find us, come talk to us, and preferably not about people being an AI. As for me, I've been your host, Nick Roome. You can find me on our Discord and across social media as Nick Roome. Thanks again for tuning in to Human Factors Cast. Until next time. Go vote. It depends.



Heidi Mehrzad

Founder and CEO

Heidi is the founder and CEO of the medical human factors and usability consultancy HFUX Research, LLC, which specializes in medical device, technology, and combination product research, design, testing, and development. With a wide-ranging background as a trained pilot, emergency medical technician, software analyst, and human factors and usability expert within the (medical) product development industry, her motivation for the past 20 years has been directed towards enhancing human-product performance by optimizing user interface design, information architecture, and user and product workflow through the application of human factors science and usability practices. She holds patents in GUI design for medical imaging and surgical navigation software systems, a BS in Aeronautics and an MS in Human Factors and Systems, both from ERAU, as well as technical degrees in IT Mgmt. and Emergency Medical Services from SHU and DSC.