This week on the show, we talk about how voting methods affect group decision-making. We also answer some questions from the community about managing layoff anxiety, competitor research making you want to work for other companies, and how many hours we spend on meetings per week.
Check out the latest from our sister podcast, 1202 The Human Factors Podcast, on Rail Investigations: An interview with Becky Charles:
It Came From:
Let us know what you want to hear about next week!
Thank you to our Human Factors Cast Honorary Staff Patreons:
Human Factors Cast Socials:
Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.
You know what, a wonderful night to podcast. We haven't done this in a couple of weeks, Barry, you and I. It's great to be back. It's great to be back. It's episode 264. That's how many of these things we've done. We're recording this live on November 17, 2022. This is Human Factors Cast. I'm Nick Roome, and I alluded to it earlier: I'm here joined today by Mr. Barry Kirby. Hey, how are you doing? Hey, I'm doing well. I'm not in a hotel. You are very welcome. If you can see behind me the amazing decor, my wonderful lighting, and the high quality camera and setup that I've got here, by the wonders of modern technology that has been held together by elastic bands and velcro, we're doing this. It's a good thing. Don't tell them. That's part of the secret sauce that goes into the show. Speaking of the show, we have a great one for you tonight. We'll be talking about how voting methods affect group decision-making. And later we'll answer some questions from the community about managing layoff anxiety, which is something that's in the news, competitor research making you want to work for other companies, and how many hours per week we spend in meetings. But first we have some programming notes. We were off last week. I was out in the middle of the woods, and we're going to be off next week for Thanksgiving here in the States. So I will have a recast out there for you all. I meant to do that last week, but just didn't. So I'll pick something that's relevant to the story we're talking about tonight and throw it out there next week for you all. So we're going to take off next week, but we'll be back on December 1, which will be a good old time. There are some really great stories out there that you can choose right now. If you go to our website and click on any one of our episodes, it's in the bottom part. You can vote on whatever story you want to hear about next week. But Barry, I'm curious, what's going on at 1202? It's been a minute since we checked in.
It's been a while, and I've actually got a new episode out there. So I interviewed Becky Charles, who's no stranger to the podcast. She was part of our EHF 2020, but she is a rail investigator, and so she gives the lowdown on what the Rail Accident Investigation Branch does, what she does as an investigator, and how human factors plays a key component in that. So, really interesting. Thoroughly recommend you go and download that and have a listen once you've finished listening to this. Yes, please listen to this first. Don't go leaving us, because we've got a great news story for you. Let's go ahead and get into the news.
That's right, you all voted for this story, and I would like to thank you all for doing it. So Barry, what is the story this week? The story this week is a new study showing how voting methods affect group decision-making. When groups of people need to reach a decision, they will often take a poll to test opinions before the official vote. New research from the University of Washington shows that one specific voting method proved more effective than others in identifying the best choice. In a recent study, researchers found that groups that used multivoting in unofficial votes were 50% more likely to identify the correct option than those using plurality or ranked-choice voting. Multivoting gives people several votes to allocate across all options. The reality show American Idol uses multivoting, giving fans ten votes each. They can use all ten votes for their favorite contestant or split their votes among two or more. For this study, students were given ten votes to distribute among three choices. The researchers compared multivoting to two other popular voting methods: plurality and ranked-choice voting. The researchers found no evidence that discussions in the multivoting groups varied in any meaningful way from the other two voting conditions. Instead, the benefit of multivoting occurred before the discussion, as students processed the information more deeply and considered the intelligence more critically. While this may work in other contexts, the researchers don't believe it would work for political elections, mostly because of how taxing it would be to allocate votes across a variety of options. So Nick, what are your thoughts? Would you like to be able to cast ten votes on this story, or just one? A couple of things here.
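The multivoting mechanic described here, where each voter splits a fixed budget of ten points across the options, is simple to sketch in code. This is a minimal illustration, not anything from the study itself; the option names and ballots are made up:

```python
from collections import Counter

def multivote_tally(ballots, budget=10):
    """Sum point allocations across voters.

    Each ballot maps option -> points; a ballot must spend
    exactly `budget` points in total.
    """
    totals = Counter()
    for ballot in ballots:
        if sum(ballot.values()) != budget:
            raise ValueError("each voter must allocate the full budget")
        totals.update(ballot)
    return totals

# Three voters distribute ten points each across three options.
ballots = [
    {"A": 10},                 # all-in on one option
    {"A": 5, "B": 5},          # torn between two
    {"A": 1, "B": 3, "C": 6},  # a weighted spread
]
totals = multivote_tally(ballots)
print(totals.most_common(1)[0])  # the option with the highest point total
```

The point of the mechanic is visible even in this toy: the second and third voters' partial support for "A" accumulates, so the tally reflects intensity of preference, not just first choices.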
They say it would not be great for political elections, and that's probably true, but we're probably going to talk at least a little about political elections, and I'm wondering if it's because the methodology is just a little too complex for something that should be free and fair to everybody. So I don't know. Also, the crazies would be out if everybody got ten votes. But yeah, I think this is really interesting to me from a psychology perspective, because voting methods are obviously a hot topic right now. There's a big debate here in the States about ranked-choice voting: when is it appropriate to do ranked-choice voting versus plurality, and what are the benefits of each? And this is another method that just kind of takes, I don't know, the best of both of those worlds, and makes a case for why it's such a good alternative for group decision-making. But Barry, what are your initial thoughts on this article? Well, I guess first I'd agree with you. It's difficult not to think of it in political voting terms, purely because of the climate and the news that's going on at the moment. But I'm sure we'll get over that. We will talk about it in a lot of detail. I've got a minor concern about the methodology, if I'm honest, in that when they've done it, they've only used three options to distribute ten votes across. My gut tells me that four or five would have given you a better flavor of how this works, perhaps. I'm sure they've got some really good reasons behind the methodology, why they did what they did, and this is purely a gut feel. But going back to that political thing, having said we shouldn't touch on it, it does highlight that the voting system you use is really connected to the outcome that you want to get. So you've got to be able to understand the weight or the seriousness of the outcome that you want.
So they mentioned American Idol. In the grand scheme of things, if your favorite singer doesn't win, is the world going to end? Well, possibly not. So therefore the seriousness of it doesn't matter as much. However, if you're voting on something around your next political representative, then does the ambiguity there maybe influence it, not give it as much grounding as it should? But the voting system also needs to cater for not just the majority, because you could argue that we could teach most people how to do this; the voting system in political voting needs to cater for everybody. It has to be completely accessible, not just majority accessible. But I think it's really timely for us in the UK, because just like they mentioned American Idol, we're watching I'm a Celebrity... Get Me Out of Here! here in the UK, and that uses this system. In that, you get five votes, and you can vote for your favorite celebrity to do the trials, eating disgusting parts of animals and things like that, as they're in the jungle in Australia. So that's a really current example of it. The fundamental question I want to ask is, does this actually lead to a better quality of outcome? Because the second part that kind of concerned me was that this study seemed to insist that there is a correct outcome, and therefore this got there. Whereas actually, when you talk about public opinion, even just in our human factors world, there is no right answer. There are just less wrong answers. Yeah, well, let's talk a little bit about the methodology here, because there is a correct answer in the methodology. And the way that they did this is sort of based on the pursuit teams developed by the Department of Homeland Security here in the States after 9/11, whose purpose was to connect the findings of multiple intelligence agencies to track potential terrorist threats.
And basically in the study, they asked a bunch of different groups, I think they had 93 groups of undergrads, to simulate these counterterrorism support teams and identify which of three suspects represented the greatest threat. The students were given information about these suspects, but no group member had all the information about any one suspect. So students had to share intelligence to correctly identify the biggest threat. That's kind of how this happened, and in this case there was a definitive answer: one of these suspects was, in the study, in fact a terrorist. And there could potentially be a lot of issues here if we look at... that's not okay. Were they showing pictures? I don't know. We're just reporting the details here, folks, and who knows what kind of other things were going on behind the scenes. What's ultimately, I guess, the important thing here is that the teams found the correct answer by sharing information with each other, as you might do in a political campaign. There we go, bringing it back to politics. But they shared that information together, tried to make a decision together, and everybody voted on who they thought it was, with all that information, ahead of time. So that's how this is working. And I can see this being useful in real-life decision-making when maybe there's some sort of impasse where no one can really come to an agreement in a plurality vote, and 51% seems a little too unfair for literally the other half of people. But there are a couple of different psychological things going on here when you cast a vote for either one or multiple answers. I'll just call them answers; I don't want to call them candidates. If you think about paths forward, let's say you're in a business and you're trying to make a business decision, that's a collective decision. You could do this for shareholders of stock, too, right, and give them a multitude of options.
They could then select which of those options they want to give more weight to. Like, I really want this option: all ten points go to that one. Or, I would like this option or this option, but definitely not this one: then you do five, five and zero, right? And so there's a way to allocate those points in a way that makes sense for you based on what you want, but also, if you share information with others, based on what's going to be most accurate. And so that's kind of the context here. There's a lot of different directions we can go in. I'd like to touch on some of the real world... Just before you do that, the last bit around this, which I thought was truly interesting, was the timing of the vote. You'd think that the vote would be the basis of the discussion, and then people would weight their votes. But the fact that this vote happened before any discussion, and they were getting the right answer before discussion, meant that actually, would you need to bother with the discussion at all? Just vote and you're done. You just walk out, right? Yeah. And I just thought that was really, really interesting.
Okay, I don't know what the impact of that is on what it is that we do. We don't bother with focus groups anymore; just give everybody one of these voting ballots and we'll get the right answer. Okay, you're jumping into two different pools here. Okay. Sorry. It's okay. You're opening up a lot of doors, Barry. You're opening up so many doors. I'm going to open up the first door. Let's just rein it back. We'll get back to that performance piece. No, it's okay, because that's really important. Like, literally everything I have under there, under system health and safety, is all about that performance piece. But we'll get to that in just a second. So let's actually back up and talk about some other real-world examples. We already talked about the American Idol example, and you brought up the great example of, help me out, what is it, the show? I'm a Celebrity... Get Me Out of Here! There are also other examples that we thought of before the show. One of them is prevalent in video games: you level up and you get a certain allocation of skill points or whatever, and you can put those in whichever way you feel is going to be best for you. And, you know, if the game is designed well, then it's going to give you a quote, unquote correct choice, where it is tailored to your play style, assuming that you have the information you need beforehand. A little bit of a loose example, but that's kind of the concept we're talking about here. You level up, you get five points, you get to allocate them however you like. Barry, you have this other example here. You want to talk about that one? Yeah. So I ran, well, contributed to, a workshop a bunch of years ago now, where we were looking at the blend of live and synthetic training, particularly in the air domain. And what we did, to give it real practical agency, was to give each person ten Lego bricks.
And then on our platform, we had a Lego base with three labels on it: live, blended and synthetic. And they could vote with their ten Lego bricks on where they wanted to see the balance of training happen and where the most value could be had. So they could put, say, one on live, or ten on live, basically the sort of voting we were talking about. But it had a real practical aspect, because they could play with the Lego bricks whilst they were waiting and things like that, and then use them as their voting system. At the end, with ten bricks per place, each table had a most preferred training solution using this system, which was just really cool. It meant we could take photos of it, show it round the other tables, and get them to talk about it. That's a really cool example, and when you said Lego, I was like, what are you talking about here? But that's actually a really cool example of how this can be used in our practice as human factors practitioners, as UX folks. I think we can talk about that application in a second. But you did open another door, and I want to make sure that we talk about that, because you were talking about the performance piece of this, and how they did this before even discussing, and it's basically just that they're determining the right information before all that discussion happens. So let's talk about the performance of polling. When you look at plurality voting, this is, like here in the States, where voters select one option, and it's most of the time used in political elections. This is just one person, one vote. That's it, right? So in this case, 31% of plurality teams, and again, we're looking at about 100 teams, chose the most threatening suspect in the final vote, which is about the same as if it were left up to chance.
So that's interesting to me: voting in plurality is essentially just leaving it up to chance in a lot of ways. In the unofficial vote, 6% of teams had a majority of members identify the correct suspect. That's less than the 11% that would have been expected by chance, which is just insane to me. What this is telling me is that plurality voting is really broken. As hell. I'll caveat that, because again, in the methodology not every person had the same information. In theory, in a normal election everyone should have, and we know this isn't true, but everybody should have the same information and be working from the same knowledge base. So the bit about chance I think is right, because that's what you'd expect if everybody really only gets a third of the knowledge. Okay, so something there is broken. Well, then let's talk about the alternative here. At least in the States, we're getting a lot of talk about ranked-choice voting. I think even Seattle here, just north of me, has voted on it recently, and my county too; it voted no on ranked-choice voting. But it basically allows voters to list their preferences from first to last. So, you know, my first choice is Barry, my second choice is Nick, my third choice is my cat. Right, or the other way around? Sounds about right. That's the way it should be done. Yeah. Anyway, so then if Barry doesn't get enough of the votes, or doesn't meet 50% or whatever, then his votes will go to Nick, and Nick will then move up and get all the votes that Barry had. And then if Nick doesn't get enough votes, they'll do another round, and all Nick's votes, which contain all Barry's votes, will then go to my cat. And therefore my cat might actually win against the other person who had all the first-choice votes. But this is supposed to be an approach to get to the moderation, the middle point between all the candidates or all the options.
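The elimination rounds walked through above are essentially instant-runoff voting: repeatedly drop the option with the fewest first-place votes and transfer those ballots to each voter's next surviving choice. Here is a toy sketch of that process; the candidate names are just placeholders from the conversation, not anything in the study:

```python
from collections import Counter

def instant_runoff(ballots):
    """Ranked-choice (instant-runoff) winner.

    Each ballot is a list of options, most preferred first.
    """
    remaining = {option for ballot in ballots for option in ballot}
    while True:
        # Count each ballot toward its top surviving choice.
        counts = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    counts[choice] += 1
                    break
        top, top_votes = counts.most_common(1)[0]
        if top_votes * 2 > sum(counts.values()):  # strict majority
            return top
        # No majority: eliminate the option with the fewest votes.
        loser = min(remaining, key=lambda o: counts.get(o, 0))
        remaining.discard(loser)

ballots = [
    ["Barry", "Nick", "Cat"],
    ["Barry", "Nick", "Cat"],
    ["Nick", "Cat", "Barry"],
    ["Cat", "Nick", "Barry"],
    ["Cat", "Nick", "Barry"],
]
print(instant_runoff(ballots))  # "Cat" wins after Nick's votes transfer
```

In this example no option has a first-round majority, so "Nick" (fewest first-place votes) is eliminated, his ballot transfers to "Cat", and the cat wins, much like the scenario described above.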
It is, but it's also there to make you feel like your vote had weight. With plurality voting, if your candidate doesn't win, then you've lost. And we certainly feel, in this current day and age, and this is not just politics, this is across the piece, that if whatever you voted for does not win, then you feel like you personally lost. What this option does is allow for: yes, your first choice might not have won, but your second choice might have. Or, if you get all the way down to the bottom and your vote had nothing to do with it, then maybe you're just really bad at making choices. But in the grand scheme of things, you're ranking, so at least you can say, well, actually, I preferred that one over the other. So it does give you, as a person casting the vote, more agency to make you feel in accordance with the outcome. Which is interesting. Right, we have this post, a national post, called the Police and Crime Commissioner, and we use a version of it for that. And it's interesting seeing that working, because I think people, whilst they don't necessarily agree with the position, quite like the voting system. Yeah, well, let's talk about the performance here. Apparently it didn't fare much better. In fact, 32% of teams identified the correct suspect in the unofficial vote, and 7% of those groups had a majority of members rank the right suspect. So compare that 32% with the 31% of the plurality group. And they make the point there: we were surprised that the ranked-choice voting groups did not outperform the plurality groups. There's a lot of evidence, particularly in politics these days, that ranked-choice voting leads to outcomes that are more consistent with the preferences of the electorate than plurality voting does. That's why we've seen so many political elections move towards ranked-choice voting.
But ranked-choice voting is generally better at revealing the true preferences of the people, not necessarily at getting to the exact right answer. When people are making decisions at work, you're more concerned about getting it right than making sure it reveals what everybody thinks. And I think that's absolutely right, isn't it? That's a much better way of saying what I said at the beginning: when we're trying to do stuff and get to an answer that has a definitive goal, that's not what voting does. Voting gives you an opinion, and that's what this is doing. So yeah, I completely agree with them. So the multivoting is where this gets interesting, and this might be a result of the way that they looked at this, but the multivoting groups started stronger, with 30% of groups having most members choose the most threatening suspect. And what that means for multivoting is that that suspect got some of their ten votes, right? They now have more options to use, and with one of those, they have chosen it. And in the final vote, 45% of teams identified the most threatening suspect. So we're up, based on this one change of giving everybody multiple votes and allowing them to split them across multiple candidates and weight them appropriately. Am I getting this wrong? I could still flip a coin and have a better result. No, wait, that's wrong, because we're choosing between three options. It's three, isn't it? Not two. So I'll take that back. When you think about group decision-making, it's probably a better option than either of the other two. That's, I think, what we're trying to argue here, or what the article is arguing. And I think we're actually agreeing, because this jumps nicely into the next part, around the way we now perceive voting. We've been saying US politics, but actually I would say almost worldwide politics.
Certainly it's the experience we have in the UK. How do people see polling in and of itself? So, opinion polls and things like that: how do we feel they act, and how do we react to them in our voting activity? We think that polls are biased depending on where they come from and who's pushing them out there, despite the methodologies behind them; largely, most people doing polling do publish their methodology, how they're doing it and how they're weighting things, et cetera. And I'm going to steal your point from this: if we used this sort of methodology in electing leaders, would we get better results? Yeah, or even have a better sense of what's going on in the world. Right, because there are these straw polls that happen before the official vote, and the vote is the ultimate poll, right? So you have these straw polls which, like you said, some of them can be incredibly biased, and some voting polls can be incredibly biased too. And then you also have these aggregators that are looking at all this information and trying to get at a more true picture of what's happening, right? 538 is the one that comes to my mind, and a lot of people have pointed out gaps in their methodology as well. But over the last few political cycles here, there's been sort of a weird mismatch, a polling error, when it comes to understanding the general thoughts and opinions of the public. And so that's kind of the last point here: would this allow us to understand a little bit better? Would this move us a little bit closer to that true understanding of the state of everything if we had this type of information, this type of poll, when we collected it from somebody? I mean, it's a little bit harder to get, it's a little bit more time-intensive, but is it more accurate? But again, I think this goes back to: what is accuracy?
Because when we work in group decision-making and things like that, generally you never get into a workshop position where you know what the answer is. Unless it's a study like what's being run here, there's no way of getting to the end of it and saying, oh, by the way, you're all right, or you're all wrong. Well done, congratulations. We are still picking out people's opinions.
I guess there's an old adage, and apologies to all my pilot friends, that if you have five pilots in a room, you'll get ten opinions. Maybe this lends itself to that sort of thing a bit more. But I think used in the right circumstances, as I said, I've already used it in a particular example, it has value, it has agency to make it work. And it allows people, particularly around those questions where there is not necessarily a simple right or wrong option, not even in the answer, but a right or wrong option in your head. It's like, well, actually, I favor this one, but I can see how this option has a bit of value as well, and I don't want to discount it, so I'll put my one brick on there as well, just to show that I've thought about it, I've considered it. So it does allow you to have variety in what you're doing. I think the other sort of benefit to this approach, especially if you use it for selecting a design or something, selecting options, right, that's what we're doing, would be to, I guess, account for those who take it more seriously than others. You might have that person, like you, Barry, who sat down and said, okay, I'm going to give one brick over here and maybe three bricks over here and six bricks to this one, and that's my ten. And you might just have somebody go, I just like this one, all ten. They don't really put thought into the other ones. Like, this is the one that I like, I'm just going to stick with that one. For them, it's effectively binary. But then you also might have the other people that are like, I like these two: I've got five here, five here. I don't really have a preference between them; those two are it. And so it accounts for, sorry, not just those three, but a varying degree of how many points, or how intensely, somebody is thinking about allocating points to these options.
And so because it caters to multiple levels of effort, we'll put it that way, I think this does make a lot of sense to use in a lot of cases, because you can break it down as much as you like. Well, also, I think when you're optioneering or decision-making, if you've got to make decisions as a group, particularly newly formed groups or groups that are not familiar with working together, and you just need to get to some answers quickly, because sometimes you just need to do that, this could easily be something that would allow people to feel like they've had their say, to highlight the breadth of possible decisions, but still come to a decision that the group would be happy going with moving forward. Right. And I mean, there's a lot of talk about not designing by committee. Would this allow you to do that? I don't know. If we think about the correct option, would this allow for designing by committee? I'm just playing devil's advocate here, stirring up shit. I mean, that's a fair cop. I think it would be a structured way of doing it, wouldn't it? If you've got multiple options, and you've had the ability to lay out the pros and cons, say you're doing a design for something and there are four different ways of doing it, and there's no clear winner: one might be financially better, the other might be a better design or whatever, but they all have equal trade-offs. You have a good discussion around the table and say, okay, we need to come to a decision, weight them with your pros however you want to do it, and just go for it. Do it. You've got five people around the table, ten people around the table; you will come up with an answer, as long as nobody splits their bricks evenly across everything and you end up with a massive stalemate. But that's unlikely to happen, right?
Let's talk about that speed-accuracy trade-off, though, because if you have a serious environment where you need thought to be put into it, right, I'm going to put one brick here, three bricks here, six bricks here, you have those people who are going to think very carefully about how they allocate it, and that might be more accurate. So we're really talking about the speed-accuracy trade-off between the two. If you have a binary choice, it's either this one or that one, and it can be very quick to make a choice if you're passionate about one option versus the other. But with this one, you're going to take a little bit more time, be a little bit more meticulous about it, especially if it impacts you in your day to day, or is going to reflect on you and your performance at work, or whatever is going to ultimately be the consequence of these choices. So ultimately, do you think that, I guess, the slower speed required to contemplate this is appropriate for the level of accuracy that this article is suggesting you would get? Part of me feels that actually this is where most of the value of this comes in. You're presented with the information, and again, going back to their methodology of not having had discussion at this point, you go with the information that you know, because again, not everybody knew all the same information. And actually, this is adding agency to your gut feel, isn't it? Your gut feel is built upon your knowledge and experience and all that sort of stuff. So you know what you know, and therefore the feel of it allows you to do things in handfuls. You're not going to think about every single block; you want to think, well, most of my votes, two thirds of my votes, are going to go over here, and then with the other ones I'm going to put bits over here.
Or, I'm really positive about this: all ten votes are going here. Or, I'm not sure, I'm open to things: three votes here, three votes there, three votes there. What we don't talk about here, actually, is what happens if somebody doesn't cast one of their votes. Is that an option? So they don't cast all their votes. I'm assuming that's not an option here, but when we talk about political voting, you can do that. You don't have to cast all of your votes. Yeah. Just to get back to the article here, the thing that I guess was surprising to them is, like we keep mentioning, and I'm going to read a quote: "The real discovery and the thing that we didn't expect was that multivoting groups would be more accurate before they discussed." This is getting at that point that you made, Barry. "We just assumed they'd all be kind of equal before the discussion and that they'd improve at the end. If people have an option to say, I kind of like option A, I also kind of like option B, that might make them think more before they discuss, which then can help them make the proper decision." And so having that thought put into it before they even start discussing, I think, is why there's some of this improvement as well. But, yeah, I think there's a lot of practical use for us in our day to day. And I'm certainly going to use this if I can find software that does it reliably and quickly and in a way that's easy to use. Because that's the other part: how do you explain this to somebody in a quick and easy way without having to sit down and go, all right, so you have ten points, vote as many times as you'd like for the things that you want, you can use all ten points? It just seems a little complicated for something that you want to get snappy feedback on. I guess it is about context, isn't it? Because we're talking about using it in workshops and things like that.
And I think if you've got people in the workshop, everybody is around the same table, you've got the same desire to get to the same outcome, you want some sort of decision. Therefore, when you turn around and say, you've got ten votes and three options, distribute your ten votes how you see fit, most people would get behind that and do it. Again, we made the comparison to political voting: there are lots of other threads, lots of other agendas going on. And I think if this was seen as vaguely complicated in any way, then that would be used as a way of contesting the legitimacy of the vote and things like that. Educating people in a workshop environment would actually probably be a lot easier than trying to use it in any sort of political sphere or any of that sort of element; I think you would struggle there. Well, I'm thinking about this even beyond... well, politics-related, but not like voting voting. So, my city council sends out emails every now and then saying, give us your thoughts on this thing about town. What are your top choices for us to improve around town? What are your top concerns? And they're not all equal, right? I mean, I might be more concerned about the playground down the street than I am about the homeless in the center of town. I still want housing for those people, I still want them to have a place to go, but my kid is going over here, right? So I might put, like, six votes at the park and four votes for the homeless down the way. But just because I put it second, I don't want you to think that it doesn't matter: it's still getting my other four votes, and everything else on that list still counts. So I don't know. There are ways in which I can see this being useful if it's sent out to people and used in a way that still has impact on their everyday lives, but isn't necessarily political, right?
I think there's a way to do this, especially for gathering feedback on designs or user preferences or anything like that. You send something out and you get people to respond with, "This is the most important feature to me, and I'm going to put all ten on this, because this is the one thing that's missing from this product for me." And that allows you to prioritize on the back end a lot more easily when you have that disparity in votes. When you see, oh, there's 400 points for this one, that means that of the 1,000 people we sampled, for many of them that is their number one, and the next one is 200 points, half of that. So that's where I'm thinking this might be useful: prioritization of feature sets, prioritization of wants and needs for a community, that type of thing. No, I think we're coalescing around this idea that we've already got examples of where we've used it previously, that we like the methodology, we think it's got value, it's got agency. What this article does for me is actually highlight that it has value even when used earlier on than I would possibly have done before, as an initial tasting of the air, as it were; it has huge value and will get you fairly close to the right solution or the coalescing of opinion. I have a couple of loose ends that I want to address before we get out of here. First thing: I would put all my votes into homeless camps. Just saying. I wouldn't put them toward the park down the way; I used that as an example. But I want to make sure people know that I would value the homeless camps over the park. That's the first backtracking already; you've already changed your political agenda. I just don't want somebody to at me on Twitter, which is a whole show right now. And I really hope that's next week's, I guess December 1 story, because there's a lot of stuff going on at Twitter right now.
The other point that I was going to make is that, yes, I'm going to take this and I'm going to use it. And I'm almost wondering if we use this like you said, earlier on in the process: you do it twice and you make your own study of it, and you show the stakeholders and say, "Look, we're going to take all your results early on. We're not going to show you the results. Let's all talk. Let's do it again. Look, they're the same. Do we need to do this every time?" And if you do that enough times, people will just be like, "Yeah, I guess you're right. We don't need to do this every time." You save so much time that way, depending on the value of the conversation. Yeah, I guess that's true. Do you have any other loose ends that you want to talk about before we get out of here? It's really interesting. It's nice, because we do often talk about, I guess, some really basic papers at times, or articles that just seem to fill the gaps in our knowledge or put some evidence behind what we kind of knew already. Whereas this one, the whole nuance behind it, about when you time that vote, I think has been really enlightening and a real game changer. So it's quite nice to have what seems like a really basic article that's actually got a bit of nuance behind it that could change the way that we work. So, good choices. That is refreshing. Thank you to our patrons this week for selecting our topic, and thank you to our friends over at the University of Washington for our news story this week. If you want to follow along, we do post links to all the original articles on our weekly roundups in our blog. You can also join us on Discord, where we post those, and you can do all the discussion on those stories and much more. We're going to take a quick break. We'll be back to see what's going on in the Human Factors community right after this.
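The multi-voting tally described in the segment above, where each respondent distributes a fixed budget of points (ten, in the hosts' example) across the options and the options are then ranked by total points, is simple enough to sketch in code. This is a minimal illustration, not any particular tool's implementation; the option names, ballots, and the ten-point budget are all hypothetical:

```python
# Minimal sketch of a multi-voting (cumulative voting) tally:
# each participant distributes up to POINT_BUDGET points across
# options, and options are ranked by their point totals.
from collections import Counter

POINT_BUDGET = 10


def tally(ballots):
    """Sum each option's points across ballots, ranked high to low.

    Each ballot is a dict mapping option -> points; points must be
    non-negative and sum to at most the budget (casting fewer than
    all ten points is allowed here).
    """
    totals = Counter()
    for ballot in ballots:
        if sum(ballot.values()) > POINT_BUDGET or any(v < 0 for v in ballot.values()):
            raise ValueError(f"invalid ballot: {ballot}")
        totals.update(ballot)  # adds this ballot's points to the running totals
    return totals.most_common()  # list of (option, total), best first


# Hypothetical ballots echoing the city-council example from the show.
ballots = [
    {"park upgrades": 6, "homeless services": 4},
    {"homeless services": 10},
    {"park upgrades": 3, "road repairs": 7},
]
print(tally(ballots))
# The point spread, not just the rank order, shows how strongly
# opinion coalesces around the leading option.
```

The useful property for prioritization is exactly what Nick describes: the gap between totals (400 points versus 200) carries information that a simple "pick your top choice" vote throws away.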
Human Factors Cast brings you the best in Human Factors news, interviews, conference coverage, and overall fun conversations in each and every episode we produce. But we can't do it without you. The Human Factors Cast network is 100% listener-supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we want to ensure we're giving back to you for supporting us. Pledges start at just $1 per month and include rewards like access to our weekly Q&A with the hosts, personalized professional reviews, and Human Factors Minute, a Patreon-only weekly podcast where the hosts break down unique, obscure, and interesting Human Factors topics in just one minute. Patreon rewards are always evolving, so stop by patreon.com/humanfactorscast to see what support level may be right for you. Thank you. And remember, it depends.
We love our patrons. Huge thank you, as always, to our patrons. We especially want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. Our Patreon crew really does support the show from the ground up. We spend so much time and resources and money out of our own pockets to make this thing happen, and whenever we get any level of support, it truly offsets some of that, and we genuinely can't thank you enough. This is the part of the show where I like to talk about some of the other stuff that we have going on. We have a website that you may or may not have visited. If you haven't visited lately, go check it out. We have all sorts of fun stuff over there, and yes, our episodes are up there. But with our episodes, we've got a lot of detailed show notes that you might not necessarily know about. So we have links to any guests that were on that week, embedded YouTube videos, so you can see how handsome Mr. Barry is over here. And if you're regularly an audio listener, you can kind of see our faces and whether our voices match up to those faces. But beyond that, I mean, there are a lot of really cool things that we do on our website that you might not be aware of. We do news roundups every week. I mention those every week on the show, but I want to specifically call it out here. We do a weekly roundup of news around the Human Factors world, and we kind of break it down into top stories and just general Human Factors news, which is kind of one-liners, but you can check them out. We put all the links there for you. It's a collective resource for you all. We do those monthly as well. We also have deep dives on our website: how Human Factors and the Olympics go together, or deep dives into some of the episodes that we've actually had on the show, where we go much more into depth on the website. We have guides and reviews coming out very soon, and, as I said, info on guests that have been on the show.
So all of our HFES coverage has a ton of information about our guests; you can go find more about them. There are ways to submit your own stories: if you are a researcher or a student who has a story in Human Factors that you want to submit to us, let us know. We automatically take paywalled content out of our sources because we want it to be free and accessible to everybody, and so if you have something we can share that is perhaps behind a paywall, we'd be more than happy to do that for you. You can also search our entire catalog of episodes, videos, and our short form content. So we have all the YouTube Shorts, the Instagram Reels, the TikToks, whatever the kids are calling them these days, all searchable on our platform. If you want to search all that there, it's cool. We've got conference recasts on there. We've got a ton of stuff, is what I'm trying to say here. If it's been a minute since you checked out the website, go take a look: humanfactorscast.media. That's it. So go check it out. Why don't we get into this next part of the show we like to call It Came From.
All right, It Came From. This is where we search all over the Internet to bring you topics the community is talking about. If you find any of these answers useful, give us a little like wherever you're watching to help other people find this stuff. We've got three tonight. The first one here is by Ctrane 4997 on the UX Research subreddit: "How are you all managing layoff anxiety right now? I've been laid off once. It was scarring. Hearing all this news, how are you guys managing your layoff anxiety?" Barry, have you been laid off before? Isn't that part of the cycle? But anyway, it's not a nice period of time. I've been in that situation where you know there's going to be a layoff happening, and I've been laid off myself. But you've got to look on the positive side of it, and that's all you can do. You can't just sit and try and keep that grip on that job. Get out, start looking at the positive side of things. I was absolutely gutted when it happened to me, for a whole bunch of reasons, but it genuinely was one of the best things that ever happened. It springboarded me into the same domain at a different company and just blew my career up in a really positive way. Nick, what about you? Have you been in this position at all? Thankfully, I have not, knock on wood. With everything going on right now, there are a couple of common-sense things I think you can be doing. You can use this as an opportunity to update your resume so that you're ready to go. Maybe spend a little extra time on the weekends polishing that thing up, because it's been a while since you touched it. You know what, even if you're not afraid of a layoff, why don't you pull that whole thing out and dust it off a little bit? And then you can also start saving a little bit, maybe tightening down your spending.
Those are some common-sense things that you can do, off the top of my head at least, if you're expecting some sort of change in climate with uncertainty. But, I mean, you could also start looking if you want to get ahead of it. And if you have the hubris of, "Well, I'm not going to get laid off, they're never going to get rid of me," maybe rethink that attitude. I don't know. Yeah, but actually, just to follow up: if you're that close to it happening, don't jump before you're pushed, because if you leave, then you miss out on any sort of payout. Yes, go and look for something else, but don't jump. Yeah. All right, next one up here from the User Experience subreddit. This one's by BJJ john, I think that's right; that rolls off the tongue. This one's funny to me. Sorry, this one is hilarious to me. I don't know. Okay. "Does competitor research just end with the conclusion that you want to work for them? Over the last years I've done competitor research, I can't help but think at the end of the process how awesome my competitor is. No matter the problem, product, or experience, I'm left with the overall feeling that I'm not at the best company. Does anyone else feel like this? Has anyone else jumped ship to work for the best-in-class competitor?" Nope, generally not. I mean, the grass is always greener, clearly. I guess when doing competitor analysis, I have looked and gone, "Ooh, I like the way that they do something," and I might have taken the idea and rolled it into what I do, I guess. But no, hand on heart, I can say I've never sat down and said, "Oh, because they're doing this, I really want to go and work with them." There must be a time I've done something, but I obviously don't think that I have. Nick, have you been in a situation where you've just gone, "Wow, they're amazing"?
Okay, so not in a professional manner, not like a competitive analysis that you do at work of other products that are closely related to the thing that you're working on. I've never gone, "Ah, I should be working there," because, like you said, you beg, borrow, and steal the parts and pieces that you feel are going to be best for your product. I have done it with basically the everyday interactions: "Oh, they're doing something cool. I'd like a job there. That'd be kind of cool." But I haven't done it with a competitive analysis, no. This one just tickles me a little bit. I've generally just lusted after companies that offer me a job. I jump at the job, and actually, long-term listeners to this podcast will know exactly which company I'm talking about. But yeah, not through actual work. Yeah, exactly. So, no. Next one here is, again, the User Experience subreddit, by litino. They write: "How many hours do you spend on meetings in one week, on average, as a remote UX product designer?" I'm going to add in researcher or human factors engineer; it's all the same here. Well, not all the same, but it all counts. Barry, in a week, how many hours do you spend on meetings in a remote setting? In a remote setting? Yeah. If I'm working 37 hours a week, which is kind of the standard, and I definitely do a lot more than that, I would guess at least 15. I would say 15, up to about half my time, I'm probably on remote meetings, particularly now, post-COVID. Yeah, I think that would be about right. Yeah. I don't want to put a number to this because it depends for me. There it is, right there, the button. It will depend on which phase of research you're in. If you are conducting user interviews, it's going to be, like, half time; it could be even higher, potentially 75% of your time. In a 40-hour week, you're on calls for 30 hours of it. On an analysis week, it could be the opposite: you could have, like, five hours of meetings, a meeting a day or something. It just depends.
I would say on average it's probably where you said, Barry, anywhere in the 15-to-20-hours range. Maybe a little bit less some weeks, I don't know. 15 is good. 15 is a good number. The only reason I've got a slightly decent answer to that at the moment is because I've been doing some resource planning for myself, and what I've had to do is deal with the ebb and flow of what we do. I kind of think that we work in three-month cycles in terms of the way the projects run, and so I averaged that out over a three-month cycle to get roughly what my utilization is. So when we're talking about it on a week-on-week basis, as you say, it's really difficult. But fine: if you want to look at what it is, look a bit broader than that and see where your true work cycle flows. Yeah, that's a good point. I know some software will tell you as well. Google Calendar, I think, tells you, and I think Microsoft Teams tells you, "Hey, you spent this many hours on work last week," or "in meetings last week." So they're sensitive to that, and they want you to work more and talk less, is ultimately what that's coming down to. All right, getting into this last part of the show: it's just one more thing. One more thing. Barry, what's your one more thing? I'm so shocked that we've made it to the end of this and the technology's held together. My one more thing this week: we mentioned Twitter earlier on, and I see different people are looking at alternatives, so I trialed Mastodon this week. It's a federated social media platform which, as they call it, all the lefties are going to. And I found it incredibly difficult to use. The idea that you have to do so much pre-prep to get any sort of meaningful newsfeed, I found really difficult. So I think I'm missing a trick somewhere. I'm clearly not using it properly. But the idea is a social media feed that's not just a main feed of stuff that the algorithms push at you.
What they're trying to do here is get you to almost develop your own algorithm by making these selections. But it's so difficult to get into. So I would encourage anybody else to go and have a look at it and see whether I'm just being completely moronic over it and missing something simple. But for being the next big thing, it's got a long way to go. Yes. Anyway, that's fine. What about you, Nick? What's your one more thing? Well, I was mentioning last week... not last week, last week was off. The week before, I mentioned I was going to be in the woods alone with my family, with no internet, during election night. And would you like to guess what happened? I got a little signal, just enough to refresh Twitter every couple of minutes to see what was coming in. I couldn't stream video, but I could certainly sit there and refresh every five minutes, and that's what I did. My wife and son went to bed super early because it was a long day of driving, and, you know, so I just sat there with the fireplace on, in the dark, with my Twitter feed on, just refreshing in the middle of the woods. That was me. And it was definitely a different experience for election night than I'm used to. Like I said, I usually have up my three-headed monitor display with something over here, something over here, something there, TV on, streaming, phone on, different resources just going, going, trying to figure out the latest. It still worked out okay, but yeah, it was harder to track the weird counties, and there are still some races that are super close that are still potentially coming in. Anyway, cool. That's it for today, everyone. If you liked this episode and enjoyed some of the discussion about decision-making, I'll encourage you all to go listen to episode 242, where we talk about how top reviews can help sway consumer decisions. We'll actually throw that out as the recast for next week, so stay tuned for that. There you go: comment
wherever you're listening and tell us what you think of the story this week. For more in-depth discussion, you can always join us on our Discord community. Visit our official website; like I mentioned, tons of cool stuff over there. Sign up for our newsletter. Stay up to date with all the latest human factors news. You like what you hear? You want to support the show? There are a couple of ways you can do that. One: thank you, just keep listening. That's what you can do. You can always leave us a five-star review; that actually helps other folks looking for shows to find us. Tell your friends about us; that is something that will really help the show, and it is free for you to do. "Hey, have you heard about Human Factors Cast? It is a cool podcast where they talk about things like voting and, I don't know, some other cool stuff. Robots, AI, decision making, it's all there. It's a cool show." Tell your friends. And if you have the financial means, you can always support us on Patreon. We'll never say no to your money, and we will appreciate you forever. As always, links to all of our socials and the website are in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they want to talk about your really conservative views on politics? Yeah, well, you can always find my really conservative right-wing views on Twitter at basin to Skull K; you can find me on Mastodon as well, but I've got no idea how. But if you want to listen to some in-depth interviews with players in the field of human factors in and around the domain, then come on over to my podcast, 1202 The Human Factors Podcast, at 1202podcast.com. I may have pushed him too hard, folks. He's going to be on Truth Social by next week. As for me, I've been your host, Nick Rome. You can find me on our Discord and across social media at Nick Rome. Thanks again for tuning in to Human Factors Cast.
Next time, it depends.