July 22, 2022

E252 - The future of wearables is...socks?

This week on the show, we talk about how smart textiles can sense how their users are moving. We also answer some questions from the community about red flags in job descriptions, which part of HF and UX we’re bad at, and difficulties recruiting candidates for interviews.

Recorded live on July 21st, 2022, hosted by Nick Roome with Barry Kirby.

Check out the latest from our sister podcast, 1202 The Human Factors Podcast, on RAF Safety and Just Culture:

 

News:

 

It Came From:

Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!

Vote Here

Follow us:

Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp
  • Neil Ganey 

Support us:

Human Factors Cast Socials:

Reference:

Feedback:

  • Have something you would like to share with us? (Feedback or news):

 

Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.

Transcript

 

Welcome to Human Factors Cast, your weekly podcast for human factors, psychology, and design.

 

 

Hello. What's going on, everybody? This is episode 252. We're recording this live on July 21st, 2022. This is Human Factors Cast. I'm your host, Nick Roome. I'm joined today by Mr. Barry Kirby. Hello there. Okay, Obi-Wan Kenobi. Apparently I'm sitting across the way from Obi-Wan Kenobi. This week on the show, we're going to be talking about how smart textiles can sense how their users are moving. We're also answering some questions from the community about red flags in job descriptions, which part of human factors and UX we're bad at, and difficulties recruiting candidates for user interviews. But first, hey, next week, everybody, we're going to be off. We've got some stuff that is going on. Barry's going on vacation. I'm just taking a break. But it's just a break from our normal programming. We are still going to have something great for you in this place. I had the chance to sit down with Joe Keebler, who's the chair of the International Symposium on Human Factors and Ergonomics in Healthcare, to talk about the conference this year. But we switched up our normal conference coverage a little bit to include some great discussion about the healthcare field as a whole and where we could go in the future. And, man, I think we got way out there. At one point, robot surgeons on long-distance spaceflight were brought up. So, I mean, that's how far out there we kind of got. But, yeah, it's a great discussion, and I'm really looking forward to that hitting the airwaves. We really like Joe. He's been on the show before. Funny enough, I've never actually talked to him on the show. He's been on the show, but it was when he went out as our field correspondent, and I think we only talked one other time before this. Anyway, we'll be back with our regular episodes on August 4th, so check your feeds for that interview. Great. Barry, what's going on with 1202? So, 1202?
Eventually we got the episode up, because I had a slight technical inability to actually upload the audio, which is a problem with podcasting. But once I uploaded the audio, about ten minutes later we got the show, where I talked to the RAF Safety Centre. So the Royal Air Force over here in the UK, they have a safety centre. And we spoke about just culture, which is not a new thing, but it's certainly becoming more popular, about how organizations deal with blame and how people feel about reporting. So the RAF has been going through this cultural evolution, you could say, over around a decade or so. And so we've got an insight on what they've been doing around that and how they've done that, but also how they treat and deal with human factors and human factors training as a whole, and how it's positively affected the organization. So that's live now. Go and have a listen and let us know what you think. Yes, but you're all here for the news, so let's get into it.

 

 

That's right. This is the part of the show all about human factors news. Barry Kirby, what's the story this week? The story this week is smart textiles sense how their users are moving. So MIT researchers have used smart textiles that snugly conform to the body so they can sense the wearer's posture and motions. Through the use of these smart shoes and a compression mat, machine learning algorithms can measure and interpret data from pressure sensors in real time. By incorporating a special type of plastic yarn and using heat to slightly melt it, the researchers were able to greatly improve the precision of pressure sensors woven into the multilayered knit textiles, which they call 3D knits. The multilayered knit textile is composed of two layers of conductive knit yarn sandwiched around a piezoresistive knit, which changes its resistance when squeezed. The machine is used to stitch this functional yarn through the textile in horizontal and vertical rows. Where the functional fibers intersect, they create a pressure sensor. The pressure sensor data is then displayed as a heat map. Those images are fed to a machine learning model, which is then trained to detect the posture, pose, or motion of the user. Based on the heat map image, the machine learning system was then able to predict motions and yoga poses performed by an individual standing on the smart textile mat with 99% accuracy. Now that the researchers have demonstrated the success of their fabrication technique, they plan to refine the circuit and machine learning model. So, Nick, what are your thoughts on our socks being smarter than we are? This is one of those stories where I looked at it and went, yeah, we have plenty to talk about. And then it was chosen. And I go, what are we going to talk about? Because in some ways it is very simple. They've created socks that sense the wearer's intent to move, and that seems like a very simple thing.
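The pipeline described above (a grid of pressure sensors rendered as a heat-map image, then classified by a trained model) can be sketched roughly as follows. This is a minimal illustration only: the 32x32 grid size, the pose labels, the synthetic pressure data, and the nearest-neighbor model are all our assumptions, not details from the MIT paper.

```python
# Rough sketch of the sensing pipeline: pressure grid -> heat-map
# "image" -> learned classifier. Everything here (grid size, poses,
# synthetic data, choice of model) is illustrative, not the paper's.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
GRID = 32  # assumed sensor resolution of the mat
POSES = ["standing", "lunge", "downward_dog"]

def fake_pressure_map(pose_idx: int) -> np.ndarray:
    """Synthetic heat map: each pose loads a different region of the mat."""
    m = rng.normal(0.0, 0.05, size=(GRID, GRID))  # sensor noise floor
    r0 = 4 + 8 * pose_idx
    m[r0:r0 + 8, 10:22] += 1.0  # pressure blob characteristic of the pose
    return m.ravel()  # flatten the image into a feature vector

# Train on labelled heat maps, as the article describes for each wearer.
X = np.stack([fake_pressure_map(i % 3) for i in range(90)])
y = np.array([POSES[i % 3] for i in range(90)])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Classify a fresh reading from the mat.
print(clf.predict([fake_pressure_map(2)])[0])
```

In the real system the classifier is a machine learning model trained per wearer on genuine sensor data; the point here is only the shape of the loop: read the pressure grid, treat it as an image, and map it to a pose label.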
But then when you start thinking about the applications and where this could potentially play in, it becomes a lot more interesting. And that was kind of my thought process over the course of the day, or over the course of the week, as I selected the story as a potential thing: throw it in there, see what comes back. It came back, and then I was like, but what? And then we explored it a little bit, and now we're going to explore it now. So I don't know. I don't know how I feel about this. Barry, I need to talk to you today about how I feel about this story. How do you feel about this? Well, I'm always here to talk about your feelings, Nick, and how you feel about... sorry. So it's a good job I'm here. So for me, and they do acknowledge it in the article as well, this is not new. I've always had an interest in wearable technologies, and this is truly wearable technology. And I remember going to probably one of the first conferences on this topic around 20 years ago, when they were talking about true wearables, but we were looking at fabrics with lights on them. So it was fashion-based technology. There were some interesting things, like having a heat map on your body, so as you got hotter, the LEDs went from sort of green to red in a sort of network all over your body, which in theory was a good idea, but then you got to the practicalities of it and it was not so good. But all these sorts of sensors. What I think they've done here is really interesting, because as this has been in development over all these years, the sensors have always been a clunky add-on. The sensors have always been something separate, in that respect. So for them to have actually truly integrated them into it, it's not so much, I think, about what they've done with the socks and what they've done with yoga, but actually, what potential is this now going to unleash?
What are the future applications of this that there could be? Because we'll talk a bit later about how they haven't just done it in socks; they've done it in different textiles and things. And that's where I think the exciting bit is about it. So it's a springboard for me. I think we should have quite a good discussion. As long as you remember to switch your mic on. Yeah, mic is on, I'm good to go. Let's start talking about it, because I think the first thing... I'm not going to go in order of our show notes, because that's responsible and I'm clearly off the deep end already. So here's the thing. I want to go to the first place in which I thought about this. I think the first thought I had in terms of application, really, was aviation. So the thing for me is, you are controlling the rudder with your feet. And I guess this is applicable, and shortly after, my thought went to surface transportation as well; you're controlling things with your feet. And if these wearables can detect your movements and detect your intent before you even do it, based on, I don't know, micro movements of the muscles in your feet, it can then adjust the system to accommodate for what it thinks you're going to do. So, for example, let's use the surface transportation example, because that's a lot more familiar to a lot more people. Let's say you're driving down the road and you look ahead of you and you see a car stopped, and so you need to take your foot off of the gas and onto the brakes. Now, the car can understand when you take your foot off the brake, or sorry, off the gas, but it can't understand that you're moving over to the brake. And it's a confirmation then for the system that you are actually intending to brake, and it might actually brake earlier if it detects that you're moving that way.
And so, I don't know, there are some really small improvements that we can make, and really, when we're talking about something like surface transportation, milliseconds matter when it comes to reaction time. And so if you see a ball, you know, coming down the way and there's a child in the street, and you lift your foot off the gas and go over to the brake, it can automatically brake before you even get to that pedal. And let's say this is a hypothetical situation where the sensors on board in a high-tech car are not picking up things in the environment that you are as a driver. Right? That's kind of the scenario here. So that's kind of my first thought of where this application could go. And I mean, everyone wearing these socks while they drive is a little far-fetched, but think about that type of application, where maybe you have defense applications where they are maneuvering different vehicles and you have something very similar going on. Barry, where do you want to go from here? The sort of thing that hit me first was actually around the training application, because I've done lots of things in the past where you have to learn what it is that you're doing and repeat the action time and time again. So I used to do a lot of rifle shooting, but it's always very hard to coach somebody in rifle shooting when you don't truly know. You can coach people and tell them what to do and what you feel, and hopefully they'll feel similar things. But at least with something like this, again, it's the springboard stuff, where we don't just have them in socks; you'd have it almost throughout a uniform, or throughout whatever you're wearing to do these sorts of sports. I mean, it's a good sport to begin with, so you can use it for that, to make sure that you've got a personal reference that you're doing the same thing. And again, so if you're lying in the prone position, that you are in the same shape that you always are, and things like that, so you can get that confirmation stuff.
So that was the first bit, and then I sort of sprang onto sports, in terms of body movement, because when you're doing sort of high-performance athletics, which clearly I do a lot of, it's about how far you can push your body before it's unsafe. So if you're stretching, or maybe you're doing lifting, if you've got your body measurements going on throughout, because it talks about machine learning and the use of AI and things like that in terms of the platform, could it give you feedback that, actually, with the movements you're doing, you're going to strain a muscle or do yourself some damage, and give you earlier feedback than you can normally feel, like, say, when you've pulled a muscle or you've got an ache or something? If it could give you a preemptive warning before that point, so you actually know, actually, I was about to push myself too far, and you've saved your muscles, you've saved whatever, then that could be really powerful. But then also the flip side of that is it could also show you when you haven't done enough. If you've got an ability to, say, you're stretching off or you're, again, lifting something or doing some sort of endurance, if through the sensors it knows that actually you could push yourself that little bit harder, you've got the capability to do so, then it could give you that positive feedback of, actually, no, you can go a bit further. There is obviously a problem then when it gets that wrong and you do end up pulling a muscle, but there is obviously a lot of work to do in that area around that type of thing. And the last bit that I'll steal before throwing it back towards you is kind of what you were saying around how you'd use it in cars, how you'd use it in aviation. Similar could be said for simulation. When you're using simulators to do training and things like that, you could actually use these.
So you don't need maybe all the pedals and stuff that you would in a high-fidelity simulator; you could just have it being able to detect the movements and feed them into a simulation. So you don't need to spend as much money on, say, a cockpit simulator and things like that. You can actually do more around a more medium- or low-fidelity simulator that way, because you're recording the movements anyway; you can get the intent of what they were trying to do and feed it into that sort of thing. So that's my sort of piece around training. Where would you like to take it next? Well, I want to comment on the sort of training aspect of body movement and endurance, because I actually put this under another part of our notes here in terms of personal fitness, and I think it's one of those things where it can cross over in many different directions. And so the fact that we both came up with these applications without looking at each other's notes is actually really impressive. So I actually wrote down improving form when doing things that involve feet, for weight training, right? So if your feet could be further apart or further together. This is personal fitness; this isn't necessarily training for infantry or anything like that. I just think that there's potential application there. There are absolutely personalized recommendations based on the way that you're carrying yourself; these socks would be able to understand what pressure is being put on your body at that time and whether or not you are maintaining good form based on your foot placement and all that stuff. I had a lot of the same points, and I thought it was funny that we both came up with that. And so I guess the next place that I want to go is kind of along the lines of when you said simulation, and for me, big VR guy over here, I'm thinking virtual environments.
So one constant issue with virtual environments is the processing power required to render these environments at a high fidelity, I shouldn't say high enough fidelity, but a high-fidelity render of these environments, because you are looking through a device that doesn't know where you're going to look next. But if you have that tied into some wearables on your feet that could sense, based on micro muscle movements, when you were about to pick up your foot and move in a different direction, it might be able to help the computer render that a little bit better and assist with some of that latency that often contributes to things like motion sickness. And so that, to me, is a huge thing that could potentially be improved by this type of technology. And it links back to training. A lot of training is done in virtual environments through VR. And so when you have that sort of reduction in latency and fewer people experiencing this sort of motion sickness, you're going to be able to move more people into VR and present them with novel situations that they may or may not be able to train for in real life because they are dangerous or unsafe or any other adjective that you want to use about those. And so I think that's another really great application. I'm going to pass it over to you, because I feel like you have something to say about that. Yeah, I was going to say, I mean, that certainly could help with the whole uncanny valley situation, couldn't it? But also really with augmented reality. When you look at augmented reality applications, this is something to do with the anticipatory processing. We might have just come up with a term there; maybe we should copyright that or something. Anticipatory processing. I like it. But the idea is that if you're able to allow the AR to process quicker and know what's going on, then you'll deliver a smoother outcome. Absolutely. I think that's an inspired application of this.
And like I said, that tie-up between that and training, I think the application of this in training in terms of good form, good function, absolutely. But also we've got that rehabilitation side. The one that came to me was people who are looking to re-engage after significant injury, again going for the whole application, rather than having to have a physio there all the time. Can it give you the feedback? Because some people just want to go and do it on their own. Can it give you that feedback to play with? I think there is a lot there. Have you got anything more around the healthcare application that excites you? Yeah, we'll pull in a couple from the article here. Right. So you mentioned rehabilitation. You could track the gait of somebody who is learning to walk again after an injury. Socks could monitor pressure on a diabetic person's foot to prevent formation of ulcers. They also talk about some of the accuracy with these 3D knits and how it could make them useful for applications like prosthetics, where precision is essential, and being able to control those prosthetics. A smart textile liner could measure the pressure a prosthetic limb places on the socket, enabling a prosthetist to easily see how well the device fits. With fitting and adjustment on these prosthetics, it could really help, but I also feel like this would potentially help with something like proprioception down the line. If the prosthetic itself is understanding the intent based on where you're moving it, then yeah, it could definitely have some impact there. The last thing from the article that I'll bring up in terms of healthcare is sort of this type of thinking: being able to apply it to this domain will really enhance injury prevention. It's weird to say it that way, but it will reduce injuries, and it will enhance detection techniques to help evaluate some of this direct rehabilitation.
So I think there are some really good healthcare applications, and it does really excite me in terms of the different domains. Just in this, I guess, couple of minutes that we've been talking, we've already talked on healthcare, we've talked on training, we've talked on virtual environments, surface transportation, aviation, defense. There are so many things that we've already touched on based on this little sock. It's a sock. It is. Because I want to bounce to somewhere else. I want to look at health and safety, because one of the things that we talk about a lot is health in the workplace, safety in the workplace, and that type of thing. Is this something that could be applied to help there? I was thinking about hazardous environments, particularly where you put fences up to say where you're not going to pass, because the machine is in use or something like that. Could this actually help with that type of thing? So if it recognizes that you're about to step into an excluded area, for example, somebody might not have put the gate up properly, or the fence, but actually it's recognized almost as a geofenced area, but to a high level of precision. Could wearing this sock help? The sock recognizes that it's going to pass through this geofenced area that should be a safety area, and it could actually then send an alert. I mean, I guess it could send sort of electric shocks into the feet, but that's a different application. But it could send it to your watch or to your device to say you're about to cross a safety threshold. I think there is that sort of element, but also, if you're not wearing the appropriate footwear, can your socks sense that you're wearing the appropriate steel-toecap boots or that type of thing? So actually helping you with your PPE, making sure you're doing the right thing for the task that you're going to go and do. I just think there is so much application there. Again, it's a springboard.
They weren't talking about that in the actual article itself; it's more about what we can do with it to take it forward. Do you think you could wear them? And do you think it might get annoying to wear clothing that's telling you you're not wearing the right footwear? I don't know. It depends on... there it is. It depends. Sorry, I'm so distracted by that now that I've lost my thought. I think really, though, it does depend on the application and what the benefits are from wearing it all day. I can see a situation where if it, I don't know, saves my life a few times a day because it senses I'm about to step on one of my son's Hot Wheels, then, yeah, I wouldn't mind wearing these all day. I think if I was working in a hazardous environment, I would definitely wear them. Talk about PPE. This is like another piece of PPE that you would have to wear, because it's almost like an anticipatory sensor or something. So, in a way, it's almost like a sixth sense that you're wearing, that you're putting on your feet. That sixth sense is then processed by an artificial intelligence machine learning system that can tell what your intent is based on your proprioception, based on your movement. It can tell more about your proprioception than you can, is what I'm trying to say there. And that's quite remarkable. I think there's... yes, go ahead. I guess there is that bit as well: if you can put it on your feet and make a sock out of it, you can make a glove out of it. Yeah. Therefore, you put your hand into things that you shouldn't be putting your hands into. We're talking about socks here. I know, but yeah, I think it is one of these things. It's like what we've done on other episodes, when we sat there and thought, actually, we're not entirely sure whether we could talk about this for a significant period of time, but actually, when you start looking at the applications where you can play with just something as simple as this, I think it's quite cool. And I never thought I'd see socks as cool as this. These are the socks that you want in your Christmas stocking. Right, the sock inside the stocking. A bit that we haven't covered is we were assuming here that they all work, right? What is going to go wrong when you start getting duff data? Because of the way it obviously works, they sort of highlight it's a mesh and all that sort of stuff. If we are going to start using them, because we're now basically going to sell them into the safety-critical industries as a piece of PPE, what happens if that sort of stuff goes wrong and it starts giving you cues that are wrong? Even if you're an athlete and it starts pulling out weird data, how do you bring out, or how do you warn about, the fact that it's not right? I guess it would have to be through the device, wouldn't it? There's no other way of really doing it.
You certainly don't want it to give you electric shocks or buzzes when it's not working properly. Yeah, you're right, what happens when it gets it wrong? But I do want to talk briefly about sort of the accuracy, because they do comment on this, right? They say that once a model was trained, and they do training on the model for each individual wearer, so that is kind of one limitation right now; it's still in its infancy and it's got to be tailored towards the individual. But once it is trained, it could classify the user's activity on the smart mat. And that's another thing that we're talking about here: it has to be in conjunction with this mat. There's a separate mat, which you could build into certain interfaces, right? Like some of the things that we've talked about. Just taking a quick aside here, but you could build it into things like pedals, or you could build it into a space around a machine that you don't want people to interact with. You could also build it into, like, weightlifting mats. Right, so I'm bringing it back to some of the topics that we've talked about, because that's an important contextual thing that I don't think we mentioned. So there is a mat, but once the model was trained, it could classify the user's activity on the smart mat, so whether they're walking, running, doing push-ups, etc., with 99.6% accuracy, and could recognize seven different yoga poses with 97.7% accuracy. Thinking about the data that goes into this, it's just feet, and it's just a mat that is looking at this. And so when I look at what goes wrong, that's 0.4% for those first things, walking, running, doing push-ups, and then for the yoga poses it's like 2.3%. It's a low percentage. Yes, we do have to figure out what happens when it goes wrong and what the steps are to fix those issues in a mission-critical environment. But that high level of accuracy is really striking. Yeah.
It's striking to me; that is a high level of accuracy, and we are just talking about this from the perspective of this small ecosystem, the sock and the mat. When you combine that with other pieces of technology that could potentially understand a little bit more about what's going on in an environment, such as external cameras that are monitoring a situation, also using machine learning to understand user intent, when you pair it with other things, like, I don't know, even something as simple as a smartwatch that is doing things like heart rate monitoring, albeit less accurately than something else, when they're doing something like heart rate monitoring, activity tracking, when you pair it, you get a much clearer picture. And so I think we're talking about this in a vacuum, but when you start to combine it with other pieces of technology and other solutions, you really get some good coverage and some good applications that I don't think we've even scratched the surface on yet. No, I think you're absolutely right. I think there is this whole idea, and they do say in the article that they've been exploring more collaborative applications. So in collaboration with a sound designer and a contemporary dancer, they developed a smart textile carpet that drives musical notes and soundscapes based on the dancer's steps, to explore the bidirectional relationship between music and choreography. So really what that's doing there is taking movement and pairing it with another input/output, in this case sound and that type of thing. But just the fact that you're interacting with your environment in a different way is fascinating. And you could see other applications. You could take what is, as you quite rightly say, in theory quite a simple technology, although quite complex to build and make sure it works, but it is still simple nonetheless, and see how we can use this and make our environment more interactive.
And your point is a really good one: this on its own possibly has limited value, but combine it with the rich data set that we've got nowadays, because we do have cameras out there, we're all wearing smartphones, watches, all that sort of stuff. Which kind of leads me to one of my last points here. There is a lot of data there, and as with a lot of these papers that we talk about, it's still in its infancy, et cetera, et cetera. But where and how is all this data processed, and is there an opportunity there for how you design all this? We'll need to take into account that all this data is going to be there, and it needs to be time-synchronized and all that sort of good stuff. But is there an aspect here of what could people learn about you if they were to steal this data? Because there are a lot of people talking at the moment, or have been talking, around a lot of people wearing Fitbits and heart rate monitors and all that sort of stuff and putting all that data out onto cloud servers, which is all well and good, but then all these companies are using that data, anonymized, granted, but they're then able to use their vast data sets in order to make computational leaps. And so again, where would this data be stored, how would it be used, and how would people feel about the data privacy? What's the unintended consequence? I don't think that's necessarily a blocker anyway, but I do think it is something that we probably need to think about. Yeah, I agree. And I think that introduces a whole separate cybersecurity threat to this type of technology, especially when you're accessing sort of this machine learning AI database that is predictive in nature, too. If it's predicting what you'll do next based on your movements, that can totally play into, I don't know, I'm thinking of a futuristic world where you're wearing these smart shoes that need no mat; it's kind of all built into one system.
You're looking around and you go to turn, and there's an advertisement that's tailored to you, because there's some system that's listening to your... it's almost like the targeted ads on your phone, but now it's everywhere in the environment on these screens, because it knows where you are looking. And that's kind of this scary future that I think a lot of marketers want. But yeah, I will say, sort of my last point here is kind of looking at the future of research on this. The authors of this paper do mention that they want to conduct tests on smart shoes outside of this lab environment to see how other conditions, like temperature and humidity, impact the accuracy of the sensors. Beyond that, I'm really excited to see where these researchers think there's application outside of the domains that they've already mentioned, outside of the domains that we've mentioned. Somebody listening somewhere is yelling at us, saying, you didn't bring up this domain, and it would be totally cool in this domain. I know somebody out there is yelling at us right now, and I want to hear from you. Like, where is that domain? Tell us. It's super exciting to me, and it's just a simple oversight on our part; we just wish we knew. The future of this is exciting, and yes, there are some things that we've got to figure out, but I don't know, you've turned me around on smart socks, Barry. Are there any other points that you want to make about this article before we move on? Yeah, if they've got some going spare that they want testing, then just send them over. I'm more than happy to do that. Same here. Well, thank you to our patrons this week for selecting our topic, and thank you to our friends over at MIT News for our news story this week. If you want to follow along, we do post the links to the original articles on our weekly roundups on our blog. You can also join us on Discord for more discussion on these stories and more.
We're going to take a quick break, and we'll be back to see what's going on in the Human Factors community right after this. Human Factors Cast brings you the best in Human Factors news, interviews, conference coverage, and overall fun conversations into each and every episode we produce. But we can't do it without you. The Human Factors Cast network is 100% listener supported. All the funds that go into running the show come from our listeners. Our patrons are our priority, and we want to ensure we're giving back to you for supporting us. Pledges start at just $1 per month and include rewards like access to our weekly Q&As with the hosts, personalized professional reviews, and Human Factors Minute, a Patreon-only weekly podcast where the hosts break down unique, obscure, and interesting Human Factors topics in just one minute. Patreon rewards are always evolving, so stop by patreon.com/humanfactorscast to see what support level may be right for you. Thank you. And remember, it depends. Yes, huge thank you, as always, to our patrons. We especially want to thank our honorary Human Factors Cast staff patron, Michelle Tripp. Patrons like you keep the show running, keep the lab running. And speaking of the lab, we have been hard at work on Human Factors Minute. It's our supporter-only podcast where we do a bunch of research on a section of Human Factors, UX, or HCI and break it down in, theoretically, a minute or less, but looking at some stats, you're actually getting a little bit more than a minute on average. So we're hard at work over here. Barry, last time we checked in on Human Factors Minute, I think we were at 121 episodes. You want to take a guess as to how many episodes we have now? It's going to be more than that. So, I don't know, over 130? Over 130 is a great number. In fact, we talked about this eight weeks ago on the show. Sorry, ten weeks ago on the show. We do the check-in of the Patreon stats every ten weeks, which is all interesting. I didn't get that.
I should have realized my pattern matching isn't very good. So if you think about it, right, we should be at 131, right? Ten more weeks. No, we're at 136, because we did a whole bunch of episodes for Pride that we released out publicly. So we're at 136 episodes with a total run time of 2 hours, 49 minutes, and 10 seconds. So really exciting. And then we also have an average length of 1 minute and 15 seconds. Like I said, you're getting a little bit more. The longest episode and shortest episode really haven't changed much. We still have the Human Factors and Ergonomics Society Surface Transportation technical group and the HFE TAG on system safety, health hazard, and survivability, three of them at 1:59. We're trying really hard to come in under that two minutes, so that way we can say at least the leading number is one minute. And then our shortest one was on the Aging technical group at 40 seconds. Human Factors Minute is something that I'm incredibly proud of. It is a passion project for not only me, but so many people in our Human Factors Cast Digital Media Lab. They've authored many of the scripts; even though Barry, Blake, and myself read them, they've authored many of them. And yes, this is still an exclusive place where you can hear that Blake content. We're still working on that. We're still working on that, I promise. Sorry it has to be behind a paywall, but anyway, let's get into this next part of the show we like to call
That's right, It Came From. This is the part of the show where we search all over the Internet to bring you topics from the community. That could be anybody; as long as it's talking about Human Factors, it's good. If you find this stuff useful, give us a like wherever you're watching or listening to help other people find this content. Algorithms, yada yada. Anyway, we got three tonight. This first one is by Regina Squirrel on the UX Research subreddit. They write, "Red flag in the job description. Wouldn't want to work for a boss like that." They then submit an image with a portion of this job description circled. The part that's circled is: "The candidate must have a minimum of five years cumulative experience performing research duties. Any gaps in between projects or assignments do not count towards the five years." Barry, do you think this is a red flag? Yes, it is. But I've got no problem with people posting adverts like that, because of exactly what it's done. It's really highlighted to me, I don't want to go and work there. If you're being that specific, they must have a reason for saying that, and for whatever reason they're bringing it up. I think it works because, as it's been highlighted already, people think it's a red flag. They're already sitting there going, well, actually, I don't want to apply for that because it doesn't sound like what it is that I want to do. And it might be completely wrong; they might be an amazing company to work for and all that sort of stuff, but they must have a reason in their heads about why they've come up with that. I think it's great because I think it gives you good warning of what it is you're going into. If you applied, you cannot go into that job interview and not think that they might be potentially difficult to work with, is what I get from that. What do you think? Look, I think so.
First comment I'll make is that there were a lot of people in the comments section of this Reddit post pointing out other things that were bigger red flags than this, like the "six to twelve plus months" duration with "high possibility of extension." That means six months; that means they're not going to keep you. But to me, this type of thing means that they've run into an experience where, and we'll talk about this a little bit later in another It Came From, they've run into somebody who didn't quite meet their expectations for what they wanted within this role. I would imagine it was somebody who came forth and said, I've worked as a researcher or human factors practitioner for X amount of years, but then I switched over and did this other thing and then I switched back, and they considered it five years of research or five years of experience, and they're saying it doesn't count because they've run into that situation. And to me, that screams that they don't know exactly how to articulate the type of person that they're looking for, because it seems so sloppy to ask for it this way and to lead with that. To say you need five years of cumulative experience, I think that's pretty well understood, and you just had a bad experience with somebody. I don't know. I feel like this also could have not been written by the people who are hiring; this could have been written by a recruiter, this could have been written by HR, this could have been written by any given person in this company, depending on how big it is. We don't really have that additional context, but to me it screams of: we've had a bad experience and we're trying to control for it by putting more words in our job description. Yeah, I think quite possible. I mean, the rest of the spec, once you get down to the scope of what it's meant to do, actually is quite well detailed. I think it gives you a really good insight into what they want.
So they clearly do know what they want, and like I said, for whatever reason they're sort of highlighting right up at the top that they don't want this. So I think if people are scared off by that, then it automatically gives you that idea of, well, actually, don't apply. All right, let's get into this next one. This one's by CAS 18 cash on the User Experience subreddit. They say, "Interview question: which part of UX are you bad at? This caught me off guard. I want to know how you see it handled when you're interviewing someone for a mid career to senior role." So I just thought this was a great springboard. Barry, what part of UX or human factors are you bad at? Well, I'm not bad at anything, Nick. I'm possibly not very good at admitting what I'm bad at. It's not necessarily that I'm bad at things; I think there's things I'm less experienced at. So human safety analysis, things like that; human reliability analysis is stuff that I haven't done as much on. I think there's elements of domains I haven't got that much experience in. So, having done some stuff this week, I realized that there are certain sectors that I actually could be applying human factors to, but I just have never done it. So there's nervousness about pushing that sort of stuff. But I think this is one of the things I do like about the human factors world: the principles stay the same. Trends come and go, there are flavors of the month, there's different bits, but actually, if you've got core principles at heart, generally you'll get yourself by most things. There must be something I am really bad at. I might blurt it out later on, halfway through. It was a tough one, because clearly I'm not brilliant at everything. Did this catch you out too? Right, I think it has. There are bits I'm not very good at, but that's not because I'm bad at them, just because I've not done them. And we have to think, well, what about you, Nick? What are you terrible at? Okay, they said bad, not terrible.
So look, I picked this question and I said, oh, that's a great question, I don't know how to answer. But it's a great question, and I also don't know how to answer because, yes, being able to identify our own flaws is a skill, and so if I had to go with something, it'd be identifying my own flaws. But seriously, I think the thing that I struggle with a lot of times is switching domains. And I know you kind of mentioned it, but being able to understand, even at a base level, what's going on in a particular domain that's complex. So, like, when I shifted to defense after being in electrical for a while, I had a really hard time understanding everything about military structure, and that's very complex when you think about the relationships between people. And so there were some things that I could learn, but then there were things that I needed to learn through experience and actually talking to people, and that's actually one of my most enjoyed parts of the job: talking to people and understanding exactly what they're doing. I think my weakness is a strength, but in reality it takes me a long time to truly understand what's going on. I recently made the shift to supply chain logistics, and I just now feel, nine months in, that I'm starting to kind of understand it. And there's plenty of materials out there that you can read and try to understand, but until you really see the whole thing in action... that to me is kind of my weakness. But yeah, you're right, there are other transferable skills that go from domain to domain. I know how to talk to users, I know how to set up research, I know how to do those things, but understanding the domain is a little bit of a challenge for me. It's just like switching gears and starting from scratch to go in another direction, it feels like, for me. Have you thought about it any more? Do you have anything else?
Picking my battles can be a struggle. It's the ones where you see things going wrong and you want to solve everything. I've got no problem with prioritizing my work, prioritizing my battles; I know which ones are the important ones to fight, but I still want to solve all of them. And sometimes I can struggle to step back and say, actually, leave it, we need to go and do something else. Stop trying to solve them all at once. And I guess the other one is, as you sort of have it when you go into a new role or something like that, I do suffer from imposter syndrome, massively. So if you go into a new domain, and I've sort of had it fairly recently with the new role I picked up, I've gone into a brand new domain, and a new perspective, I think, is a good way of describing it. And I did start to go, well, what happens if I mess this up? I can't do this job. This is mad. Why do they think I can do this job? And then within a couple of weeks you're like, yeah, of course I can do this. What am I worried about? Then people come and ask you questions. So they're probably two things, and they're not particularly UX-y things; I think they're just general self-management type stuff. But there are two things that can trip you up quite a lot. There we go. All right. Last one up here, from the UX Research subreddit. This is by Churata, and they write, "How often do you find it difficult to recruit candidates for interviews?" So this is actually a senior question. That's really all they wrote; or really, what actually happened is their post got deleted and we couldn't get the raw text.
But I thought the prompt was enough of a good question to ask, because this is a more senior question, from a mid-level management to management perspective of actually hiring candidates, that I think is really important to touch on, because we do know that some of you listen to the show and we don't get nearly enough of these types of questions. So, Barry, what generally is the market like for you, especially from the UK perspective? I can talk a little bit about the US, but what's it like from the UK perspective? It's interesting. There's certainly a lot of roles out there at the moment. There are definitely positions available, but it's interesting because some of them are just open. If you go and talk to people, then people can make spaces. We have perpetual positions open all the time because there's so few people, I think, coming into the marketplace from getting their masters or getting the qualifications and stuff, comparatively few people, that you kind of want to find the right person. So, given that I run a small company, we constantly take our time to try and find the right sort of people. The thing that I struggle with alongside this is there's a big drive at the moment, and I completely get it, about putting salary numbers onto roles, which you should absolutely do. But then I do suffer myself from saying, I just want to meet people. Because actually, you might see somebody who's a brand new, shiny graduate you can see massive amounts of potential in, but you might also have somebody who either wants to do flexible working, or is coming close to retirement with one more job in them, and all that sort of stuff. And all of these people bring different values. And I quite like to recruit on what's going to build my team, what's going to bring some nuance to the team that maybe we don't have, but that's because we're a small team.
If you're a larger team, then sometimes it can be, well, one way to recruit, if you've got that production line type approach. I think it is difficult to recruit, only because there isn't enough people in the marketplace at the moment, and also personally we try to find the right people. But even on bigger projects you always see the phrases coming up, must have 6 to 12 months' experience and things like that, and there just isn't enough people out there with the right sort of experience. But that's in the UK. What's it like in the US? Exactly the same, I will tell you. It's so strange, because you go and look at the market and there's this abundance of positions available. It's definitely a workers' market right now. I know some companies are kind of shutting that down and doing some hiring freezes, but generally there's an abundance of positions available, because I think right now we're starting to understand the importance of us in our role, and then there's maybe not so many people available for those roles. And the trend that I'm noticing is that what that means is a lot of companies are promoting from within, maybe people who are or are not ready for those more advanced roles. And because of that, they jump up in title. And so when you are requesting a position of a certain title, you might get candidates that in your head don't necessarily meet the expectations of somebody at that title. And I find that that is one big challenge right now: because of this sort of early promotion in a lot of the workforce, you kind of have to reset your expectations for what a senior in a role is, or what a mid-level role is, or even what a management role is. And so there's this weird adjustment of expectations right now. And because of that, yes, it's very difficult, because even when you find people to apply for that position and you talk to them, it's like, okay, this is not matching the profile that I had.
And then you run into issues like in that first question, where they had to describe very distinctly what that experience was like: five years cumulative, no gaps. So you run into weird situations like that, where those expectations aren't being met, or those expectations, I should say, on the hiring side, aren't adjusted to meet what the market looks like. It's a very weird market right now. It is. We're actually going to be putting out a recruitment drive, I think, in the next month or so, because we're going to do two bursts of recruitment over the next twelve to 18 months. And one of them, a bit like that earlier one, has a language element. So we've got a similar thing where, actually, I want a human factors practitioner of any sort of standard, to be honest, as long as they fit the team, but we want them to speak Welsh, because of where we're at in our geography and this, that, and the other. And funnily enough, there's not that many human factors practitioners out there in the first place; to find one that also speaks Welsh is going to prove incredibly difficult. So the more precise you end up being on your job ad, the less chance you've got of filling it, just because of the way the market is at the moment. It's a careful balance on the more senior side to write up those job descriptions. All right, well, why don't we go ahead and get into one more thing? We don't need an introduction for this. Barry, what's your one more thing this week? Well, I've started a new sport. Well, I say new; it's archery. It's something I did years ago, but these past couple of weeks I've got back into it with a beginner's course and things like that. I've got a very achy shoulder because I was doing it last night, but it was good fun. But what I found about it is it's a really good way to unwind, because you have to focus on what you're doing.
You have to sort of clear your mind to be able to focus on basically putting arrows down the range and things like that. And I'm just really enjoying, at the moment, picking up some new skills. It's, I guess, a physical thing to do, but without anything like running and things like that that make you exert yourself. So it's quite a strength and a bit of a stamina thing. And yeah, I'm thoroughly enjoying it. Yes, you got tired of hitting bullseyes on the podcast, so you needed to hit bullseyes in real life, right? Something like that, yeah. All right. For me, my one more thing this week is new Internet. I don't know if you've all been noticing, but Barry, how's my audio quality been tonight? You've been mostly okay. Mostly okay. Mostly okay, except for the way you messed yourself up by leaving yourself on mute. Yeah, so that's really the only issue. Color me surprised, because, and I'm not going to mention the company, but there's a company that gives you service where they give you basically a router that connects to mobile Internet. I'm on 5G right now, having a podcast with you, streaming everywhere. We've been taxing this system. I unplugged my other Internet; I didn't stop paying for it yet, I'm still testing this, and the real test is if I can podcast on it. Color me surprised, because I am getting better speeds on 5G through this thing, because we're less than a mile from a tower. It's like right there. And I don't know, it's killer speeds, more than we're getting on our wired line. There's just a couple of quirks with it that make it harder for things like VoIP, which is voice over Internet protocol, for a lot of what I do for my job, meeting with people. And so it's almost there, and I'm very excited about it. And it's so good. It's just that little bit that's holding me back. And it's so cheap, too. Anyway, that's my one more thing this week.
Yeah. All right, that's it for today, everyone. If you liked this episode and enjoyed some of the discussion about socks, we encourage you to go listen to... wearables. Yeah, there you go. Try listening to episode 201, mounted wearables for human computer interaction. See what Meta is doing over there. Comment wherever you're listening with what you think of the story this week. Did you enjoy me messing it up? I don't know, I did. For more in-depth discussion, join us on our Discord community. Visit our official website. Sign up for our newsletter. Stay up to date with all the latest Human Factors news. If you like what you hear and you want to support the show, there's three ways you can do that right now. One, give us a five star review. Just stop what you're doing, go leave us a review, make it good, all that stuff. Two, tell your friends about us. Word of mouth really helps us grow. Say, hey, this Nick guy is messing up all over the place; it's like going to NASCAR for the crashes. You should check it out. Or three, if you are financially able and can look past my faults, maybe support us on Patreon. You get access to Human Factors Minute. As always, links to all of our socials and our website are in the description of this episode. Mr. Barry Kirby, thank you for being on the show today and carrying me once again. Where can our listeners go and find you if they want to talk about where they can put their socks? If you want to go and talk about socks, you can find me on Twitter and across social media. Or if you want to hear some of the interviews we've been doing over on 1202, The Human Factors Podcast, then find it at 1202podcast.com. As for me, I've been your host, Nick Rome. You can find me on Discord and across social media at nickrome. Thanks again for tuning in to Human Factors Cast. Until next time: it depends.
Barry Kirby

Managing Director

A human factors practitioner, based in Wales, UK. MD of K Sharp, Fellow of the CIEHF and a bit of a gadget geek.