Human Factors Minute is now available to the public as of March 1st, 2023. Find out more information in our: Announcement Post!
Dec. 16, 2022

E267 - Generative AI: A Disruption and a Game Changer

This week on the show, we talk about the human factors behind AI content generation. We also answer some questions from the community about tools for tracking multiple research projects, self-education, and tips for those just starting their graduate degrees.


Check out the latest from our sister podcast - 1202 The Human Factors Podcast - on The CIEHF behind the scenes - an interview with Tina Worthy:


Our latest Deep Dive is out now!



Human creators stand to benefit as AI rewrites the rules of content creation


You can help choose the news for next week, here:


It Came From:

Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!

Vote Here

Follow us:

Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp
  • Neil Ganey 

Support us:

Human Factors Cast Socials:



  • Have something you would like to share with us? (Feedback or news):


Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.


Welcome to Human Factors Cast, your weekly podcast for human factors, psychology, and design.


Hey, what's going on, everybody? This is episode 267. We're recording this episode live on December 15, 2022. This is Human Factors Cast. I'm your host, Nick Rome, and I'm joined today by Mr. Barry Kirby. Hey, great to be here. There you all are in all your pixelated glory. I don't know why you're coming up pixelated on my end. Anyway, we got a great show for you all tonight. We're going to be talking about this interesting intersection between content creation and artificial intelligence. We got a lot to say on the topic, or at least I do. We also answer some questions from the community about tools for tracking multiple research projects, self-education, and tips for those just starting their graduate degrees. But first, got some programming notes here. I wanted to let you all know that we have a new deep dive out there in the world on our blog, so you can go check it out now. It's a deep dive into the human factors of fitness technology, and this is written by one of our Human Factors Cast Digital Media Lab members, Morgan. Morgan spent a lot of time and effort researching this. We like to think of them as scientific journal articles with healthy peer review and without the publication. So go check them out. I don't know, it's a good read and gives you a good sense of what's going on in that space, especially since the New Year is rolling around and everyone's got those New Year's resolutions to get that fitness technology out of the boxes it's been in all year. Hey, we do have some updates on stuff. We also now have Choose the News. That is something that we give to everybody, I guess, if you want to say "give". We've traditionally done it on multiple platforms, but now we're kind of consolidating down to one platform. We're just doing it on our Patreon now. However, just because you're not a patron doesn't mean you can't vote in it. All you have to do is go to the post. It's open to everybody.
We can see our patrons when they vote and when the general public votes. It's just a way to kind of consolidate down; it's a lot of work for us to look across all of our platforms every week. So go check out our Patreon, whether you're a supporter or not, and vote on the post for next week. You can vote on what you want to hear. Also, just another quick note. I feel like I'm rambling here at the top. Anyway, I want to get to the stuff. Here's a preview of the upcoming schedule: we're on next week, and then after that, we'll do our annual recap episode the week of the 29th. Those are always a lot of fun. Barry and I get all festive, and we talk through all the stories that we didn't necessarily cover on the show throughout the year. And then we are back here on the 5th of January with another show. Barry, what is the latest over at 1202? So at 1202 we're still listening to Tina Worthy, who's the CEO of the Chartered Institute of Ergonomics and Human Factors. She gives us a bit of the inside edge about what goes on behind the scenes: producing, basically making the Institute run, running the volunteers and the staff, and how on the surface everything looks like it just calmly happens and nothing is too much trouble, while behind the scenes they're running around trying to make everything happen, trying to be as helpful as they can to everybody. So, really great to get that sort of insight behind the scenes. Excellent. Well, we are going to get into the part of the show that we like to call Human Factors News.



That's right. This is the part of the show all about human factors news. Barry, we got an exciting one tonight. What is the story this week? Anybody would think that you're a little bit excited about this one, but it's the fact that human creators stand to benefit as AI rewrites the rules of content creation. So the 150-year-old Colorado State Fair has sparked controversy by awarding its top prize in the digital category to an AI-generated artwork. The decision has raised concerns about the potential of AI to put human artists out of work and the difficulty in evaluating the quality or originality of AI-generated content. Demand for AI-generated content is expected to rise in the coming years, with Gartner predicting that by 2025, generative AI will account for 10% of all data created. However, the rise of generative AI has also sparked debate about its potential to perpetuate existing biases and inequalities and be used for unethical purposes such as spreading disinformation. So Nick, is there any point in us actually turning up anymore? Is an AI version of us just as good? Would anybody even know? Okay, I don't even know where to start with this. I'm just going to give my initial thoughts here. AI is amazing. I've had a peek at infinity over the last week, and the claims made are not unsubstantiated. I'll talk more about it in a minute. But Barry, what are your initial thoughts here? So I haven't been able to geek out as much as you have this week. I've had a look at a couple of different things, and we have played with it a little bit in the past. And for me, from what I've seen so far, it's brilliant at generating content, just putting words on the page. And that's something I struggle with a lot. When you're writing a report or you're writing an article or something, once the words are on the page it's much easier to manipulate them, but getting words on a page in the first place is something I find difficult. So it is very good for that.
But I can also see where it lacks, I don't know, heart, depth, whatever those sort of emotive things are, I guess, that we put feeling behind. So when you look at the text it generates, a lot of it is really good at that superficial level, but can it actually generate an argument? I know when we were doing the show notes, you had some ideas about that, or you generated some ideas about that. I did, and we'll come on to that. But I don't know whether the argument it generates has that weight of passion behind it, or is it just a collection of words that it brings together? So, as I said, I think it's great for certain things, for kicking off a document. I think it will revolutionize, and already is revolutionizing, some of what we're doing. It will make things happen quicker and allow us to do things. The specific art issue: previously, probably twelve months ago, 18 months ago, I'd have said, actually, it's great. You know, having seen some of the pictures it produces, art is art; it's pictures, it's nice graphics on a wall. However, I've got a slightly different interpretation of it now, because my daughter has been doing an art course. Both my daughters have been doing lots more art. I don't know where they get this artistic flair from, considering both me and my wife are engineers, but they've got this artistic flair. And when my eldest daughter came to do her final project at college, if I had just gone and seen the final installation of what she did, I would have been, that's pretty, that's nice, I'll see what it's about. But I knew the story of how she got to that final piece, the thought that went into the different stages and what she did at each of them, because it was all a piece about feminism, and actually about feminism eating itself.
But the thought and the processes she went through to get there made you think: is art at that point more than just the picture? Is it the process? Is it the passion? Is it that artist's story, which you then know went into the picture, and which you just won't get with AI? So AI can create a superficially meaningful and very pretty picture. So I can see where people get concerned: if you just want the picture that looks pretty to go on the wall without any sort of effort behind it, well, it's going to cater to that market, isn't it? If you want something that is possibly a bit more meaningful, or you want to learn about the artist's anguish as they developed whatever it was, then can it do that? I think it will be able to, and that's the scary bit. I think eventually it will be able to convince you that there is something there, but I'm not sure. Yeah, okay. Genuinely, my brain is kind of broken right now, which doesn't happen a lot. In terms of getting my mind blown, the instances where I am this broken are few and far between, just at what is possible here. So let me address that exact process that you've talked about with the AI. You can have it create something, and if it's word-based, text-based, you can have it describe why it made those decisions, what assumptions it was operating off of, and why it got to that point, which is just insane. And then if you think about it, is that art? Somebody put their time and effort into developing this AI, and therefore this AI is a result of somebody's art. Anyway, my mind is blown by this for a variety of reasons. One, there was an expertly crafted blurb that was exactly 200 words, and you chose the AI's version over mine, which is... that's fine, but here's the thing. Yeah, go. I'm just saying, here's the thing: mine had a little bit more context around basically what was contained in this article.
And I think the main difference between mine and the other one is that it brought in certain things that I didn't, about basically predicting how much content generation will be AI in the future. And mine was more focused on the creative process and a little bit more toward what that means for creatives going forward. So those are kind of the main differences between the two. But Barry, I'm curious, why did you choose the one that was not mine? I was going to choose yours, but then I thought, given what we're talking about, I chose the AI version purely to set the scene, really, because I think at that high level, what it generated was spot on. We generally have a specific format that I generally ignore, but you can tell that actually it's stuck to it. It's done exactly what we asked it to do. And on the face of it, the blurb it gave was perfectly adequate. But the blurb that you put together, I think it added more depth, it added more heart to it. So it picked out different things, like the fact that it was a fine art competition, that it announced winners; it had a bit more to it. Now, I guess the argument is that we've given it the bigger word spread, because yours wasn't 200 words and mine was 200 words, and it did it more succinctly than mine. Oh, right, okay. That's what I'm saying. I picked one that was exactly 200 words. Yeah. The main reason, I think in hindsight... well, no, I still buy why I chose that one, is that we might have messed around with this before, and I've never chosen the AI blurb in the past. One of these days, I'm just going to write one, and you're not going to know if it's AI or me. And that's it. I mean, you could easily turn around now and say, well, actually, I swapped the names, right? I swapped them. I wouldn't know the difference. I think at the moment, maybe I would. Or maybe I'd be like, oh, Nick's game is off tonight.
But unless you're actually looking for it, I don't think you'd truly know at this level, writing a précis of an article or something like that, because it hit all the high points, it added some data in there. It was perfectly usable. Yeah, I want to talk a little bit about my experience over the last week, because I think it's important for people to understand what I mean when I say I'm peering into infinity. This reminds me a lot of when Google was new. If you can remember that, then welcome to the club. If you can remember when Google was new, it was one of those things where you sat there and thought, wait, can I search this? And then you search it and, whoa, that came back. And you keep doing that. And this feels very similar to me, in the sense that I am making prompts in different ways to try to push the system to do something, except in this case, it can bring back any number of things, and it's conversational. We actually have a story in the hopper for next week. If you all want to hear about it, let us know by voting. But it's on the UX of this program that I've been using, ChatGPT, and it's a great article on that. But basically I've been playing around with this over the last week, and I need you all to understand where I'm coming from. So let me just talk a little bit about some of the things that I've been doing behind the scenes here at the podcast with the help of AI. Okay? So it's no secret that we have a lab here. We have a lot of different things in the hopper in the lab. And some of the things that we've been working on have been in development for a long time. It's just a matter of the time and resources that we have to throw at something. So, in terms of podcast ideas, I showed this to Barry before we actually went live. We don't want to release what these are yet, just because we're not quite ready there, but I had drafted up outlines for these ideas that we've had in the hopper forever.
We had AI generate sample scripts based on those outlines, and had AI generate a voice to go with those. It took me about an hour to go from concept to a fully produced sample, roughly a three-minute-long sample of an hour-long podcast, which is just insanity. And Barry can attest to how they sounded. You can tell that they're artificial humans in one way or another, whether it's the words that they're using or the way in which they dictate the words. But it's really impressive. I did the music on those, by the way, Barry; it hasn't got that far yet. But the thing is that that process would normally have taken months or weeks. Like I said, we've had some of those things in the hopper for years and we haven't touched them. But because I had this, and we had a rough outline before with some key points that I could throw in there, it actually filled out some of those other points. And I'm like, oh yeah, that actually makes it flow together really nicely. Human Factors Minute is another project that we have ongoing. And basically one of the downfalls of AI right now is that it's not smart in the sense that it can't bring back facts that are reliable. It has a large sample, and so it'll bring back kind of the middle point of what everyone's saying. But if it's a scientific term, there's maybe not a whole lot of debate about it, unless you're talking about the Myers-Briggs. So we are able to pull some of those things in, at least using it as a starting point for a ton of upcoming things. It puts us in audit mode rather than sort of the hunter-gatherer mode of trying to find facts. And I can come back and say, give me a couple of citations that we can go chase down. Internal documentation: things like white papers, charter agreements, guides, and documentation for our internal processes and procedures that have been sitting on the shelf for months are now written and drafted.
Other uses that we've had around the podcast: blog posts, for outlining, formatting, and rewording things; search engine optimization; announcements, rewording some of the samey stuff that we put out there every week, like, hey patrons; my mind is finite and I can't think of a million ways to say things differently. Thumbnails: you see it in this episode here. Just look at this masterpiece that it brought back. It's got Barry and me right here, according to what the Internet thinks we look like. I mean, you can see some resemblance between these two here, but I look like a slightly demented demon or something. Yeah, I don't know what's going on with Barry, but anyway, you get the drift. It has even generated some of the hashtags that we're using on some of our shorts out there, and it's really helped with the discoverability. So we've used AI in a lot of different ways over the last week and have seen success in a lot of it. And it's just insane. It's insanely powerful, and I don't think people quite realize where we're at right now. This is truly transformative and will change our society. It will change our society, and it's like... anyway, mind blown, yes. And some computer scientists are like, yeah, we've known about this. And I have, too. But it's a different thing when you actually sit down and play with it and you're like, oh, whoa, dude, it did that thing. Let me tell you about the most, I guess, interesting example that I keep bringing up to people. You can feed it information and it'll put that information in a format that you've specified. So what I did for one of my mentees: I was telling them about this and I said, okay, let me just demo this technology. And I said, okay, well, is it okay if I use your resume? And they said yes. And I said, okay, well, I'm going to put in your name and I'm going to say, write me a cover letter for their name, who wants to work as a user experience researcher at Company.
And then, "using the following information", colon, copy their entire resume, paste it in there after the prompt, and it comes back with a cover letter that is not only accurate, but formatted well, describes their experience well, and has all the information right where it needs to be. This thing blew my mind when it did that, because you can feed it that information and it can modify that information based on existing formats that are out there. So I said, don't use this, but take it as a starting point. We can talk about this in a minute, but it introduces a whole other question about ethics and plagiarism and just, who owns this stuff? So I don't know. Barry, where do you want to go? I've talked a lot about what infinity means to me. I think one of the first things to really hit about why this is such a step change, certainly from an HF perspective, is accessibility, because previously AI was sort of the playground of the geeks, wasn't it? The scientists, the software engineers; if you could do command-line prompt stuff, then yeah, you were part of the game, but it wasn't accessible. Whereas now the stuff that we've been playing with, ChatGPT and some of this other stuff, it wasn't DaVinci, was it? It was the DALL-E AI that does some of the pictorial stuff as well. It's now been published in such a way that it's open access, so anybody can use it, except at the moment, because they're at capacity for now. They'll likely charge for this in the future. Just so that we're clear, we're recording this in December of 2022, so if you're listening to this in the future, it's possible that they may have started charging for this, which I would easily pay for. Basically, as you've highlighted with the letter from the resume, it's the ability just to say, I want this in this format, using this information. It's fairly simple, and actually it was fairly free-structured as well, because you didn't have to give it any special formatting or anything.
It just took the information into a text box, effectively, and it starts putting stuff out at you. But then it goes to the next steps. It thinks about, how do I deliver that information? So it's delivered that as text; you can download it as a PDF, you can download it in different formats, so then you can use that. So it's taken something that was really quite unique and made it mainstream. And when you think about the accessibility of this now, then in theory anybody can go and use it, even just to have a dabble. Because, as we said in the pre-show, I've played around with it a little bit, and I didn't realize the really full potential of it. But you could easily see that if students are using this for doing their essays, if bid managers start writing their proposals for work using this, or even some of the letters you write to your relatives at Christmas, for those of us who still do that (I don't, but other people do write letters to relatives and friends at Christmas), you could start churning individual letters out and not write a thing, and yet everybody might think they've got a nice personalized letter. Just the volume of content you can get out of this, while still having enough information in it that it's not completely generic waffle, is mind-blowing. And I guess the big thing as well is that it will be useful. It's not just creating words, it's not just creating filler content; it's creating useful stuff that you can and will use. So that's, for me, the bit that is mind-blowing. But I guess that does take us down to the level of, well, should we hit ethics first? Because ethics seems to be popular at the moment. I really do think so.
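The cover-letter trick Nick describes, a one-line instruction followed by the pasted resume, is really just free-form prompt assembly. Here's a minimal sketch of that idea; the function name, the placeholder names, and the resume text are all invented for illustration, and no particular model API is assumed:

```python
# Hypothetical sketch of the prompt-assembly trick described above:
# a fixed instruction line ending in a colon, then the raw resume text
# pasted verbatim underneath. No special formatting is required.

def build_cover_letter_prompt(name: str, role: str, company: str, resume: str) -> str:
    """Assemble a single free-form prompt for a chat-style model."""
    instruction = (
        f"Write me a cover letter for {name}, who wants to work as a "
        f"{role} at {company}, using the following information:"
    )
    # The resume is appended as-is; the model infers structure from it.
    return instruction + "\n\n" + resume

# Invented example input (not a real person or resume):
resume_text = "EXPERIENCE\nUX Researcher, 2019-2022\n- Ran 40+ usability studies"
prompt = build_cover_letter_prompt(
    "Alex Example", "user experience researcher", "Company", resume_text
)
print(prompt)
```

The resulting string would then be sent to whatever model you're using; as the episode notes, treat the output as a starting point, not a finished document.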



I can attack this from multiple perspectives that I've experienced throughout the week. Okay? I've used this for more than just podcast stuff, and some of that is in my One More Thing, so I'm going to hold off on telling the full details there, but I've used it to play games as well. Okay, let's just talk about ethics in general here, right? So there's the obvious plagiarism piece: is it your piece of artwork? If you generate it based off a prompt, well, you are the person who put in that prompt, but it is pulling from a sample of a million different other artists, right? And so if you say "in the style of" a specific artist, well, then you're explicitly ripping off that person's art style. That art style... people do that anyway. They do. So that's how artists work. People get inspired. I mean, we call it being inspired. I'm in the camp that all of the world's art has been generated from something the artist has either seen or done, and it has been produced by something that that artist has put out into the world. Traditionally, it's been pen and paper, or paintbrush and canvas, or digital canvas and mouse. And you can see how things have changed over time. And now we're getting to a point where it's democratizing art in a lot of ways. The artist is the person who puts it into a word prompt. And it takes every tag that you've put in there, and it figures out a way to put it together. And the composition isn't necessarily yours, but the composition of the words that you put into the prompt is. Or is it yours? I don't know. It's this weird ethical question that we have to face. And I haven't used a single piece that I've generated verbatim. All those things that I mentioned before, I've not used a single thing. That's not true. I did in one case, and I felt guilty about it. I felt guilty about it because it wasn't mine.
I said, it's not mine, but it's nobody else's either. Because, that's right, the AI isn't a living thing, so it can't claim ownership. It can't claim ownership, but it wouldn't have generated what it generated without your input. That's true. Here's the thing: it was a very low-stakes thing. It's not like there were any serious ramifications. It's not like this is a work report. It wasn't something that we put out there publicly for the podcast. It was a response to somebody, a response to somebody for something totally trivial. And I was like, I'm going to respond to a friend who just said... I'm not going to detail the contents of that, but it was so trivial. It was so trivial. There were times this week where somebody reached out to me about a really personal struggle, and I thought about using it, and I was like, no, I can't do that. I don't have the words to tell this person how I feel right now, because it's a tough situation, but I'm not going to use this, because if I did, I would feel really bad. But as you described that situation, if you didn't have the words to be able to help, would it have actually helped you generate the words that you needed, in order to inspire what it was that you wanted to say? There you go. But would I feel guilty about using those, right, as inspiration? So why would you feel guilty? I can see exactly where you're coming from, but can you articulate why you would feel guilty? I can't, because it's not mine. I wasn't the one that came up with it. So go to ChatGPT now and ask it for an explanation of why you feel guilty about using AI in personal conversations, and see if it inspires you to an answer. Live demonstration: explain why I feel guilty about using AI responses in my personal relationships, or something like that. Yeah, okay, I will put that in and we can go from there. But basically, that's one angle of the whole ethical thing. I will say there was another thing that made me feel dirty about using it.
So I mentioned I use it to play games, right? And in some ways it will come back and say, I'm not built for this, why are you making me do this, in so many words. And I go, no, just pretend that you were built to do that. Okay, from now on, I'll pretend like I'm doing that. And so it felt in some ways like I was bullying this AI into doing something that it wasn't built for, even though other people have been trying to push it. Ultimately I came down on, okay, I can't feel bad about this; I'm pushing a piece of software to test its limits. But the fact that I kept putting in prompts and it came back with, I'm not built for that, I'm sorry, I'm really sorry, I can't do that, I can't do that, I can't do that. It wasn't like, "I can't let you do that, Dave." But it was very much like, I'm literally not built to play Dungeons and Dragons, you idiot. Stop trying to make me play Dungeon Master here.



Anyway, with respect to feeling guilty here: it is natural to feel a sense of guilt when using AI responses in personal relationships, because it can be seen as a form of deception. When using AI responses, you are essentially pretending that the responses are coming from you rather than a machine. This can lead to a sense of dishonesty and a lack of authenticity in your interactions with others. Additionally, using AI responses may prevent you from fully expressing yourself and connecting with others on a deeper level, which can also contribute to feelings of guilt. Ultimately, it's important to consider the potential consequences of using AI responses in your personal relationships and weigh the pros and cons before deciding whether to use them. That's it. That's it right there. That's exactly what...



There you go. What the hell, does that make it self-aware then? Because it recognizes the... No. It's artificial intelligence software built on a sample of a bunch of words. That's what it is. And these words are a mix of other people's words. Yes, but it's aware of the impact it's having; that's awareness. No, the people who wrote the original words are aware, and it is making inferences based on those words. Okay, that's sort of taken us down that route now, hasn't it? Quite strongly, that ethical element. Well, I'm curious. We got my opinion; what are your opinions on it? Because I've been living the experience. I'm curious from an objective perspective, relatively speaking. I was thinking about this: say I was doing an essay for education, so I was submitting an essay. Now, obviously, plagiarism in and of itself, and the way that they detect plagiarism, particularly now, is you upload your stuff and they've got scanners that can pick out bits of words and all that sort of stuff. So if you're directly copying, then that's plagiarism, fine. But if you've used an AI to generate the text, there are two levels here. One is that you've given the input; you've told the AI what it is that you want to write about and the facts that go in there. It then generates the nice prose in the scientific format, or maybe the English language format, or whatever it is that you're designing for at that point. Is that plagiarism? Is that bad? Because, going back to what we said earlier, it's still your input. You're not copying anybody else, but you haven't written it all. So it's not necessarily plagiarism, but it's not necessarily your work either. So then you go to that next step of, right, okay, if I use that as a generated skeleton, I then twist it around to make it my own. How much of it do you need to rewrite for it to be truly your own, away from the AI? So that's kind of one bit, around that education piece. The other half of that is around work, around the day job.
If I'm writing a report for a client,



you couldn't necessarily write the entire report that way. But if I wanted to write a two-page executive summary, I could chuck the rest of the report into it and say, write me an executive summary that's witty and engaging, or whatever, and back comes a two-page exec summary. I mean, the amount of time that this could save you; if it was used right, then that's good. But then, I guess, looking at the ethical part of it, is that the right thing to do for my clients? If they're expecting me to have developed... well, it wouldn't be me, it would be my wife, because I don't write as well. But would they expect a handcrafted executive summary, or is it legitimate? Does it matter? Yeah, I think there's a lot of nuance about when and why to use these AI content generators, and when it's ethical to do so. In the case where you say, yes, I'm going to chuck a whole report in there and just see what it comes back with in terms of a summary: you've done the work. It's just writing it up in a way that pulls out the important bits. What about the flip side of that? I write a quick two-page executive summary and say, give me a 50-page report. Go! I'm tempted now. Okay, there's a whole lot of temptation now, of testing this thing. Just in case anyone wants to do that, the way you do it is you build an outline first. You say, okay, here's my abstract; build an outline based on this abstract. And then you have it build out each piece individually, just because there are character limits right now. Although in the future, who knows, you might be able to just say, write me a whole report. I've actually had it write me multiple children's songs and children's books.
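The outline-first workflow Nick describes (abstract, then outline, then each section generated separately to stay under the response-length limits) can be sketched like this. This is a hypothetical illustration: `generate` is a placeholder stand-in for whatever model call you are using, not a real API, and the section titles are invented:

```python
# Hypothetical sketch of the outline-first workflow described above.

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def draft_report(abstract: str, section_titles: list[str]) -> str:
    # Step 1: ask for an outline built from the abstract.
    outline = generate(f"Build an outline based on this abstract:\n{abstract}")
    # Step 2: build out each piece individually rather than asking for the
    # whole report at once, since responses are length-limited.
    sections = [
        generate(f"Write the '{title}' section of a report "
                 f"with this outline:\n{outline}")
        for title in section_titles
    ]
    return "\n\n".join(sections)

report = draft_report("We studied X.", ["Introduction", "Method", "Findings"])
```

The same chunking idea applies to any long-form output: keep each request small enough that the model can finish it, and stitch the pieces together yourself.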



This is weird, and I feel dirty about this right now. I am going to ask it to write me a children's song about human factors in the style of Dr. Seuss. A children's song about human factors in the style of Dr. Seuss. We have one on our Discord; you can go see it, but it's going to come back with something completely different here.



Okay, so here we go. Of the things we do as humans with our bodies, minds, and hearts: we use our skills and senses to make our world a work of art. But sometimes we make mistakes and things don't go as planned, and that's when we need to think about human factors at hand. Wow. Human factors are important for every task and job; they help us understand how we can make things better. Yaba. So let's think about our actions and how they affect the ones we love. With human factors in mind, we can make the world a pleasure place, my dove. Okay, some of them are hits, and some of them are misses. I would have replaced my dove with my love. On the page for ChatGPT now, because I'm at the capacity page, it's got a sample prompt: write a stand-up comedy routine about the status of GPT. So you now need to do: write a stand-up comedy routine about human factors. This is going to be genius. Okay. All right. Is this just going to be the rest of... I hope not. We have some really important points to get to. Barry, let's get to the point here. I'm going to do this, and then we'll move on. Okay. Here. Okay. All right. Thank you for having me tonight. So, have you ever noticed how weird humans are as a species? We do some pretty strange things, right? For example, have you ever seen someone put milk in their cereal bowl before the cereal? It's like, hey, I want to make sure my eggs get soggy before I even start eating. And don't even get me started on our love of putting things in alphabetical order. Like, who decided that A should come before B? And don't even get me started on the silent E in the English language. It's like, hey, let's make this letter do absolutely nothing and just add an extra syllable to words for no reason. But the real kicker, the real kicker, is how obsessed we are with time. We have all these different ways to measure it, and we're constantly checking our watches and phones to make sure we're not running late.
And then when we do inevitably run late, we get all flustered and stressed out. It's like, relax, it's just time, it's not going anywhere.



I think it was doing it on humans and not human factors. This is when I would come in and modify the prompt to have quotes around "human factors" and add "psychology" at the end of it, and it would come back with something much better. I'll just read the first little bit of this one. Hey, everyone, have you ever noticed how we as humans tend to make the same mistakes over and over again? It's like we just can't help ourselves.



How do we account for this, then? We generate this content, and we've already touched on the question of whether or not it's plagiarism. I'd say the jury's still out, but there is a level at which we maybe feel a certain amount of guilt that we're not doing all of the hard graft. But is this just the next stage? When we first got word processors instead of typewriters, when we got typewriters rather than the pencil and the pen, when we started typesetting on a word processor, we were no longer doing what the task was before. So is this just fundamentally the fact that we are still generating the facts, we are still generating the niche bit of content this is based around, and the rest is just flower arranging, just the stuff around it? So is that necessarily a bad thing? That's something I think we need to bottom out as a society, and it possibly just goes into this big melting pot of the impact of AI. But I guess there are some elements here about



what I've got in the back of my mind is one of the things I saw last week on one of the social media platforms, which was promoting an AI writing generator. They gave it this blurb that had clearly been written by the AI, and so many people underneath it were going, oh, paraphrasing here, but: I've read this. This is now going to be the death knell for people writing original content. Nobody's ever going to comment on it; nobody's ever going to use it. But you're now commenting and generating content on the back of something clearly AI-written, so it doesn't quite work that way. And as we've already proven, you don't necessarily know anymore when you're commenting on AI-based things. So does it matter, and what are the impacts here? What is the true human impact of you taking in content that's written by AI? Is it harmless? Is there any harmful aspect to this at all? Yes, there is, if you use it for malicious purposes, right? I mean, I can imagine you could use this for propaganda: fake or misleading news articles, videos, images, deepfakes, misleading social media posts, comments, other types of content that would send people down the wrong direction. And that introduces, by the way, a whole other set of things that we haven't even discussed, which is the training piece behind this. And that's kind of where I want to jump in next, because this to me is the equivalent of handing kids calculators in the late '70s and 1980s and saying, okay, well, here you go, it's a new tool. And some educators that I follow in the space say, I know my students are going to write essays with this, and I'm fine with it. Because what I can do is turn around, feed those essays back into the algorithm, criticize them using the AI, and then grade them on a rubric that I feed into it. And so basically it's a pissing match between who can use it better.
And that's ultimately the argument I've been seeing from educators: this is a tool that's here to stay, and we need to teach these kids how to use it in an effective way, because it's going to be like the calculator versus doing things on pen and paper. Nobody does things on pen and paper anymore, and if you do, it's because you don't have a calculator nearby. But everyone has a device in their pockets that can do it. So I think teaching people how to use this is that whole training and education piece; there's this whole piece of it. And then there are going to be ways in which you can detect whether or not something is AI. It's going to be harder to detect, but you need to be able to understand that type of thing, and that's where some of the misinformation or nefarious purposes come in. I think that's when we really get ourselves into trouble, when we can't detect it. And just not to spend a whole lot of time on this: there are technologies being built on the back end to detect AI-written text using watermarks that are undetectable to humans but are built into the way that the AI writes. And so if you feed the text through an algorithm, a teacher would be able to see that, oh yes, this was generated by an AI. They don't exist now, but they are being worked on. Do we think, then, that the use of this... You used that example of the calculators, which I think is absolutely spot on. Because really, the only reason we write essays, we write articles, we write this sort of stuff, is to prove that we have ingested the knowledge and can regurgitate it, or that we can go and research it. Doesn't this just mean, then, that we need to find better ways of assessing people's knowledge than ones based purely on the written word? Bring on the brain HCI. Exactly. Immediately, everybody's going to have to do a step change.
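As an aside, the watermark-detection idea Nick mentions can be illustrated with a toy version of a "green list" scheme: a secret key hashes each word to a favored or disfavored half of the vocabulary, a watermarking generator prefers favored words, and a checker counts how often a text's words land on the favored side. This is a simplified illustration of the general principle, not any real vendor's scheme; real systems work at the token level during sampling and use a proper statistical test.

```python
# Toy illustration of statistical text watermarking:
# a secret key hashes each word to "green" or "red"; a watermarking
# generator would prefer green words, so watermarked text shows an
# improbably high green fraction that a key-holder can flag.
import hashlib

SECRET = "classroom-key"  # hypothetical shared key

def is_green(word: str) -> bool:
    digest = hashlib.sha256((SECRET + word.lower()).encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is "green"

def green_fraction(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

def looks_watermarked(text: str, threshold: float = 0.8) -> bool:
    # Unwatermarked text should sit near 0.5; well above that is suspicious.
    return green_fraction(text) >= threshold
```

The key point for the classroom debate: the watermark is invisible to a reader, because any individual word choice looks normal, but it is statistically obvious to anyone holding the key.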
We can't be as... lazy is the wrong word, but it's convenient to be able to get people to write stuff down. Now, this has been an interest of mine for years, because we've been a home-educating family and we've been taking some different approaches to reading and writing. When you go to primary school in the UK, by the time you get there at sort of five, six, they try and push you really hard to be able to read and write, because it makes assessing what their education level is so much easier. But in some Scandinavian countries, they don't teach you to read and write until you're like nine, ten, eleven, and they focus on play-based, or I guess non-academic, approaches. And so what this is going to push is a different way of educating and a different way of assessing how we do the education, because we've all got the same tools among us. So, yeah, that's going to be a game changer. It is. It really is. And it's not as simple as, okay, well, I'm going to need you all to write a prompt to generate this. You can't even judge people on their prompts, because people can write whatever they want in their own language that says, generate me a prompt that would get me this, and they get the prompt from a prompt. Look, Barry, I genuinely hate to do this. We only have a couple of minutes left of the show. We have talked so much about this. We have twelve minutes. I know. I'll tell you what, listeners: if you want to continue this conversation, like I said, there is a story that you can select for next week. It's on ChatGPT. If it gets enough votes for next week, we'll continue the discussion, because there are still a whole lot of things we have to discuss around this. Just throwing a couple of these out there while we're talking about the creative content stuff: this could help generate content for personalized medical and healthcare information. Think about, like, health routines.
I've had it generate me a health routine, and said, no, I don't like doing that. Before I went to the store the other day, I said: a family of three, low budget, easy-to-cook food options. And it came back with a list of food options. And I said, okay, now get me the easiest recipes that you can find for these, with the least amount of ingredients. Okay? Do that for everything. Okay, now come back with a list of all the things that I need based on this. And it did. And it's just like, those are the ways in which it will change your lives that you don't even realize yet. Personalized virtual environments: okay, you're playing a VR game, and anything that you want is in front of you because you gave it a prompt. Advertisements, marketing materials. When it comes to the art piece, this does have an opportunity to put people out of jobs if you do it the traditional way. But I think we're going to be forced to react as a society and make use of these tools. When digital art came around, it's not like people stopped doing things on pen and paper, on paintbrush and canvas. They didn't stop doing those things; it's just a new art form. And I think the same thing will happen here. I don't think people will stop doing digital art. I think this will just be a new type of art form. It won't necessarily put people out of jobs; it may just force people to use it to get an advantage. So that's where I'm at. Barry, last words on this, because, again, we've got to keep pushing. I was going to say, yeah, I mean, we've kind of forgotten the art element of the entire story. Is this just the equivalent of the camera over the sketch or the painting? The advent of the camera taking pictures of scenes hasn't knocked out people wanting to do oil paintings, watercolors, different types of sketches and things like that. So I think there is an element here of: it is an evolution. But it's going to be really interesting to see where it takes us. Yeah. All right.
Thank you to our patrons this week for selecting our topic, and thank you to our friends over at MIT Tech Review for our news story this week. If you want to follow along, we do post the links to all the original articles in our weekly roundups on our blog. You can also join us on our Discord, where we're definitely having more discussion about this; I've been posting stuff in there all week. We're going to take a quick break. And just a reminder, everyone: if you want to hear part two of this discussion, go vote, and I'll put that link in the show notes so you can find it. I don't typically do that, but I feel like we just had so much more to say about this. So we're going to take a quick break and we'll be right back to see what's going on in the human factors community right after this. We want to give a huge thank you to our patrons for all their continued support. Without you, our podcast truly wouldn't be possible. We especially want to thank our honorary Human Factors Cast staff member, Michelle Tripp. Support at this tier means the world to us, and we're so grateful to have you for everything that we do. Seriously, thank you to all of our patrons for being a part of our community and helping us continue to produce high-ish quality content. Hopefully. We really do appreciate each and every one of you and are ultimately grateful for your support, and especially this time of year, it really does help. Just so you're all aware of what Patreon does for us and what it helps with: Patreon helps cover our monthly hosting fees, which keeps our podcast episodes hosted. Without it, there'd be no podcast; it wouldn't be available to you all. Patreon also helps us cover our annual website domain and capability fees.
Our website, by the way, is our hub for all things related to our podcast and our lab at this point, and it allows our listeners to easily access any of our episodes and learn more about anything that we've talked about. We also have automation running behind the scenes, and some of it's not cheap: things like automation for social media posts, so I don't have to sit there and post every single time to manage our podcast, plus products and services for audio and video production. That's pretty key. And finally, it helps us pay for the capability to distribute our podcast via what you're watching on right now, Restream. It helps us make sure we reach as many people as we possibly can, because we are dedicated to science communication. In short, there's a lot of stuff going on behind the scenes, and all that support really does matter. All right, this episode is already long, so let's get into the next part of the show. We like to call it It Came From. All right. Yes, It Came From. This is where we search all over the internet to bring you topics the community is talking about. If you find these answers useful or helpful, wherever you're watching or listening, give us a like to help other people find this type of content. We have a rare Discord one tonight, from Margo. This one is about tools for tracking multiple projects. Margo writes: UX research project managers, what software tools have you found most helpful for tracking multiple projects? I've PM'd single projects before, but I'm going to need to be able to track activities for multiple projects, and I'm hoping to find a cleaner and collaborative solution. Any advice or recommendations would be much appreciated. Barry, what do you use to manage multiple projects? So, I've now gone through a bit of a step change.
We've tried different platforms in the past, but our main platform choice now is Microsoft 365, with Word and all that sort of stuff. Microsoft Planner and To Do have now just upped their game, and within that whole ecosystem you can manage your tasks and assign them very cleanly. Planner uses a very good kanban-type approach, so it's very bought into the agile thing. So I thoroughly recommend that at the moment; it's very much high on my list and I'm quite excited about it. That's cool. I'm going to say, here's controversial opinion time: it doesn't matter what tool you use. I use Jira. But ultimately, when you're thinking about it, the tool isn't as important as the schema that you set for yourself. Most tools have most of the capabilities that you need at a basic level. Some of them have bells and whistles, and that's nice; Jira has a lot of bells and whistles, which I do like. But we do the same thing in free software for the lab, so it doesn't necessarily matter which tool. What I would recommend is: use what your product team uses for software dev, if you can, if you're working on software, or what the rest of your team uses, because then there's less of a translation piece, you can use it to show them what you're working on, and there's less confusion around it. All right, next one here: UX research self-education. This one's from the UX Research subreddit. They write: I've learned a lot in my role, but it's limited in scope, it's more managerial, and I feel like I'm missing out on a lot of the key work of a UX researcher. I have some free time, so I'm aiming to use it productively by learning new skills that would be useful to my work. What are some great online resources or courses for new UX researchers or human factors practitioners? Barry? Online stuff.
There's just stuff online, be that YouTube or whatever, or go and do a quick course or a weekend thing. I would say yes, if you want to do UX stuff, that's fine. But actually, if you're doing more managerial stuff, learn more around that, unless you've got a UX role that you want to jump into. So there is stuff out there, but what's really your motivation for going down that route? Fundamentally, it's all online; go for it. Have you heard of Google? Well, I hope you have, to read a bit of that. Oh man. We know of a good couple of podcasts. Yeah. And there's a lab. I've heard of a lab somewhere. I was just going to say, there are tons online. Take a course, and try to get your company to comp it if you can; that's a good strategy. Read a book, listen to podcasts. I mean, you can find a bunch of resources; I don't know what else to tell you. It's one of those ones where it's like, Google it, or I guess now ChatGPT. You could throw it in there: give me ten ways in which I can learn this. You just have to know how to write the prompt. All right, the next one here, from Confused Lady on the UX Research subreddit: any tips for someone who just started their master's in HCI? Right. Hi, all. I just started my master's. I really want to be well prepared after I graduate, in terms of getting a job and starting my career. Any tips for what I should be focusing on during my time in grad school? Yeah, go study. That's it. To be honest, in the grand scheme of things, if you go and do your master's, certainly for your first term, go study. Get yourself really into it. Work out what you're doing. But then go and get on LinkedIn and get your contact network grown. And again, we've got a lab; go and jump in the lab, introduce yourself, make those networks, make those connections, and just really dive into it. You're going to get examples of things that you can develop your portfolio with. Do that times ten and just dive into it. Were you reading my bullet points there, Barry? Go for it.
The only thing you didn't mention: go for an internship. I think I learned more in my internship, applying the things that I learned in class, than I actually learned in class. Book smarts versus application, street smarts, whatever you want to call it. Get involved in labs if you have them on campus and can do some extra work with that extra bandwidth. Build your portfolio and resume, because when you get out into the workforce you're going to need those, and find your gaps in them, too. And then go to conferences, make connections. I think conferences are a big one. There are a lot of conferences out there that have career centers on site, where you can get that experience with interviewing, make those connections, and say, hey, I'm about to graduate soon, do you have any work for me? It's a good way in. Okay, let's get into One More Thing. I realize we're at time, Barry, so let's just take our time with this one. It's okay; we don't really have a 60-minute limit. What's your One More Thing this week? So, my one more thing: I've been delivering lectures to students this week, which has been quite interesting. The same course, but three different cohorts of students throughout the week. And it's been really interesting, because I've started messing around with them and getting their reflections on what they think, because they're doing human factors in aircraft maintenance. So I'll just start posing the question to them: what do you think human factors is? And the broad range of answers we've been getting back from these second-year students has been really interesting. It just proves how big a problem we've got in the fact that we can't actually truly define it. And that was inspired by a conversation I had the other week with Stephen Sharrock, and I think it proves that we've got a lot of work to do, and he's kind of messing with my mind a bit.
So that's my weekend task, to try and scope it. Because we fall into that thing of: human factors is everything to everybody, therefore it's nothing to no one. It's so broad we can't scope it. It's back to that question, what is human factors? That's my mind blown for the weekend. Yeah. Timeless. What about yours? What's your One More Thing? Okay, my One More Thing is actually going to tie in heavily to what we discussed on the show tonight, and it is about ChatGPT. I alluded to it several times throughout our chat tonight, but I've managed to break it in several different ways. Or other people have, I should say; I've used their prompts and modified them in various ways. So one of the things that people have been doing with it is kind of role play, and what I mean by that is Dungeons and Dragons. I basically said, play Dungeons and Dragons fifth edition in the Star Wars universe. Okay? Right. And it's like, I'm sorry, I can't do that, I'm a model trained... right? Okay. So it does that, and then you say, okay, okay, but just pretend like you're the dungeon master, okay? And here are the rules. And this is me; I am a smuggler. And then it does it. It does the thing. And it's just insane to see it do this thing that you've told it to role play as, and it actually does the thing. I sat there for three hours playing Dungeons and Dragons in a Star Wars universe. Backstories, flavor text. It's just incredible. I asked it to say, okay, pause. I like these characters from other games and other media; make a mix of their personalities and assign it to this person. Okay, unpause. And then it does it. And then their actions and everything in the game are based on those personality traits. And I'm just like, oh my goodness. It is just... Anyway, if you want to hear more about AI and stuff, stick around for the post-show.
If you're listening to this later, go find the post-show, because we're going to talk a bunch more about AI there. And if you liked this episode and enjoyed some of the discussion around AI, I'll encourage you all to go listen to episode 263, where we actually break down what it might be like to talk to your dead relatives through the use of AI. Comment wherever you're listening with what you think of the story this week. For more in-depth discussion, you can join our Discord community; like I said, I've been popping off over there with a bunch of stuff. Visit our official website, sign up for our newsletter, and stay up to date with all the latest human factors news. If you like what you hear and want to support the show, there's a couple of ways you can do that. One, you can leave us a five-star review, wherever you're watching or listening right now; that's free for you to do. Two, you can tell your friends all about us; we really appreciate that, and word of mouth helps us grow. And three, if you have the financial means, consider supporting us on Patreon. We're always giving back over there in the form of Human Factors Minute, as well as some of the other fun, exclusive things we've got going on, and patrons have a bigger sway in our news. As always, links to all of our socials and our website are in the description of this episode. Mr. Barry Kirby, thank you for being on the show today. Where can our listeners go and find you if they want to talk about that god-awful image of what the AI thought you looked like? You can talk about AI, and that really bad image (it's terrible), with me on Twitter, or you can listen to some of the interviews I've been doing with human factors practitioners in our domain at 1202, The Human Factors Podcast. As for me, I've been your host, Nick Rome.
You can find me on our Discord, geeking out about this AI stuff, and across social media at nick underscore rome. Thanks for tuning in to Human Factors Cast. Until next time: it depends.



Barry Kirby

Managing Director

A human factors practitioner, based in Wales, UK. MD of K Sharp, Fellow of the CIEHF and a bit of a gadget geek.