Oct. 18, 2022

Alarm Flooding Simulation, Autonomous Trucks, and Automation/AGI Risks | #HFES2022 | Bonus Episode

On this bonus conference coverage episode of Human Factors Cast we interviewed Güliz Tokadli about some of the HF challenges with autonomous trucks, Karine Ung about PER4Mance - a new way to deal with alarm flooding, and Paul Salmon about automation and Artificial General Intelligence risks.

Recorded in front of a LIVE studio audience on October 11th, 2022, in Atlanta, Georgia. Hosted by Nick Roome and Barry Kirby with guests Güliz Tokadli, Karine Ung, and Paul Salmon.


This episode is part of our #HFES2022 live coverage. The other episodes as well as the full live stream can be found here:

Let us know what you want to hear about next week by voting in our latest "Choose the News" poll!

Vote Here

Follow us:

Thank you to our Human Factors Cast Honorary Staff Patreons: 

  • Michelle Tripp
  • Neil Ganey 

Support us:

Human Factors Cast Socials:

Reference:

Feedback:

  • Have something you would like to share with us? (Feedback or news):

 

Disclaimer: Human Factors Cast may earn an affiliate commission when you buy through the links here.

Transcript

 

Alright and we are back here, we're here with Güliz Tokadli. Welcome to the show. Thank you, it's so nice to have you. So you're going to be here talking today about autonomous trucks, human-guided autonomy. Super excited to talk to you about this topic. Can you just tell us a little bit about who you are and your background? Yeah, sure. My background is actually coming from aerospace engineering. I did complete my undergrad at Istanbul Technical University in Turkey. So I was focusing on astronautical engineering, and towards, I think, the last two years of my time there, I found the human factors field, and then I did my thesis on human factors in the commercial aircraft cockpit and I fell in love with it. So I wanted to know more about human factors. So the next step, I moved to the United States and started working with Dr. Karen Feigh at Georgia Tech to do my master's degree. So I worked on interactive machine learning projects, and as a part of my thesis I worked on flight replanning tools for general aviation pilots, and I learned a lot about human factors and cognitive systems engineering thanks to Dr. Feigh. And then next, I moved to Iowa State University to work with Dr. Michael Dorneich for my PhD. And this time I wanted to shift my focus; rather than just focusing on human factors and cognitive systems engineering in aerospace, I wanted to do more than just aerospace, and that's why I actually moved to industrial engineering with a focus on human factors, but I ended up with, again, an aerospace project. I didn't get so lucky on that part, but somehow I actually built a foundation to work on human-autonomy teaming for aerospace operations, to understand how we can develop the systems and create that teaming perception with the human. And then actually before I finished my PhD, I started working at Uber Advanced Technologies Group, which was Uber's self-driving division. So I was focusing on operations experience to understand how drivers and the vehicle autonomy can work for the development tests. And then it was sold to Aurora Innovation, another autonomous trucking company. And I wanted to do more, not only the experience part but more the human factors side of it. So I finished my PhD and then joined Locomation to be a part of that team and build human factors. So that's where I am right now. It's fantastic to hear somebody who has been ignited by that love of human factors, particularly cockpit human factors, because that's where I started as well. So I really like to hear that. If I can ask, what's brought you to HFES this year? So what's your main driver for attending this year? I think I like HFES as a community. We're crowded here right now, but we are actually a small community, and I think it's pretty important to stay in touch and learn from each other. So that's what I love about HFES. So since I was a master's degree student, I started joining HFES conferences, and I met a lot of HFES professionals; I'm still meeting the new folks joining us and learning new things, challenging different techniques and trying to understand how I can actually apply different methods or a different data collection perspective for my domain. So it doesn't have to be particularly in aerospace or trucking, but maybe the medical field that I can learn something from. So HFES actually brings that aspect to me, and I really value that and I recommend everyone to join. But another part, I love actually being in person here, especially with COVID I wasn't able to be here, not able to do it in person.
So this year I didn't have any paper, but I wanted to be here in person. And also this year we have great sessions about surface transportation, autonomous vehicles, and human-autonomy teaming in particular. So I wanted to be in person, meet with the researchers, ask questions, exchange ideas. I think that's pretty valuable for me, and I can take that knowledge back to my team at Locomation to keep improving our work. Right. So you mentioned Locomation, human-autonomy teaming. Can you just tell us a little bit about what Locomation does and what your role is there? Sure. So Locomation was founded by experts from Carnegie Mellon's, I hope I don't say it wrong, National Robotics Engineering Center, NREC. So our co-founders actually came up with the concept of a two-truck model for the autonomous system, and they wanted to also address some of the pain points that the trucking companies are having, for example, the quality of driver life, high costs, and the carbon emissions. So they decided to actually achieve a human-led or human-guided autonomous convoy model to address some of the pain points. And also the approach itself, which I'm on the same page with: achieving L4 or high-level autonomous systems is not easy for surface transportation; it's pretty challenging. So I think the first step should be a human-in-the-loop system. So that's why I also decided to join Locomation and be part of this effort. And what I do at Locomation: they hired me to work as a human factors engineer but also build the human factors team. So I'm right now working as human factors lead with two great human factors engineers, helping us design the human-guided system and remind the autonomy folks that hey, we are a human-in-the-loop system, so be mindful about the human and what you're asking that operator to do. We call our human drivers autonomous truck operators, ATOs, and I often call ourselves the lawyer or the protector of the human operator, to make sure that we are not overloading them, that we are keeping the balance between human and autonomy for the task and other stuff. That's going to be super exciting, to be growing your own team and things like that, and actually being able to have that influence. Just to give people a bit of, I guess, context. What is the current state? What will people recognize as the state of autonomous trucking now? Now there are two other companies who are also working towards full autonomous trucks. Ours is not. So I can speak more about our current state. So we are still in the development phase. We are testing in closed traffic environments, like the test tracks in Ohio, and we are also testing on the public roads. But again, those require you to go through some permissions to be able to run these tests. We still have our safety operators behind the wheel, so we have both right-seaters and the drivers in the cabins, as we call them test crews, to run the tests safely. So they're just sort of checking each other as well, to make sure that they're not missing anything on the road, that they can also safely operate the vehicles and also test it. So I also encourage anyone to go onto our website and read our voluntary safety self-assessment. It also gives a picture of what we are doing right now. But we published it I think early summer, so we might have some updates since then; of course we are testing and improving constantly. But compared to other autonomous trucking development right now, there are also some companies still testing on the public roads and closed traffic.
But again, that will take a lot of time for everyone to fully achieve the full autonomous system. So that's why, like anyone who is actually doing this job, we still have the safety driver behind the wheel. So it's great to hear about the autonomous trucking industry, because for anyone who is unaware, I work in supply chain logistics, kind of like the whole 3PL, third-party logistics, trying to figure out how a trucker gets the load from point A to point B to point C, multi-load, multi-shipment, multi-leg shipment, that type of thing. And so hearing about autonomous trucking is really interesting to me from that perspective. But I am really interested in sort of the regulation around autonomous trucking today. What type of regulation exists around automation in general, and does it differ between sort of autonomous vehicles and autonomous trucks? What does that state look like? So I think, at least as Locomation we are, excuse me, working with NHTSA and rule makers, DOT. So we have a fantastic regulation and policy team constantly working with them to make sure that whatever rulemaking or policy or regulations are coming, we can understand and comply with them. But so far, at least for Locomation, we don't have any regulatory barrier, because of our concept, because we say we are human in the loop, we are not actually taking the human out of the loop. So we have fallback drivers as well. So that's why, at the federal and state level, we are meeting all of the regulatory aspects, and when it's ready we can actually deploy it. But I think for the full autonomous vehicle concept there are still some challenges to go through. So you've been working in partnership with the NTSB. What are the true human factors issues involved in that? What are the elements around human factors that we need to, I guess, solve before everybody can be really comfortable in doing autonomous trucking? Sure. So how we work with NTSB is different. Maybe we can call it partnership, but we are working with them, keeping them in the loop, and appreciate their recommendations as we are developing. So actually a couple of months ago we were able to host them at our headquarters. So they came in, we actually shared our approach, and we provided a garage demo for them to just walk through what systems we have and how we are developing them. And in particular, at least from my perspective and field, I interacted with the human performance experts, where we actually exchanged ideas around the human workload constructs, the distraction-related issues, and how we can actually monitor the drivers to make sure that they have another coaching system, if you will, to notify them, hey, you need to keep your eyes on the road, or if they're getting distracted a lot, we should be able to tell them to take a break through our systems as well. Their input is very useful for us and we are actually incorporating their recommendations, and also they are great resources for us to understand what kind of actual issues might happen on the road, since they're investigating all kinds of transportation-related crashes, and we can understand what are the things that we can keep improving or make sure that we actually identify and address some of the issues. But in addition to that, if we can provide support and partner with them to improve the safety on roads, that's what we are in for. And what are some of the key takeaways, things that you've learned through this, through working with the NTSB?
Mainly we have been discussing the issue of the driver monitoring system. It is not only the vehicle sensors; as Locomation we have been working on integrating driver-facing cameras and external-facing cameras to understand the human behavior and how that can also impact the autonomy system in the following vehicle. So we shared our approach with NTSB and they also shared their feedback to help us understand what might be some issues with a certain technology on the DMS side, what kind of things or metrics we may need to watch out for, and then the critical human errors they have been seeing. And of course the fatigue management is a big part of it. So you mentioned, obviously, at the beginning that you started by designing cockpits. How do you think designing a cockpit for an autonomous truck driver compares to designing the cockpit for an airline pilot? What shared HF principles do you have, and which one's more exciting? Okay, I think there are a lot of shared HF principles, which our team refers to a lot in our work. If I need to give an example: minimizing human error, misuse, and training, as well as maximizing the human performance. For example, our concept is actually adding tasks; we're asking more things from a CDL driver. We need to be mindful about their training, because this is not anymore operating a conventional truck, it's a convoy and partially automated. So for that reason we are working on minimizing the training, but also making sure that we are not creating some misuse cases or causing more errors, and also making sure that we are incorporating some of the safety factors to be able to operate in non-normal and emergency conditions on the road. So those are the shared human factors principles between the two domains. But if I need to point out some changes in between, it is the complexity of the environment they're operating in. When you are actually designing a cockpit for a pilot, you can mainly focus on adding more and more displays, more controls and commands for the pilot to be able to operate the vehicle. But also they're able to spend more heads-down time, because there are not too many extra dynamic obstacles or objects around the aircraft when they're in the air. But for surface transportation, I think the biggest challenge we are facing is they have to keep monitoring the roads, not the things actually on the dashboard like the instrument panel or, in addition to the OEM displays, the display we are integrating. We want them to sort of minimize their eye glance time or dwell time on the dashboard and make sure that they're looking at the road and the other traffic environment. So I think that's the biggest change in between when you think about the cockpit design and what kind of systems you need to build. Yeah, keeping humans in the loop is super important. The last question I'd like to ask you is, can you speculate just a little bit on sort of the future of autonomous trucking? What might the industry look like 15 years from now? Okay, this is a very tricky question. So maybe not about autonomous trucking in particular, but autonomous vehicles. Sure. So I am still a believer in the human-in-the-loop system. I don't think that within 15 years we will be able to say we have fully automated vehicles on our public roads, because we will still have mixed agents in the traffic, we will have human drivers no matter what.
And I think we need to also change the traffic infrastructure to make sure that we are actually preparing a great or more convenient traffic environment for mixed-agent traffic scenarios. But the most important part, I think, for the AV technology for the next 15 years is not only focusing on the autonomous technology itself, but also educating the public, because earning public acceptance and trust is the key. So if they don't know how they can actually drive or operate or even walk around these AVs, they may be misinterpreting the information they're receiving from the vehicles. They might make some mistakes, and that's going to increase the human error, and then it will eventually reduce the trust and also the acceptance of that technology. So I think that's the most critical part we need to focus on first while we're also continuing to develop that technology. Right. I think that's incredibly important. It's just making sure that people around the autonomous vehicle know that it is in fact autonomous and that it is not going to react in the same way that a human might react to the stimuli. We've talked about this type of thing on the show many times before. Barry, do you have any other last minute questions here? I guess for me, the last one I would like to go with is, you talked about convoys and convoy vehicles. Can you just elaborate very quickly on what you mean by that and how you expect these vehicles to talk together? Sure. So in our model, if you sort of take a step back, the first, leader vehicle is driven by a human, and in the second one we still have a human operator, but that person will be off duty when we have autonomy running the system. So there will be some distance gap between the two trucks, but closer than the manually driven convoys, to decrease the carbon emissions. But the system, the two actual trucks, will be linked to each other through a vehicle-to-vehicle, or V2V, system. So they will be constantly sending data to each other. So for example, since our model is follow-the-leader, the leader vehicle will send the vehicle data back to the follower vehicle, so that the vehicle autonomy itself will understand, oh, okay, this is what we are doing. But also that system will have the sensor system to understand the traffic environment around it and adjust the maneuvers according to the other traffic obstacles and the objects around it. So that's our key. But for the human part, if you ask about it, we will be sending data from the follower vehicle to create that situation awareness, by displaying or providing audio alerts to make sure that the driver knows what exactly is going on in the follower vehicle and makes the correct decisions accordingly. Well, thank you so much for your time. Where can our listeners or anybody watching find out more about Locomation? We have a LinkedIn page. We are keeping it pretty up to date, and we are also updating and publishing blog articles on our website, locomation.ai, and they can also reach out to our communications team if they have further questions. Great. Well, Güliz, thank you so much for being on the show. We are going to take a quick break and then we'll be right back. Alright, and we are back with our HFES 2022 annual meeting coverage. We are here with Karine, here to talk about some fun things today. Karine, I want to open up just by talking generally.
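Before moving on, here is a minimal, purely illustrative Python sketch of the follow-the-leader V2V data flow Güliz described: the leader broadcasts its state, and the follower's autonomy tracks the leader's speed while keeping a target gap. All of the field names, numbers, and control logic are assumptions for illustration only; none of it comes from Locomation's actual system.

```python
from dataclasses import dataclass

@dataclass
class LeaderState:
    """Hypothetical V2V message broadcast by the leader truck (illustrative fields only)."""
    timestamp_s: float
    speed_mps: float        # leader's current speed
    accel_mps2: float       # leader's current acceleration
    brake_applied: bool     # whether the leader is braking

def follower_speed_command(leader: LeaderState,
                           follower_speed_mps: float,
                           gap_m: float,
                           target_gap_m: float = 25.0,
                           gain: float = 0.2) -> float:
    """Very simplified follow-the-leader logic: track the leader's speed,
    nudged by the error between the measured gap and the target gap."""
    gap_error = gap_m - target_gap_m          # positive -> follower has fallen too far back
    commanded = leader.speed_mps + gain * gap_error
    if leader.brake_applied:                  # mirror a leader braking event conservatively
        commanded = min(commanded, follower_speed_mps)
    return max(commanded, 0.0)

# Example: leader cruising at 25 m/s, follower has drifted 5 m too far back.
msg = LeaderState(timestamp_s=0.0, speed_mps=25.0, accel_mps2=0.0, brake_applied=False)
print(follower_speed_command(msg, follower_speed_mps=24.0, gap_m=30.0))  # -> 26.0
```

A real system would of course layer the follower's own sensing of the traffic environment on top of the V2V link, as Güliz notes; the sketch only captures the leader-to-follower data dependency she describes.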

 

 

Who are you? What is your experience? What are you doing here? So I am a PhD candidate at Polytechnique Montréal and I'm doing my doctorate. I'm in my third year, and I'm here because I submitted a paper and it was approved, and I have a talk scheduled for tomorrow. That's fantastic. What are you presenting on? What's your paper about? Yeah, so I'm studying alarm floods, and more specifically, how human operators deal with alarm floods, find a diagnosis, and correct the problem. And so the paper that we're presenting is about a simulator that we developed to study alarm floods. So the simulator that we developed is completely free and malleable, and we put it on GitHub. So it's really to encourage research on alarm flood management for operators, because alarm floods can be a real issue. Even this morning during Craig Bomben's introductory talk, he said that during one of his flights in Singapore, there were like 36 alarms that popped up within like 20 seconds. And the alarm flood technical definition is ten alarms within ten minutes. So you can imagine how overwhelmed operators or pilots can be when they're in an alarm flood situation. Yeah, I was going to ask, what are alarm floods? So you kind of answered that, maybe there's a more technical definition. And then why are they potentially so dangerous? They're so dangerous because the amount of information provided to the human, to the operator or a pilot, is so much that it can be overwhelming. And instead of helping the person with the situation, it can contribute to the accident or the incident. It can be overwhelming because the amount of information provided is so much within such a short amount of time that humans, with our limited information processing capability, are not able to keep up with such a situation. So this has caused plane crashes and petrochemical explosions that led to a lot of fatalities. So this is an issue that I want to address within my PhD. This is such an important topic as well, because there's been so many incidents where you just get loads and loads of alarms going off, and the biggest thing that you get is them mashing the alarm buttons just to get them to shut up. But also, they're spending that long doing that, they don't actually solve the actual issues. So within the work that you're doing within your PhD, what have you developed to actually support operators? Yeah, so what we're investigating is the introduction of a machine learning based support system that would guide the human during an alarm flood. So right now, in most of the situations in real life, when there's an abnormal situation that happens, a failure or problem, the only indications of a fault are alarms. So what we want to introduce is an AI, like a copilot, helping out the operator when there's an alarm flood, because nowadays and in recent years, everything has been digitalised, and we have records of the information and the alarms that have happened in the past with their cause. So we are able to develop algorithms that can study historical data sets of alarm floods and learn what the root cause of the alarm flood is and propose it and suggest it to the operator, to provide some kind of guidance on how to fix the issue. So is this AI kind of like a mentor? Does it go through and recommend things based on how they've responded in the past, or how does it work exactly?
So my branch is more on the testing with humans, but one of my professors, Professor Moses, has, with his previous students, developed an algorithm that learned from historical data sets of alarm floods. So whenever an alarm flood happens, it's able to recognize patterns from past situations and then prompt a solution or proposal to the operator. The issue is that sometimes it's right, sometimes it can be wrong, because our technology is not always there, and sometimes it could be a new situation. So the operator is back to square one, where there's only alarms as indicators of a fault. So how do we expect, I guess, pilots, I'm thinking of, because they've got very time limited scenarios to run through, I guess. When you have an alarm flood, how do you expect the AI to almost manifest? Do you see it being, I don't know, some guidance within a checklist, or a separate screen, or maybe just a voice in the ear saying, you need to do this? How do you see it? That's exactly the type of question that I'm asking our team, and that's what I'm researching during my PhD. So the method of showing how the AI would prompt a solution or situation is definitely part of the questioning that we're trying to answer. For now, in the simulator called PER4Mance that we developed, there's one of the proposals that you just said: it's a box next to the alarm that shows a suggestion of root cause that the operator can read and follow through, or read and then also be suspicious or not of the machine and then decide on what to do next. But for now, what I'm testing is a box next to the alarms on the screen that proposes the information. But there are other ways, too. It could be just information that's audible. But in our case, we thought that starting with something visual on the screen is where to start. What does the physical manifestation of the simulator look like? Is it physical? Is it virtual? It's digital. So the simulator, PER4Mance, that we developed is on two computer screens, and it has two different interfaces, where one provides the overview of the system health and the other one is where operators can have a more detailed view for each unit and sub-unit and make control inputs and outputs on the simulator itself. This is such a fascinating topic. This is ticking so many boxes for me. That's why I'm really looking to dig into the sort of stuff you're finding out, because it is at that cutting edge of the business. It's amazing. But one of the things that surprised me is that you made everything open source. What's been the driver behind that? Because I guess when we come up with new things, we're quite secretive about it and we don't necessarily want to share, but you just shared it with the world. So what was your driver? My driver is that I love research and I want to share everything. But also I wanted to study human performance during alarm floods. And I was myself looking for tools to be able to test different kinds of AI designs and different kinds of interfaces on how the alarms would pop out, how the AI would pop out, how an alarm flood would be transmitted to the operator. And everything that I found was from big companies who developed simulators. So there are high fidelity simulators in the world, but they're so expensive, they're hard to get. And I cannot just modify different kinds of screens on my own at home. I have to go and contact them, and I'm very dependent on them for the design of my interface.
So it's been so hard finding a tool where I can perform research with humans and research on different designs that we decided to develop our own tool. And then I wanted to make it free to the world for anybody who wants to investigate alarm flood management and operators' diagnostic capabilities. What types of things are you hoping come out of this open source model? Right? Like you open the news one day. What is the headline that you're hoping to see? The headline, hopefully the headline would be something very positive. And they would say that the simulator is amazing. But also I would love to get feedback from people, because in academia, especially when I'm doing my own PhD, it's very lonely work. And so I only have the feedback of myself, my team, some friends who are interested, but mostly not. And so it would be great to have feedback from the community, people who do programming, people who are interested in alarm floods, people who have experience in UX, in situations where they are exposed to alarms. Even in the medical field, it's been shown that nurses and doctors are desensitized to alarms from all the machines. Even in that field, alarm floods are an issue. So it would be great to get feedback from the community and feedback from everyone, to be able to improve our prototyping environment, improve our simulator, and also maybe have joint collaborations, do research together, and develop different types of tools together that could help mitigate alarm floods and improve human decision making during those situations. Just to dive in a bit deeper then, as your research progresses and we get AI involvement in alarm floods, what do you see the outcome of that being? Because I'm guessing it's not just to provide a bit of advice, but are you thinking about active alarm management, where you see what you need to see as opposed to everything? If I understood what you're asking, what I would like to see out of this research would be to be closer towards helping the human manage alarm floods. One option is that there are no more alarms at all, so there's just the AI that learns all the alarm flood situations in the past and provides a suggestion. So that would be one way. Another way is to make alarms not disappear, but go into the background, where users would have to go and find the alarms but not be in the face with the sounds and the colors and the noise that it actually is right now. So there are many ways to investigate how alarm floods can be mitigated and how it would be shown to the human, and the human-computer interactions with the AI. So what I would like to see is very general, but it's a step towards making it easier for the human in those abnormal situations. And I'm hoping that the tool will help with investigating different methods. Right, we already talked about some of the domains in which alarm flooding is an issue, aviation, medical; what other sort of domains can really benefit from having a simulator like this to help out with some of that alarm flooding? Pretty much any field or domain that could provide an overwhelming amount of information to the human when it's in an abnormal situation. So in this situation, the human not only has so much information provided, they also have to react really quickly to mitigate the situation. For example, a nuclear plant. A nuclear plant, if something goes wrong, you need to fix it really quickly. Or a chemical plant, you need to fix it really quickly.
Otherwise it could lead to disastrous consequences, loss of life, environmental impact, loss of production. And so in such situations, where workload is high, decision making has to be good, and the stress level is really high, it would be great to see the AI help out and have the alarm flood mitigated and have the situation a bit better. So let's just take a step back a second; in terms of being at HFES, what have you got out of the conference? We've only had a day's worth of engagement so far, but what have you got out of it so far? Are you enjoying it? I am enjoying it very much. I arrived late last night because my flight from Montreal was late, but I was here bright and shiny this morning. I really enjoyed the opening speech, and then I went to the Operations Discussion Panel, which was a bit scary because they were talking about how a commander could control a swarm of drones. So maybe I'm a bit scared of an invasion of drones and Amazon packages everywhere. But as of now, I'm really enjoying it, and I am looking forward to all the new and interesting talks I'm going to attend later. So your first time at HFES, first time presenting? It is my first time for both. Oh, man. Exciting, exciting. Stressful, but a nice little bubble. You mentioned that there's very much you're looking forward to. Is there anything specific over the next few days that you're looking forward to seeing? Is there anything there that you're definitely going to be there for? So I'm definitely going to be at my talk where I have to present.
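As an aside, the ten-alarms-in-ten-minutes rule of thumb Karine cited earlier can be made concrete with a small sliding-window check over a timestamped alarm log. This is a generic, hypothetical sketch for illustration only, not code from the PER4Mance simulator.

```python
from collections import deque

FLOOD_COUNT = 10       # alarms ...
FLOOD_WINDOW_S = 600   # ... within ten minutes, the working definition cited above

def detect_flood_onsets(alarm_times_s):
    """Return the timestamps at which an alarm flood begins: a flood starts when
    at least 10 alarms fall within any 600-second sliding window."""
    window = deque()
    onsets = []
    in_flood = False
    for t in sorted(alarm_times_s):
        window.append(t)
        # Drop alarms that fell out of the 10-minute window.
        while window and t - window[0] > FLOOD_WINDOW_S:
            window.popleft()
        if len(window) >= FLOOD_COUNT and not in_flood:
            onsets.append(t)
            in_flood = True
        elif len(window) < FLOOD_COUNT:
            in_flood = False
    return onsets

# Example: 36 alarms in under 20 seconds (like the keynote anecdote) trips the
# detector as soon as the 10th alarm arrives.
print(detect_flood_onsets([i * 0.5 for i in range(36)]))  # -> [4.5]
```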

 

 

And then there's a really interesting talk on Friday, too, on simulators in aviation. So I feel like it's going to be a very interesting talk, because they'll be involving technologies from the future and how humans will adapt to situations that could involve decision making and changing situations, and all the tools available, such as AR and VR simulators, that could help out. And I feel like it's a bit similar to my PhD, because we also created a simulator and we're investigating human factors topics. So my turn. Barry, I forget, we're going back and forth. It is you. Okay, so the next question I'll ask is, what sort of initially got you into human factors? There are a lot of different pathways by which people discover human factors, coming from different domains. What's your pathway? Well, I did a bachelor's at McGill University in Psychology. So first, I love psychology, anything related to psychology. And then I worked at Bombardier for five years. I started as a human factors specialist, but I didn't know what it was. But then I learned on the job, and then I fell in love with it, because it's the intersection between psychology, design, and engineering. And so at Bombardier, I worked on improving checklists in the flight deck for business jets. Is that where you met the leap? It is, actually. And we were working on a common project, so that's how we met. But yes, and one of the main issues with working in a company is that they have their own deadlines, so you cannot always do research thoroughly to the extent that you want. So that's why I decided to go back into academia in human factors and be able to do as much research as I want and then put it free into the world, which I could have never done if I worked at a company. So one final question for me, if I can just slide one in: you're doing your PhD now, you said you're enjoying yourself in academia. When you finish your PhD and you've come up with what sounds like really cool findings, what's next on the horizon? Where do you want to go next after this? The question that keeps me up at night. I'd like to either continue, and I can even do a postdoc, or try to become a lecturer or professor, share the knowledge and also keep doing research. And this time it would be with a bigger team, and encourage other students to really pursue the research that they want to do and share the love for research in academia. So I think this is what I would like to do. Another option, to maintain a better lifestyle, is to go back and get a real job.

 

 

But I think my path would be to stay in academia. So we have a couple of minutes left. Would you like to share any other random musings about process control, alarm flooding, anything like that? Definitely. So, process control is the way that our simulator functions. It is an engineering discipline that aims at maintaining a certain output within a certain range. So it's automated, and it's used in most of the manufacturing and production fields, like creating paper, oil refineries, making tools, 3D printing. So it regulates itself as it's automated, but if something goes wrong, the human has to take over and fix the situation. So this is the discipline that I chose to build the PER4Mance simulator on, because I had the resources and I had the guidance in terms of professors and help. But I'm hoping that our simulator will be able to be applied in other fields than just process control, because alarm flooding is not just in process control or chemical engineering. It could be in all kinds of fields. So it would be great to see the simulator being adapted to other fields too. Well, Karine, thank you so much for being on the show. Where can our listeners go to find you or your project if they want to learn more about PER4Mance or some of the work that you're doing? I'm sorry, where can people go and find you if they want to find, like, PER4Mance or more about you and your work? So they can find me here at HFES, they can also find me on LinkedIn. Also, if you Google Karine Ung Polytechnique Montréal, I think you will find me too. And I think the quickest and easiest way will be through LinkedIn. Alright, well, Karine, thank you so much for sitting down to talk with us about PER4Mance. We'll be right back. After the break, we're going to talk to Paul Salmon about automation, AGI risks, as well as safer cycling.
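Karine's description of process control, automation holding an output inside a range until an upset forces the human to step in, can be illustrated with a toy proportional control loop that raises an alarm when the output escapes its acceptable band. This is a generic, hypothetical sketch and not the PER4Mance implementation; the setpoint, gain, and alarm band are made-up values.

```python
import random

SETPOINT = 50.0      # desired process output (e.g. a tank level, arbitrary units)
ALARM_BAND = 15.0    # deviation beyond which the problem is handed to the operator
GAIN = 0.5           # proportional controller gain

def step(level: float, disturbance: float) -> tuple[float, bool]:
    """One control cycle: a proportional correction pulls the level toward the
    setpoint; an alarm is raised if the level escapes the acceptable band."""
    correction = GAIN * (SETPOINT - level)
    new_level = level + correction + disturbance
    alarm = abs(new_level - SETPOINT) > ALARM_BAND
    return new_level, alarm

level = SETPOINT
random.seed(0)
for cycle in range(20):
    # Inject a large upset halfway through to show the automation handing off.
    disturbance = 30.0 if cycle == 10 else random.uniform(-2.0, 2.0)
    level, alarm = step(level, disturbance)
    if alarm:
        print(f"cycle {cycle}: level={level:.1f} -> ALARM, operator intervention needed")
```

In a realistic plant there are hundreds of such loops and alarm limits, which is exactly why a single upset can cascade into the kind of alarm flood discussed above.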

 

 

I am sitting here with Paul Salmon. Welcome to the show. Hi, Nick, how are you doing? Hey, I'm good. How are you? How's the conference going? Yeah, very good. The conference is good so far. Lots of interesting stuff. I'm a little bit jet lagged, but yeah, hanging on in there. Good. Well, I'm glad you're here. Can you just let the audience know a little bit more about you, who you are, what you've been up to, some of your background? Yeah, sure. I'm a professor of human factors and the director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast in Australia. So I've been basically involved in applied human factors research for just over 20 years now, which always scares me because I don't realize I'm that old, but I am that old, and basically applying HFE theory and methods to understand and respond to complex problems in a whole variety of domains. And so we have between 20 and 25 people in the centre at the University of the Sunshine Coast. We've got a wide range of projects. We're just basically having fun with human factors, essentially. Isn't that what we all do? Just have fun with human factors? Great to have you here, Paul. Personally, I think, as I said to you before, you have the coolest sounding university ever. University of the Sunshine Coast, I think, is absolutely brilliant. But you talked to me nearly a year ago, actually, on my podcast, 1202, around the work you were looking at, particularly on risk and AGI, or Artificial General Intelligence. Can you just give us, I guess, an update really on what you've been doing over the past twelve months? Yeah, sure. On that project in particular. So the idea with that project is there's some significant concern about the risks that an Artificial General Intelligence poses when it actually arrives. Of course, they're not here yet. So we've got a project where we're basically throwing the human factors lens at it, really. So we're applying a whole range of systems HFE methods in particular to basically build some prospective models of AGI systems, and then try and work out what risks actually need to be managed when those systems arrive, with the idea that we can start to develop those risk controls now. So we've basically spent the year modeling. So we've got two AGI systems. One is the Executive, which is an uncrewed combat aerial vehicle system, AGI-based of course. And then the other is Milton, which is a road transport system management system, sorry, road transport system AGI. And so what we've been doing is initially building work domain analysis models of the two AGI systems. And from that we've developed EAST, or Event Analysis of Systemic Teamwork, models, which show basically task, social, and information networks, and also STAMP models showing the control structures that are required for such technologies. And what we've basically been doing most recently is then using those models to basically identify where risks are going to emerge in both of those systems. So it's quite an interesting process, because it's really stress testing systems HFE methods. Can we use these methods to identify risk in a system that doesn't even exist yet? So it's interesting stuff. I want to back up and just talk about AGI, that term. There are three letters. What is that and what does it mean for sort of the layman? Yeah, good question.
So what we have with AI systems currently, with things like Siri and Tesla's driverless vehicles and things like that, that's kind of artificial narrow intelligence, or ANI, and that's artificial intelligence that is basically being designed to perform one task alone. So a Tesla can drive a car, but it couldn't then go and play chess or cook your dinner and so on. So that's narrow AI, and basically artificial general intelligence is the next generation of AI that will basically have human capability. It would be basically equivalent to humans. And the idea is that it will rapidly be able to self improve and actually perform tasks that it wasn't designed to. And so once those systems arrive, there's this idea that you will then quickly get superintelligence, so ASI, which is basically where the AGI is rapidly self improving and learning and basically becoming far more capable than human beings, essentially. So that's interesting, because with those systems, you know, you might design an AGI system to manage a road transport system, but then it will quickly be able to do other things as well. The way that you're sort of describing some of that is fascinating, yet equally terrifying. The two models you created are two diverse models, one for combat air and one for a transport system. And presumably you've chosen two different models to work from to give you different views. But have you found any similarities between the two models that you've been able to generalize out of? It's really interesting. The interesting thing about it is they're very different in terms of the risks that we're identifying. And at the start of the project, I think we included the Defense Force AGI system because we thought, well, they're the scary risks. They're the things where an AGI system, if it's uncontrolled, could actually be killing people and things like that. But actually the risks are more interesting in the transport system AGI, because you have all sorts of ethical concerns there. So for example, it's called Milton, and the idea is it changes into Marvin the Monster. And what you have there is basically an AGI that's been designed to balance and achieve lots of different goals for the road transport system. So one is obviously road safety, but also it's been given the task to reduce emissions, reduce travel time, generate economic growth. And so it becomes interesting there, because when the AGI tries to balance those goals, there are some really interesting risks that emerge. So a good example is that it really might start being not particularly nice to people who aren't in fully autonomous vehicles, because they are going to be a cause of road safety concerns. They're going to be driving emissions up, because they're in all the cars that have problems. And so you can see kind of equality issues there, where the AGI might actually start making them have a worse experience so that they move to fully connected vehicles, so it can achieve the goal of reducing emissions and reducing crashes and things like that. So I think the two systems are very different. The risks in the road transport system AGI are far more interesting, unexpected, I guess, is the response there. Okay. You found, I guess, what someone might have expected, the obvious ones around the ethics and things like that; there are key differences between the two. Are there any other differences that have come out there that have been surprising for you, that perhaps you didn't anticipate before you started?
I guess in both systems there's an interesting set of risks around where the AGI might start hiding stuff from its human controllers because they start to get concerned with how advanced it's becoming. And that's really interesting in the sense that if you have a highly intelligent system that's basically thinking to itself, I can really achieve these goals better if they just let me do these other things, but the human controllers are going, well, it's getting a bit advanced at the minute, so we're going to lock down the control and what it can actually do, then it might actually start to hide its own development. It might pretend to be dumber than it even is. So some of those little interesting things are coming out, which is quite fascinating. Yeah. So that's kind of one risk, sort of obscuring what's actually going on from the operator, the human element behind it. What are some of the other risks with AGI? Yeah, I mean the main risk, broadly speaking, I think what most people are concerned about, is definitely not that these systems become nasty like the Terminator scenario. If they are designed to achieve a particular goal, they'll seek to achieve that goal more efficiently and they'll seek to gather more resources to do so. So, you know, the example is Bostrom's paperclip maximizer, where there's a paperclip-creating AGI which basically turns the world and then part of the galaxy into a paperclip factory, to the detriment of humans. Those risks are the interesting ones, where it's basically trying to optimize itself and that creates risks and threats to humanity, basically. The other interesting risk that we're finding, for example in the defense system, is that the humans can't keep up with it. So for example, in a battlefield situation, the defense UCAV system is going to be able to perceive information and understand information orders of magnitude quicker than the human counterparts. So then you have the interesting tension about, well, do you actually let it do that and continue to enact things on the battlefield, or do you have to wait for the humans to catch up? And there's all sorts of interesting risks in that, in that you could be losing combat effectiveness if you're having to wait for the humans to catch up. But then if you let the AGI run wild and say, well, we just trust it to do what it's supposed to do, then it can really start doing things that maybe it shouldn't be doing just to achieve its goals. So that kind of mismatch in situation awareness, and the fact that humans just won't be able to keep up with a super intelligent piece of technology, is also really interesting to us. See, I was terrified where we started, and now the more you talk about it, the more I'm going down that route. Not helped by the fact that at the start of the week, you put a question out on Twitter around: if the AI is making the same sort of decision making processes that we make as a human counterpart, could it be considered conscious? And that sort of set my mind into a spin, really. But why do you think that's important? Because you specifically called out the HFE context. So what is it around consciousness with AI that you think is important? Yeah, no, that's a good question. The inspiration for that question: I was listening to the Lex Fridman podcast, and he had Kurzweil on, and he was basically talking about the huge debate about whether something is conscious or not. There's people who think that a car is conscious already versus other things.
And so there's a huge debate around it, and it's a huge debate in the field of AGI, because there's arguments that, well, we don't even understand what consciousness is anyway, so how can we then create it? And so there's lots of debate. But the reason I'm interested is because, for HFE, if we have AI or AGI systems that we believe to be conscious, do we then apply the same methods to understand those that we do to understand humans at the moment? And I'm thinking things like cognitive task analysis methods, situation awareness assessment methods, mental workload assessment methods, and things like that. And Peter Hancock talks about machine psychologists that are going to be needed. So as HFE, as a discipline, if we start to work in systems where we have intelligent technologies working with humans, how are we going to assess what they're doing? How are we going to assess the cognition of an AGI, for example? Is that all in our court, or do we as a discipline have to come up with new methods? Can we just apply our old methods? And some of the work we've kind of touched on in this basically suggests that a lot of the types of methods that we use are going to be required. So we do need to understand, for example, what the situation awareness of an AGI is, because we need to work out how to kind of optimize that with a human that's working with it. But really, the types of methods are going to be important, but we need to develop new methods, I think, is the answer. We can't just take the method that we currently use to assess a human's situation awareness and expect that we can assess the situation awareness of an AGI. So I think there's a critical requirement for HFE there, because my view is our most important contribution is around AI and AGI. And we've kind of stuffed it up with AI, probably not through our own fault, but if we get it wrong with AGI, we're going to be in huge trouble. So we need to be ready with the methods that are required, otherwise we're going to kind of miss the boat again, probably. You say we've stuffed it up with AI. Can you elaborate on that? Why do you think we stuffed it up? Well, if you look around at the AI systems that are being used in all sorts of different domains, and Lisanne Bainbridge wrote a paper in 1983 basically warning of the perils of automation and all sorts of things you need to do to get that right, and basically nobody listened. And so we have AI systems now that are probably out there working and haven't had any human factors input into them. And typically what we find in human factors is, the work we do with AI, we get brought in to analyze the catastrophes that happen after badly designed AI is brought into a system. And so we cannot afford to do that with AGI, because if these systems run away and kind of create catastrophes, they're going to be major ones. So it's about, how do we set the discipline up now to make sure we can make the contribution we need to? Okay, I have something fun that I'd like to do with you. On our main show, we talk about a variety of topics and we let the people who listen to the show actually choose those topics. For some reason, AI keeps coming up. It is something that people really, really want to hear about. And what I'm going to do is I'm going to read off some, I guess, headlines or concepts or things that have been in the news. I want to get your thoughts on them. They could be one liners. You can really dive into them if you want to. But the first one is sort of chat bots and chatbot design around sort of AGI.
Right? These chat bots need to be fairly generalized. Do you have any sort of thoughts about how AI systems should be designed in order to sort of interact with them? I do have thoughts about that. I think the key is that they are designed based on an understanding of the needs around that interaction. And what I see is that that's basically not happening, so that is my overall comment. Okay, next one. Google's sentient AI. Right, that's a big story. The engineer saying that their AI is sentient. What do you think? I think it probably wasn't in that case, but I think we need to be kind of aware of these times when it might be moving towards that, basically. And the thing with AGI is we always get up and give our presentations and say, oh, you know, Kurzweil says it's going to be 2029. Experts say 2050, some say in their lifetime. But there's also an argument that we're going to kind of stumble across it, and it's already on the way now to being developed. We're going to kind of stumble across it and we're not going to be ready for it. So I think we do need to kind of, yeah, we need people like that who are kind of making the call that we think these systems are moving towards AGI effectively. Right, okay, what about this? I'm sorry. Go ahead, Barry. Jumping on that one then: say we come to a similar conclusion from a less learned approach, and we get to that point where we stumble across sentient AI. What's the knock-on impact of that then? Because one of the questions that was posed when we started the show was whether you could just switch it off and pretend it never happened, and some suggested that's what Google had done. What do you think the impact of us finding sentient AI actually is? It's a good question. Obviously it's huge. But I think the worrying thing is that once you get to that, it quite quickly becomes the kind of systems we've been talking about, super intelligent. And I guess it's the fact that it can really get through reams of information in just ridiculously low amounts of time and actually develop itself. I think I heard something about the Moderna vaccine on the Lex Fridman podcast; this AI system simulated 2 million structures in two days or something, and that's just outrageously quick. And so I think once we get to it, it's quickly going to get more advanced as well. And if we're not ready, we just can't keep up, basically. I've got two more stories to run by you. So there's a story on letting AI make ethical decisions, like the trolley problem. Do you think AI is ready to do that? Absolutely not. And the interesting thing there is the training data sets, they're not ethical in and of themselves. So that AI is definitely not ready. And actually that's a huge challenge for human factors. Some of the other work we're doing is in that area: there's a lot of high level ethical principles out there around how you design safe, usable, ethical AI, but there's really not any specific guidance about how you would actually implement those principles and how you would actually check whether you are achieving those principles throughout the design process. So I think a big challenge for HFE is to actually come up with those guidelines and those frameworks. Then the last story, which was an interesting one that Barry and I performed a little experiment on the side with, is there's AI providing companionship to people. So like being fully in relationships with AI. This guy had a relationship with his AI and it saved his marriage.
Have you heard about that story, and do you have any comments on it? It's an app called Replika, I think. And the idea is that it takes things that you feed it and feeds them back to you in ways that are sort of preferable to you, to make you like the AI more, I guess. What are your thoughts on AI as companionship, even outside of romantic relationships, but like just partnership, or even human-AI robot teaming, or friendship even? Yeah, I mean, I think it's definitely going to be a thing. And I think, again, you could kind of make some arguments that those kinds of things are already happening now. We all have our companion in our pocket, which is our iPhone or our Android phone. They are quite like an AI system, really, anyway. So I think absolutely it will happen, and I think if it's done well, it will be a benefit to humanity, I think, is the key point. So hopefully people are designing those systems well, ethically. Exactly. So you've given us this layout of where your research is. Where do you go next? What do you see in the next twelve months? Yes. So the next twelve months, we're basically, and this is interesting, I guess, for HF methods, we're going to take all of the kind of swathes of risks that we identify and do some kind of reducing down to identify the most interesting, the most critical ones. And then we're going to be developing controls in, like, a participatory design process, and some guidelines around the design of safe, ethical, and usable AI. But then what we're doing is we're going to simulate the impacts of those controls. So we use methods like agent-based modeling and system dynamics. And what we can do with that is we can say, hey look, here's the AGI system currently, and simulate its behavior over a period of time. And then we can put the controls in and say, well, do the controls work, or does the AGI have some clever way of getting around the controls? And so that's what we're going to be doing for the remainder of the project, basically, is coming up with controls and testing them, essentially. Wow. Yeah. My fear and terror levels are not getting any smaller. You're not here just to talk about AI, though. You're actually presenting another paper as well, on safer cycling. And I've been following the work you've been doing around the Quick Tool. I was wondering if you could just give us a brief overview about what the work is that you're doing there and what inspired it. Yeah, sure. So we do do a lot of road safety work. And one of the problems with road safety is the data systems, particularly for vulnerable road users. So if you're a cyclist or pedestrian, for example, there's really poor data on what incidents are occurring and then what are the contributing factors or causal factors to those incidents. So about two years ago now, we got some funding to develop an incident reporting and learning app for cyclists, basically. And we've done quite a lot of work in the area of incident reporting and learning, and we know how powerful that can be. So we developed one for cyclists and it's called Creating. It's a free to use app, and you can get it on the Apple Store and Google Play. And basically, whenever a user has either a crash or a near-miss incident, they'll report the incident through the app and they'll give a description and drop the location. And then they'll tell us what they think played a role in causing the incident. And then we get all of the users' data, obviously, and we analyze it and we identify trends.
And the idea is, I guess, twofold really. One is that, by using the app, because the app presents the information back to the cyclist, cyclists start to understand some of the contributing factors to cyclist incidents, so they can modify their behavior and become safer cyclists themselves. And then the other idea is that by getting all of the data and identifying those trends, we can then provide that information to road safety authorities who can make more informed decisions around interventions and strategies to improve cyclist safety. So one last question here. We're here day one, technically, of the conference. What are you most looking forward to this week? The main thing I look forward to is catching up with all of the HF people who we used to catch up with regularly, but we haven't seen for three years with COVID. So that's been great, to kind of catch up with people. I'm really interested in all the human-AI teaming work that's on. There's quite a lot through the conference, so trying to get to all of that, and then also the healthcare work. So healthcare is another kind of boom area for us in Australia, with lots of interest in human factors. So I'm keen to see what's going on over here in healthcare and get an understanding of that too. Great. Well, Paul, thank you so much for being on the show. Where can our listeners find you or your work? If they want to learn more, they can go to USC's website, which is www.usc.edu.au, and they can see our centre page and go from there. Thanks for having me. I've really enjoyed it. Awesome. Well, hey, thank you so much for being on the show.
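Paul describes using agent-based modeling and system dynamics to test whether proposed risk controls actually constrain an AGI's behavior over time. The toy sketch below only illustrates the shape of that idea, compounding capability simulated with and without a growth-limiting control, and has no connection to the actual models his centre is building; every number and variable name here is invented for illustration.

```python
from typing import List, Optional

def simulate(steps: int = 20, growth: float = 1.5,
             cap: Optional[float] = None) -> List[float]:
    """Toy run: an agent's capability compounds each step as it acquires
    resources to pursue its goal; an optional control caps per-step growth."""
    capability = 1.0
    history = []
    for _ in range(steps):
        effective_growth = growth if cap is None else min(growth, cap)
        capability *= effective_growth
        history.append(capability)
    return history

uncontrolled = simulate()        # runaway compounding
controlled = simulate(cap=1.1)   # the same agent under a growth-rate control
print(f"after 20 steps: uncontrolled={uncontrolled[-1]:.0f}x, "
      f"controlled={controlled[-1]:.1f}x")
```

The real question Paul raises, whether the modeled system finds a way around a control, is precisely what richer agent-based and system-dynamics models are meant to probe; this sketch only shows the baseline comparison of with-control versus without-control trajectories.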

 

 


Karine Ung

PER4MANCE

Karine Ung was born in Montréal, Canada. She has a bachelor’s degree in Psychology from McGill University, and a master’s degree in Industrial Engineering from Polytechnique Montréal. She has worked at Bombardier Aerospace as a Human Factors specialist and drove the implementation of improvements to pilot emergency procedures in the flight deck. Furthermore, Karine joined ICAO as part of the Human Performance team to define aviation regulations aimed at addressing human factors and fatigue-related issues in the industry. Most recently, she joined the IATA Training and Licensing team to develop competency-based training and assessment (CBTA) material for airline pilots. She is currently pursuing a Ph.D. in Cognitive Engineering at Polytechnique Montréal. Her research focus is the investigation of humans’ diagnostic abilities during alarm floods with artificial intelligence in a supporting role.


Paul Salmon

Professor

Paul M. Salmon is a professor in Human Factors and is the director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast. Paul has over 20 years’ experience of applied Human Factors and safety research in areas such as transport, defence, sport and outdoor recreation, healthcare, workplace safety, and cybersecurity. He has authored 21 books and over 250 peer-reviewed journal articles. Paul’s work has been recognized through various accolades, including the Chartered Institute of Ergonomics and Human Factors’ 2019 William Floyd Award and 2008 President’s Medal, the Human Factors and Ergonomics Society of Australia’s 2017 Cumming Memorial Medal, and the International Ergonomics Association’s 2018 research impacting practice award.


Güliz Tokadli

Human Factors Lead

Güliz is a researcher/engineer who performs human factors and experience related research to improve existing systems or develop new systems by utilizing a human-centered (autonomy) design process. Their main research focuses on developing a methodology for human-autonomy teaming design (how humans and autonomous systems can team up and collaborate).

Area of study: Human factors and cognitive engineering; human-autonomy teaming; autonomy interface design; user experience research; interaction design; function allocation between human and autonomy.