Hello and welcome to our Human Capital Watch on the future of generative AI in human capital. My name is Matt Rosenbaum, and I am a principal researcher here in the Human Capital Center at The Conference Board. I'm joined by my co-author Matt Maloof, another researcher in the Human Capital Center.

Before we dive in, a little housekeeping. We are able to offer continuing education credits for HRCI, SHRM, and CPE. If you want those, just click the credit icon to sign up, then respond to the attendance verification pop-ups throughout the program. We also have a Q&A box; if you have any questions about the presentation, please ask them there. There's also an attendee chat, which you can use to talk with each other, but for questions please use the Q&A box specifically. You can also react and share your thoughts through the emoji reactions in the sidebar at the bottom of your screen. That gives you a chance to give us a thumbs up, or a thumbs down if you want, and lets us know which points are most salient for you as we go back and look at this content.

With that aside, let's get into the critical questions we're going to talk about today. First, how are organizations using generative AI in human capital? Second, what are the key generative AI-related challenges that organizations are facing? And finally, what factors will shape the future usage of AI?

A brief point about the methodology behind this report: it comes from interviews with technologists and HR leaders at large enterprises, conducted in September and October of last year. We talked to a total of 17 people at 15 different organizations and used those conversations to generate the insights we're going to share with you.

Before we dive into those insights, I want to highlight a few points that set the overall context for this report. First and foremost, generative AI uses permeate the employee experience, whether it be increased personalization in recruiting new candidates, drafting performance management reviews, or analyzing exit interviews to discern trends in what people say as they leave the organization. Generative AI can be used throughout the employee experience, and those uses are only going to proliferate as the technology develops, as people uncover new uses, and as innovation compounds and builds on itself.

We also think generative AI is going to raise expectations for HR and change the way HR does its work in a few key ways. The first common theme we saw throughout our conversations was enhanced personalization: going back to that initial outreach to candidates, but also the learning pathways being developed, the courses people are recommended, and even the content within those courses. All of that can be personalized using generative AI.
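To make one of these uses concrete, here is a minimal sketch of what exit-interview theme analysis could look like in practice. This is purely illustrative: the theme list is invented, and the `complete` function is a hypothetical stand-in for whatever approved model endpoint an organization actually uses; none of the organizations we interviewed shared their implementations.

```python
# Minimal, hypothetical sketch of exit-interview theme tagging with a
# generative model. `complete` is a placeholder for your organization's
# approved model endpoint; its fixed return value just keeps this runnable.
from collections import Counter

THEMES = ["compensation", "career growth", "management", "workload", "culture"]

def complete(prompt: str) -> str:
    # Placeholder: in practice this would call an approved LLM gateway.
    return "career growth"

def tag_comment(comment: str) -> str:
    prompt = (
        f"Classify this exit-interview comment into one theme from: {', '.join(THEMES)}.\n"
        f"Comment: {comment}\n"
        "Respond with the theme only."
    )
    theme = complete(prompt).strip().lower()
    return theme if theme in THEMES else "other"  # guard against off-list answers

def trend_report(comments: list[str]) -> Counter:
    """Count how often each theme shows up across a batch of exit interviews."""
    return Counter(tag_comment(c) for c in comments)

print(trend_report(["No path to promotion here.", "My manager never gave feedback."]))
```

The guard against off-list answers is worth noting: it is exactly the kind of small safeguard the governance discussion later in this session is about.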
The second theme was speed: people expect an increase in the speed of improvements and the speed with which solutions are developed, because there are much quicker insights to be gained through analysis with these tools, and also new ways to implement and apply those insights very quickly using generative AI. We think that's going to move HR toward providing proactive support rather than reacting to what people are doing. People expect quicker responses and more proactive solutions, both through the enhanced productivity these tools bring and through the opportunity to use the freed-up time to improve the employee experience or do HR's work more efficiently and in a better way.

I'll share an example from one of our interviews with a technologist at a company that does learning and development programming. They were talking with one of their clients, who brought in their leaders and asked: how are you going to use generative AI? What are you going to be able to use it for? How are you going to be more productive? People said there was a laundry list of things they could do more quickly, and they were all excited about the various use cases they were suggesting. Then the leader said, great, what are you going to do with the time you're freeing up by using these tools and being more productive? And they were met with silence. So I think it's important to recognize that there are two opportunities here. There's the opportunity to do what you're currently doing more efficiently, yes, but there's also the opportunity to do so much more with that freed-up time. The goal is to get there as best you can.

With that, I'm going to turn it over to Matt Maloof to talk about some of the key insights from this report. Matt?

Thanks, Matt. So with that stage set for us, these are the five insights we gleaned from the report. Overall, the most important theme was the critical role that leaders play in modeling usage, developing governance, encouraging innovation, and developing their workforce.

We're going to go to a quick poll question here: how knowledgeable do you think your organization's leaders are about innovations in AI? Not knowledgeable, somewhat knowledgeable, knowledgeable, or very knowledgeable. Take a moment and fill in what you think; that will help us understand what your leaders are facing and where you think your leadership is in terms of understanding the cutting edge of AI.

We'll give it a few more moments. Matt, what do you think most people are going to say?

Without spoiling too much, because we want to see their answers, I think there's going to be some alignment with the perspectives of the people we interviewed. I would be very surprised to see people at the ends of the spectrum rather than somewhere in the middle. If very knowledgeable or not knowledgeable were our largest responses, I think that would be a little surprising.

Makes sense. Let's see what we got.

Yeah, just like you thought. Actually, I'm really surprised at not knowledgeable being 0%.
I thought there would maybe be a couple in there, but somewhat knowledgeable and knowledgeable are by far the largest response options chosen. That's absolutely encouraging.

Our research highlighted several ways that HR leaders set the tone for how their enterprise can use AI; being knowledgeable is just one piece of that puzzle. The first is that they have to model understanding and their own use of generative AI. It can't just be for their teams; it needs to be enterprise-wide usage, and everybody should be encouraged to participate in this conversation. The second is that there needs to be a culture of continuous learning. The AI landscape is rapidly changing and advancing, and organizations shouldn't let themselves stagnate or get comfortable at one specific level; they need to be prepared to keep learning as they go. The third is encouraging experimentation. There needs to be a culture of trying new things, seeing what can be learned from them, and sharing what went well and what didn't. Without that, leaders aren't going to be able to glean the most direct insights from the teams that use generative AI day-to-day to manage their tasks and interact with it directly. And the fourth is overseeing training, which lends itself well to modeling usage and understanding generative AI. If leaders aren't modeling usage and using it themselves, they won't know what the training should look like, and they won't be able to ask their teams what they need help with if they're not allowing for experimentation. So these all mesh together and work in conjunction.

One really creative approach we learned about from our interviews for helping leaders with generative AI is to run AI roadshows. What is an AI roadshow? It's just what the name says: going from one group of leaders to the next and showing them what generative AI can do. In that interview, they walked us through what a roadshow looks like. They started by trying to connect with those leaders on a personal level. Rather than opening with the direct benefit for the organization, they started with the benefit for the leaders themselves. Then, once people were personally invested, they moved on to how it would benefit the organization. That journey of bringing your audience along with you helped emphasize that their organization, and they as people, are well equipped to use generative AI, so long as they put in the effort to try.

And then what was definitely the largest concern, or fear, that arose in the interviews is the conversation around augmenting or replacing workers. Organizations are likely to land somewhere in the middle of that augment-or-replace spectrum, but how far they sit toward one side really depends on what behaviors, incentives, and priorities are placed on the two different sides of the coin.
In one of the interviews, they shared a really great metaphor for thinking about this: do you want to do 130% of your current work with 100% of your workforce, or would you rather do 100% of the work with 70% of your workforce? Obviously this metaphor is a simplification, but it captures the core issue well. Are you going to lean toward one side? Are you going to be somewhere in the middle? How much you lean will depend on what you prioritize.

And now I'll hand it back over to Matt to talk a little bit about governance.

Thanks so much, Matt. One of the things that stood out to us when we were talking to leaders was that the two main challenges they raised consistently were governance, meaning how to create effective AI governance throughout the organization, and how to encourage adoption. What stood out to us is that those are intertwined challenges, because at the end of the day, the goal of effective governance should be to make people feel comfortable using these tools and to set up the guardrails so they can use them well. Both of these tie into the underlying challenge of how we get people to use generative AI tools well, and we'll talk a little bit about that.

But before we do, we have another question for the audience. How confident are you that your organization's AI governance equips your organization well for the future? You may have AI governance that you think is pretty strong right now, but how well does it set your organization up for the future? Are you not confident, somewhat confident, confident, or very confident? We'll take a few moments and let you all put in your answers.

I'm hoping we'll see people again, Matt, at least in that somewhat confident category. I have renewed faith after the last poll, because maybe some of these organizations are a bit further along than we might have thought. But I'm curious to see how people think their governance is set up for the future, because one of the points we'll make in this section is that what works now is not necessarily going to work tomorrow.

I think that's why it's really hard to be very confident here: unless you know exactly what's coming down the AI development pipeline, that level of strong confidence doesn't come easily. But maybe that'll be the category everybody lands in.

All right, let's go ahead and see what people said. OK, so there are some people in the very confident category, which, given the difficulty of forecasting the future, is an interesting place to be and, I think, impressive. I hope those people can share a little of what they're doing that sets them up so well for success. But more people are in the not confident or somewhat confident categories, for sure. So governance continues to be a pain point, at least for the organizations we talked to in our interviews, and something that will be important going forward.

We want to highlight the two different ends of the spectrum that you see when it comes to governance.
On the one hand, you run the risk of having governance that is too loose, and that's bad for a few reasons. One is the obvious risk of misuse, whether intentional or unintentional; most errors will probably come from unintentional misuse, but both can be devastating. The other danger is that misuse not only harms your brand or affects your organization, your consumers, or your workers, but also invites stronger crackdowns that may prevent future misuse but then also limit the value you're getting from these tools in the first place. On the other hand, governance that's too strict will limit access or discourage people from using the tools they have available. You miss out on the value that could be gained from using these tools well, because people are either too afraid or too hesitant to use what they have access to, or they don't have access in the first place.

So, going back to this point: the goal of effective governance is to balance those two, but also ultimately to ensure that people are actually using these tools. The value doesn't come from these tools merely existing while no one uses them; you have to actually use them to get the value as an organization. Sometimes the goal of good governance is perceived as reducing risk, but one of the risks that needs to be reduced is losing out on all the value that could be gained from using these tools well. The goal, I think, should be: how do we make sure people are using these tools as effectively as possible?

One of the major challenges people kept bringing up was related to data and the integration of back-end systems, particularly where you have different platforms across different regions, business units, or subsidiaries. That is an ongoing and very difficult challenge; companies have been working on cleaning up their data architecture for years. But what stood out to us is that, as challenging as that may be, the technology and data component is actually easier than the people and process side. This will hopefully make sense to HR leaders and HR folks like us, but for other functions it can sometimes be difficult to grasp just how hard the people side is: making people feel comfortable using these tools, creating the training to help them know how to use them well, and reorganizing your processes to account for these new capabilities. Yes, you can improve what you're currently doing, but the real value comes from rethinking what you do in the first place to account for what these tools can offer. And that process is going to be ongoing; it's not something you can do once and set aside. All of these components will need to be continuously maintained and updated.

Good governance also requires effective cross-functional relationships and expertise, especially with legal, IT, and risk. All of those people are going to be involved and have a say in how these tools are used and what good use looks like, so it's important to set those relationships up from the beginning and ensure everyone has a say in what's going on.
Among the organizations we talked to, the two main approaches basically centered on restricting access, either to certain sets of tools or to certain sets of people within the organization. Often those are people who have gone through AI training or responsible AI training and can hopefully be trusted to better understand these tools and use them well. But there's also centralized review. For instance, one approach that came up frequently was a committee of experts that evaluates the potential applications people suggest as new ways to use these tools. You may have an internal GPT collection or something of that nature, and it's important to have at least some safeguards to indicate: yes, we're going to put our resources behind this and promote it to other people in the organization; or, this is something we should send back to be developed further before we expand access to it.

One of the important things we've touched on, but that I want to reiterate, is the need for any good governance strategy to be able to adapt to the future. What works today may not work tomorrow; I can almost assure you that it won't, because the landscape is changing. The underlying technology changes, the models themselves change, the ways you can apply them change, and the regulatory landscape, what people are or are not allowed to do in different municipalities or regions, changes too. The attitudes of your workers, and of society at large, toward using these tools are also going to change over time, and all of that changes what you want to account for in your governance.

One example of how to do this in an ongoing fashion comes from Microsoft, which employs the National Institute of Standards and Technology's AI Risk Management Framework. It centers on four main components: one, govern; two, map; three, measure; and four, manage. The initial govern step is simply creating the policies, setting up the accountabilities, and making sure stakeholders are aligned on what is or is not allowed. The map portion is looking for risks and potential risks that you need to account for, then identifying ways to mitigate them. Measure entails looking at those identified risks and seeing how much they are actually affecting you and how effectively you are mitigating them, and that has to be ongoing and adapt as the environment changes. That feeds the manage component, which looks at the mitigation strategies you've put in place: do we need to update them? Do we need to change something about our governance framework to account for new risks coming to the forefront? It's a cyclical pattern, never static, always adapting over time, but those four components are essential, I think, to any good governance framework moving forward. A simple sketch of that cycle appears below.
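As an illustration only, here is a minimal sketch of how that govern/map/measure/manage loop could be modeled in code. The fields, scales, and policy text are our own invented examples; this is not Microsoft's implementation, which was not shared with us.

```python
# Illustrative sketch of the NIST AI RMF loop (govern, map, measure, manage).
# The Risk fields and the severity/impact scales are assumptions for this demo.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    severity: int             # assumed 1 (low) to 5 (high)
    mitigation: str
    observed_impact: int = 0  # filled in during the measure step

@dataclass
class GovernanceCycle:
    policies: list[str] = field(default_factory=list)
    risks: list[Risk] = field(default_factory=list)

    def govern(self, policy: str) -> None:
        """Create policies and accountabilities up front."""
        self.policies.append(policy)

    def map(self, risk: Risk) -> None:
        """Identify a risk and a candidate mitigation."""
        self.risks.append(risk)

    def measure(self) -> list[Risk]:
        """Check which identified risks are actually biting."""
        return [r for r in self.risks if r.observed_impact >= r.severity]

    def manage(self) -> None:
        """Update mitigations where measurement shows they fall short."""
        for risk in self.measure():
            risk.mitigation += " + escalated human review"

# One turn of the cycle; in practice it repeats as the landscape changes.
cycle = GovernanceCycle()
cycle.govern("Generative AI output is reviewed before any external use.")
cycle.map(Risk("hallucinated content", severity=3, mitigation="spot checks"))
cycle.risks[0].observed_impact = 4   # measurement finds more impact than expected
cycle.manage()
print(cycle.risks[0].mitigation)     # spot checks + escalated human review
```

The loop structure is the point: manage feeds back into govern and map as new risks surface, which is what makes the framework cyclical rather than a one-time setup.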
I also want to highlight the points we were making about encouraging adoption and how that integrates with the governance framework. Training is a key component of both. Not only do you want to train people so they feel comfortable using these tools and know how to use them well, but also so they know the guardrails, the dos and don'ts, and can feel more confident and at ease knowing they're staying within the safe boundaries determined by your organization.

We think workers should get at least some basic training on functionality (what can these tools do?), proper use (what can you use them for, and what should you not use them for?), and effective prompting. At least for now, prompting remains an important skill for getting the right responses and the best outputs from these tools; that may change over time, but for now it's still important, and a small illustrative template appears at the end of this segment.

We think of good training approaches as a T. You want broad training across the organization so everyone knows how to use these tools at a basic level: the bare minimum of what to do, what not to do, how to think about using these tools, and how to incorporate them into work effectively. But there's also deeper training for specific roles or specific areas of interest that can help people get beyond that basic level, go into real depth, and learn to use these tools really effectively, whether in their existing work or to do something new, such as creating new products or services.

Again, it's important for leaders to model what that looks like for their workers. It's hard to say this is an important priority if the leaders themselves are not actually doing it, because that sends a mixed message to everyone around them. It may be difficult for some leaders; they may not be comfortable with it, or they may not see the value. But if they want other people in the organization to take it seriously, it's important for them to set that tone and show that example to the rest of the group. Another way they can help their cause is to recognize people who are using these tools in innovative ways and creating long-term value, whether through shout-outs and recognition or through rewards. It's important to point to the good examples and reinforce that this is the behavior we're looking for.

One of the HR leaders we spoke with compared this situation to being a three-year-old sitting in front of a car. We have the keys in our hand, and we can make the horn beep or open the door, but there's so much more the car can do that we don't even know about yet. That needs to be taken into account in both the governance and the training components: even the real experts don't know a lot of what can be done with these tools. We are at the very beginning of a marathon; this is not a sprint by any means. So it needs to be accounted for, and we need to help people understand how to deal with that ambiguity and that unknown.
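On the prompting point, here is one hypothetical template of the kind a basic training module might walk through: role, context, task, constraints, and output format spelled out explicitly instead of a one-line ask. The field names and example values are ours, not drawn from any interview.

```python
# Hypothetical structured-prompt template for basic prompting training.
# Everything here is illustrative; adapt the fields to your own use cases.
PROMPT_TEMPLATE = """\
Role: You are an HR generalist drafting internal communications.
Context: {context}
Task: {task}
Constraints: Under 200 words; no confidential or personal data.
Output format: A subject line, then the message body.
"""

prompt = PROMPT_TEMPLATE.format(
    context="We are rolling out a new learning platform next month.",
    task="Draft an announcement encouraging employees to enroll.",
)
print(prompt)  # this filled-in prompt is what would be sent to the model
```

Templates like this are one reason prompting is teachable at scale: the structure, not the wording, is what carries over from task to task.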
So now we've talked a little about what's currently facing leaders and some of the challenges they're dealing with. But it's also important to talk about some of the factors that are going to affect the future of how generative AI is used in HR. For that, I'm going to turn it over to Matt to talk about the AI divide. Matt, take it away.

Thank you, Matt. The leaders in our interviews mentioned this gap, but before we jump into it, a quick poll question. Which of the following are you prioritizing: attracting and retaining talent that is comfortable using and innovating with AI, or training existing talent that is not yet comfortable using and innovating with AI? Let's give people a few moments to answer. I'm curious to see whether people are focused on attracting that kind of talent or on building up their existing workforce. Hopefully you're doing both, but I'm curious what people are prioritizing.

And neither is going to be easy; both will require a decent amount of effort.

Yeah, it's going to be a challenge, and we'll talk about that in just a second. But let's go ahead and see the results.

Oh wow, so training existing talent that is not yet comfortable using and innovating with AI. I thought this was going to be much more of a 50/50 split; I'm surprised to see it at roughly 25/75.

Yeah, that's interesting. So in our interviews, the terms AI migrants and AI natives were coined. The idea was that an AI migrant is a person who is not yet used to using generative AI and is trying to learn and get to a more knowledgeable place, while AI natives are people who have integrated gen AI into their day-to-day, whether at work or in their personal lives, and are used to it and comfortable around it. Leaders are going to have to navigate these two populations and create an environment that stimulates and encourages AI natives while supporting and training AI migrants. Ensuring inclusivity in those training programs is super important, so that AI migrants feel comfortable trying new things, asking questions, and figuring out what they can and cannot do, and AI natives don't feel stifled or limited to moving in one direction; they can experiment, challenge what they already know, and grow their knowledge.

It's important to frame this with the idea that a few years ago, the vast majority of the workforce was not using generative AI; it wasn't part of their day-to-day. That's rapidly changing, and people are having to become accustomed to it and adapt. It's going to reach a point where people are so used to generative AI that they would feel lost without it in their day-to-day. And that will only become more pronounced in future years, as new generations join the workforce even more accustomed to generative AI.

And just to highlight that point, Matt: one of the things leaders will have to prepare for is that it's not going to be a smooth or linear change. There are going to be pockets, jagged edges, in terms of which groups are comfortable using these tools, which are not, and which continue to resist using them.
People have differing opinions on the value these tools can bring and the potential risks associated with using them. You made the point that some people may eventually be uncomfortable operating without these tools; well, at the same time I see consistent pushback from people asking, why would we want to bring that future into being at all? So it's not going to be a linear, gradual shift. It's going to be a lot of jagged edges, and leaders are going to have to figure out how to bridge that gap, as you were saying.

Now I'll hand it back over to Matt, who's going to talk a little bit about expectations and AI's impact on productivity.

Thanks, Matt. As we just highlighted with the gap, there are different groups at different comfort levels, and that will only be exacerbated as time goes on. There are also going to be different expectations across those groups, but more broadly, there are different expectations from employees and workers and different expectations from leaders. The ability to integrate generative AI, and the value you can get from it, is going to hinge on how well leaders can balance those different situations.

On the one hand, I think we're going to see continued pushes from workers to have access to these tools and to be upskilled in how to use them. One of the HR leaders we spoke with pointed out that no one in a job interview today asks whether they're going to have access to Microsoft Office or Google Workspace; these are basic software suites we assume are essential and that everyone will have access to in their job. The point this person was making is that there is going to be a suite of gen AI tools that are just as ubiquitous. It's not clear yet which tools those will be, and it will change over time; again, we're at the early stages. But access to those tools is going to be expected. What may set organizations apart right now, giving access to cutting-edge tools, giving people freedom, creating a culture in which they're encouraged to experiment, share, push forward, and innovate, is soon going to be more and more just expected as a baseline. Leaders need to understand that workers are going to have those new expectations.

But leaders also need to make sure they're setting realistic expectations of their own when it comes to productivity gains. This is something we heard throughout our interviews, and it's also highlighted in our C-Suite Outlook data; the report for 2025 just came out, looking at what CEOs and other C-suite executives are thinking about and most concerned with for 2025. One of the things that stood out to us was the question about AI and the benefits leaders hope to get from it. For CEOs, by far the number one answer was productivity gains. And one of the leaders we talked to during our interviews pointed to that as an area of concern, because she thinks there is growing friction, or misalignment, between people's expectations of what these tools can do and how quickly they can be incorporated, the speed with which that can be done, and the level of impact these tools will actually have.
Because, again, going back to the point about adoption: it takes time, training, and effort just to get people to use these tools in their day-to-day, let alone to use them well. What she and some of the other leaders we spoke with were seeing were leaders who consistently had high expectations of quick gains: give people access to these tools and we'll see the effects quickly. And that's just not the case. If anything, you can expect some productivity declines before productivity increases, simply because people are getting used to these tools, finding new ways of working, and figuring out how to incorporate them in the best possible way. That takes time and effort and can be messy. It's important for leaders to set realistic expectations while still working toward the productivity gains they hope to get from these tools. It's not going to come quickly, and it's not going to come easily.

This goes back to the point about why it's so important for leaders to be using these tools themselves. Partly it's to get that first-hand experience of what using these tools entails, how they help in some ways and are difficult in others. It's not a smooth, easy experience; it takes time and getting used to. The hope is that by using these tools personally, leaders can better understand those jagged edges, the shape of how these things are developing throughout the workforce, what the tools can and cannot do, and what they can reasonably expect from workers who are using them, including where workers may have more difficulty. Finding that balance is going to be tricky, but it's really important. Otherwise you end up in a situation where you're spending millions of dollars on access to these tools, on implementation, and on all the other resources and time that go into integrating them, and then unrealistic expectations lead people to conclude it was all a huge waste. So it's important to understand what this is going to look like and be prepared for it to take longer than you might expect.

One other thing that stood out to us, which one of the leaders we talked with highlighted, is the need to compare people fairly when it comes to performance: comparing people who have access to these tools, or who are proficient in using them, with people who don't have access or aren't as comfortable yet. At some point you can expect people to become more proficient, and that can be an expectation placed on workers. But if you're not giving workers access to these tools and you're comparing performance across groups who do and don't have access, you run into trouble. So when considering performance and setting standards and expectations, look at what tools people have access to and how well you can reasonably expect them to use them. Those are some ongoing challenges.
Again, I think these will be exacerbated as some people speed ahead in incorporating these tools into their work while others lag behind or have resistance or hesitancy to overcome before they can start using these tools effectively.

So now we're going to move on to the big one: job displacement fears. I'll turn it over to Matt for this last one.

Thank you. Going to our next slide, you can see two graphs. Both asked respondents how confident they are that their organization will provide the upskilling and reskilling they need to do the following: on the left, to be able to use AI effectively in their work; on the right, to take on new tasks and responsibilities as others are allocated to AI. The answer choices range from highly confident and confident to somewhat confident and not confident. If you add up the two left bars in each graph, you get the confident share, and you can see that less than half of the workers who responded were confident or highly confident that their organization will provide essential upskilling and reskilling for AI.

It's important to note that this data is from the middle of last year and is not static; it will change over time, and it may have changed already. However, it's an important trend to take notice of. If it continues to move in the same direction, and confidence in organizations' ability to provide that upskilling and reskilling for AI continues to decrease, it creates a real risk for organizations. It could realistically lead to a workforce that feels ill-equipped to do the work it's being asked to do, which would lead, in theory, to a decrease in morale, a decrease in retention, and possibly a decrease in productivity. The decrease in morale would come from a growing fear of one's job being replaced, which inhibits people's ability to do their work. The retention risk connects to what we talked about earlier with AI natives: these tools are something they're comfortable with, want to use, and consider table stakes. If they're not being given the tools at their current organization, it would be natural for them to look for another organization that will give them access to the tools they want to use. So it's an important moment to recognize.

Something important that isn't communicated in these charts, but is worth taking stock of, is that this revolves around communication. Leaders need to communicate these training programs, and develop them together with workers, so they can better understand workers' pain points, what they hope to learn and get out of the programs, and how that aligns with their professional aspirations. If a person wants to get better with gen AI and develop their skills, they can do it at the organization they're currently at, so long as the organization is working with them to do so. In turn, that would in theory reduce the fear of job displacement and of work being taken away.
Here we have another point from our survey research. This one comes from the question: which of the following are the two most important priorities for human capital leaders to help the enterprise capitalize on the benefits of AI? We asked this question twice: once in the middle of last year to HC leaders broadly, and again at the end of the year to CHROs specifically. You can see there was alignment on the top priority, which was modeling experimentation with AI pilots and use cases in human capital management functions. More interestingly, there was also alignment on the least important priority, which was implementing reskilling strategies for jobs with a high probability of at least 25% of current tasks being taken over by AI. If these trends continue over the next couple of years, organizations will find themselves in a really tricky spot, asking: how are we going to reskill our workforces now, if we didn't spend the time to do so when everyone was at the jumping-off point for this transformation?

Yeah, and Matt, just to piggyback on that, I want to highlight the poll results from earlier, where people were split 75/25 toward encouraging their existing workforce to use these tools and helping upskill them. If people are primarily going to take the approach of training their current workforce to use these tools, then this reskilling prioritization needs to go up pretty dramatically. There's a variety of situations across organizations, and everyone has their unique context. But going back to our report from August about workers' attitudes toward learning, and how they expected their organizations to train them (or not) for AI: one of the things that stood out to us was the low level of confidence that their company-provided learning resources could equip them to reskill into slightly different or very different roles. Confidence was relatively low even for slightly different roles; for very different roles, if people need to move from HR into a different function, for instance, only about 30% of workers were confident or very confident that they could make that transition using their company-provided resources.

So, as Matt mentioned with the last chart, this changes over time; people's opinions of whether their organizations will upskill and reskill them will ebb and flow. But it's going to be important for a couple of reasons. One, you want people to actually be able to use these tools effectively. And two, there's the effect on well-being and people's commitment to the organization: whether they feel at risk, or feel at ease and can say, yes, my organization is going to prepare me for whatever is coming.
One of the leaders we interviewed highlighted what he thinks is a major issue: he thinks a lot of organizations are basically lying when they say, AI won't take your job, but people using AI might take your job. He was saying that's dishonest, or is going to feel dishonest to people, if these technology trends continue and people actually do start losing tasks, or even their entire jobs, to tools that can take on work previously done by humans. His suggestion was to say instead: we may not be able to keep your job, but we will give you the tools you need to be competitive in the job market moving forward.

To me, both approaches are flawed. It's important to be honest about the risk people are facing from these tools, maybe not in their current form, but progress and development are advancing rapidly. It's also important to be honest about the organization's intentions: whether you're going to say, yes, we're committing to upskilling you, to reskilling you, to giving you every opportunity to continue doing work at our organization, just maybe doing something different; or whether, going back to that augment-versus-replace debate, you're going to tip the scales the other way. As leaders, as the people who decide what is prioritized and what behaviors are incentivized, you decide how this affects workers, and society at large, by making those decisions and building out the infrastructure for things like upskilling and reskilling, which right now, as we're seeing in this chart, is maybe not getting the level of attention that we think it warrants. So that's something from our interviews that we wanted to share.

With that said, I want to highlight a few of the recommendations we have for people coming out of this research. I also see a couple of questions in the Q&A box; if you have questions for us, please put them there and we'll be happy to talk about them in just a second. But before we do that, here are four main recommendations for organizations.

One is the importance of developing AI literacy within HR and for HR teams. It's important for HR to understand these tools and to know how they're affecting the HR function, the wider enterprise, and workers across the organization, in order to understand what HR needs to do to support those workers and the enterprise through its transition to whatever may come next. That starts at home, as it were: HR teams need to know what's going on and understand these technologies to do their own work effectively.

Two, it's important for HR leaders to be part of the conversation around AI governance and ethical standards. The majority of the effects and potential risks of these tools relate to people and how they affect people, so HR has a duty, and a place, to make sure it is protecting its workers and its organization as best it can from some of those risks,
while also promoting people's use of these tools and ensuring they're getting the most out of them, whether through governance or through the training we talked about previously.

Three, related to that, is the need to promote a culture of continuous learning. It's important not only for HR leaders to model what it looks like to use these tools and to stay on top of what's happening and changing in the landscape, but also to ensure that others throughout the organization are incentivizing their teams and workers to use these tools as effectively as they can, and rewarding the folks who go the extra mile: those who adopt tools quickly, come up with innovative new uses, and share them with other people in the organization. All of that is going to be important.

And last but not least, communicate the organizational role of generative AI. Help people understand what you're trying to accomplish as an organization in using these tools. Are you going to provide access to the tools they need? Are you going to provide the training and the resources and commit to augmenting the workers you have as best you can, while also preparing people for the reality that some folks may have their roles changed or replaced by these tools moving forward? Be transparent about that reality and help people begin to prepare for what it might entail.

Those are just a few of the recommendations from this report; there are obviously others within the report itself. We can quickly show you the insights we highlighted in the report and then take time to answer any questions you might have.

First and foremost is the point we made about leaders: the role that you all, and leaders generally, play in modeling usage, establishing governance, encouraging adoption, providing training, and ultimately deciding whether the organization prioritizes augmenting or replacing workers. Then there's the need for effective governance and adoption, understanding that these are intertwined challenges related to how we use these tools well, and the importance of pushing that forward and sharing that vision across the organization.

Then, looking at the future factors that will shape how generative AI is used within HR and beyond, it's important to recognize the current gap, which is only going to grow, between AI migrants and AI natives: people coming in with the cultural assumptions and expectations of a pre-generative-AI workplace versus those entering the workforce in coming years who simply expect these tools and are used to them as a way of working. There's a need to adapt to these new capabilities rather than keeping on with the same old ways and using these tools only in small ways around the edges.

It's also important to recognize that workers and leaders are going to have different expectations moving forward, stemming from the capabilities these tools offer.
Leaders need to make sure their expectations related to productivity and the level of innovation are realistic, while also doing the best they can to meet employees' expectations around what the work experience looks like: how quickly problems can be responded to or solutions developed, and the employee experience as a whole and how these tools are affecting it.

Last but not least is the question of job displacement and the need to be upfront with employees about the risks and realities of that situation, while also providing the training, the upskilling, and the reskilling they need to use these tools effectively, yes, but also to take on new tasks as some of their old ones are offloaded to machines. The point we would make is to encourage folks to look at all the things that can be done now that couldn't be done before, and how we can use the people we have in our organizations to unlock that new potential, rather than just seeing this as a chance to reduce headcount, which may produce short-term results but will create long-term difficulties.

With that said, let's look at the Q&A. The first question is from Larry, about where the possibility or need for change arises in relationships across workers at particular levels of the organization, and in relations across levels. I'm not 100% sure this will answer the question, but I'll do my best. One of the questions that stood out to us, and I was talking to a CHRO about this early last year, is how AI changes the silos within the HR function, or across the organization generally. This technology works best when it can cut across artificial boundaries that have existed for various reasons, related to how resources and information were distributed back in, say, the 1950s, and those walls have persisted. In a lot of ways, AI cuts across those boundaries, and it's most effective when you can, for instance, integrate data across different functions and business units. So there is definitely going to be a reordering, a re-creation, of how those relationships are structured and who you work with day-to-day, to account for the new technological capabilities coming to the forefront.

We've been talking about this for years, and we'll see if it actually happens, but a lot of these tools change access to information in two different ways. One is that people at the front line can get access to a lot more data, and a lot more context, than they otherwise would have. The other is that leaders are going to have access to data much more quickly than in the past, when you might go to your team of analysts, say, "I have an XYZ question, get back to me in a week." Now there's at least the potential for a leader to go to a tool themselves, ask their question, and get an answer very quickly. And again, going back to the need for leaders to use these tools and be familiar with their strengths and weaknesses: one of the questions that stands out is whether leaders will understand that just because an answer is presented doesn't mean it's correct.
Some of the skills we've built up for detecting when humans are doing sloppy work don't necessarily apply to the machine. I'll give an example from one of our interviews. One of the HR leaders we talked with said he was most concerned that these tools change some of the warning signals that tell him whether someone has actually done the work needed to make an argument effective. A presentation that before might have looked sloppy because a person threw it together at the last minute can be made polished with generative AI. Even in an email or a memo, a bad argument used to be quickly apparent: oh, this is nonsensical, this person was slacking or misunderstood my question. But because generative AI can produce reasonable-sounding content, it's a lot harder, and you might have to spend more time analyzing the output before you realize, wait a second, that doesn't make sense. Those are a few of the ways I think these tools will change relationships and the way people work with each other.

Oh, Matt, if I may really quickly: I think we also touched on this a little earlier when we talked about evaluations and how you can't necessarily compare people who are well versed in this technology with people who aren't. If your organization is doing those evaluations and comparing everyone at the same baseline, it may create a sense of competitiveness between AI migrants and AI natives, and that's not a direction you want to move in. You want to encourage collaboration, where people feel they can share with each other, as opposed to "I need to keep this knowledge to myself so that I get an edge and can shine and be the star."

Yeah, that's a good point. You want to be able to merge those groups into a unified workforce, which means creating a culture where AI natives feel stimulated and able to thrive, while AI migrants feel supported and are given the help they need to become more comfortable with these tools. Otherwise you can get pockets, different silos, different people with different attitudes across the organization; ideally you want to make that integration as seamless as possible. And then ensure that AI natives are sharing what they're learning with AI migrants, and AI migrants are bringing their own perspectives and the value they have from things unrelated to these tools, so the two can work together effectively. The challenge, again, is that people coming into the company may just have different expectations about what work looks like from people who are used to an older way of doing things: different views of what work is supposed to be, and how you're supposed to do it.

I see a question here from Joy Marsh about how organizations that prioritize sustainability balance the use of AI with its impact on the environment. I have a couple of thoughts here. I believe it was both Google and Microsoft
that have walked back their sustainability commitments because of increased energy usage specifically related to AI. Obviously there are a lot of other environmental impacts that come with building new data centers, developing the chips required to power these models, and all those sorts of things. One area that maybe gets overblown a bit is water usage. It's unclear just how much water is actually consumed cooling these data centers, because a lot of it is recycled within the data center, though some of it is not; so that's an area where the picture is a little trickier. But certainly the energy requirements are going to be enormous; I think that's one of the main limitations on compute moving forward. Here you see a lot of tech companies looking at new clean sources of energy, specifically nuclear, and also an increased emphasis on solar and geothermal: consistent energy sources that don't require burning fossil fuels.

But it's something companies cannot do alone. It's going to require consistent effort from government, especially when it comes to strengthening national infrastructure around energy systems, but also regulations around what can or cannot be built, and how easy it is to develop solar fields or create a new nuclear power plant. That's something companies cannot just decide on their own, so it's a public-private dance. And companies are, I guess, trying to have their cake and eat it too in some ways: they make these sustainability commitments, but there are clear indications that they're walking them back, at least for now, in terms of what they're prioritizing. So it's on us, as consumers, as leaders within these companies, or as people who work for them, to hold them accountable as best we can and make sure they're honoring those sustainability commitments and working for a better future for all, not just a better future in terms of the number of data centers available. It's obviously a complex question that brings in a lot of different groups with a lot of different views.

We're almost at the end of our time, so thank you all for those questions; I hope you've enjoyed our conversation today. This is not the only way to get access to our content here at The Conference Board. We have upcoming webcasts on a variety of topics, such as cracking the code of talent retention, or looking at outcomes from the C-Suite Outlook survey I was just talking about a little bit ago. I encourage you to follow those upcoming webcasts and do your best to attend if you can; they're always interesting, cover a wide range of topics, and you'll always learn a lot from them. We also have upcoming live events, which I think are even more enjoyable because you can go even deeper, usually a day or a day and a half really diving into a topic. We have some coming up around specific industries like healthcare, and around People First: what it means to put people first and to prepare for the future in how we equip our people, our talent, and our leaders. So I encourage you to check those out as well.
And if you're more of an audio person, we also have our C-Suite Perspectives podcast, where our CEO Steve Odland hosts a variety of interesting guests, again covering a wide variety of topics. The episodes are pretty short, about 30 minutes each, and really informative: people at the top of their game talking about topics they know really well. So I encourage you to follow that podcast channel and check out the content coming through there.

And last but not least, if you're interested in having these conversations with your peers and learning how they're solving some of these challenges, whether related to generative AI or anything else you're facing, I highly encourage you to check out our councils. They're a great chance to connect with peers consistently, and they build the sense of community needed to have these conversations in an open and frank way that isn't necessarily possible in more public forums or with people you don't know as well. They're really valuable: a chance to hear from your peers at leading companies about what they're doing to solve some of these challenges. So I highly recommend checking those out.

Thank you again for taking the time to talk with us and to share your questions, and we hope you enjoyed it.