My Boss, the Robot?
What it will take to AI-proof your management job.
Posted January 18, 2023 | Reviewed by Tyler Woods
- Recent leaps in AI capabilities raise pressing questions about its role in the workplace.
- AI can prove indispensable as a team member.
- Future AI models could exhibit most behaviors required of an ideal leader.
- Human managers need to up their leadership game.
What was your reaction when you first came across ChatGPT? Enthusiasm, terror, or—most humanly—both?
Even before this biggest and baddest model of artificial intelligence stepped onto the world’s stage, our attitudes towards AI were caught in a mind/gut crossfire: most of us were explicitly positive about it, while deep inside we cowered in fear (Fietta et al., 2022).
Now ChatGPT takes those emotions on a whole new ride. Its “hard skills” are already intimidating, from writing software to formulating legal wording. But it’s the “soft skills” that punch us in the stomach. Traditional strongholds of human comparative advantage, such as storytelling, humor, and poetry, are crumbling. As recently as last year, researchers were confident that the leadership skills of storytelling and motivational discourse could not be replaced or even augmented by AI (Giraud et al., 2022).
I presented ChatGPT with a hypothetical scenario where I needed to bring bad news to my team. ChatGPT gave me a list of recommendations on how to strike the right, authentic tone, and a motivational speech to conclude with. As for storytelling, it can weave a tale on any topic, and in the style of the Iliad, to boot. Need to sprinkle some humor on top? Here is ChatGPT's response to a request for a joke on what managers and the Iliad have in common: “They both tell epic tales of battles and heroes, but in the end it’s just a bunch of paperwork.” Even though the original Iliad was not written down, that’s an exasperatingly valid joke.
For now, companies use AI only as a (very smart) tool.
Management research envisions three main roles as a basis for new AI-related jobs: trainer (providing the relevant data), explainer (describing to decision-makers what the AI is doing), and sustainer (making sure it operates properly, for example by purging it of biases) (Sollosy & McInerney, 2022). This evokes the image of a 1950s floor-to-ceiling computer busily poked and prodded by a host of experts in white coats: powerful, but a tool nevertheless.
But AI can become a colleague.
Yet, it is not difficult to envisage ChatGPT’s future versions being promoted to full colleague status. Imagine Ava, an AI robot, as a member of a team. She participates in all team meetings and has access to all online interactions between subsets of the team. Soon, Ava becomes indispensable because she stores all the undocumented knowledge created by the team and can synthesize it as needed (“Dan and Michael toyed with a similar idea last week, so we could put everything together…”). And that's even before she starts sharing information with Adam, the robot from another team, thus helping avoid duplicating work and optimizing collaboration. By learning from every team interaction, Ava also becomes a team dynamics expert. For example, when entering today’s meeting, she is aware of “under the table” context (Michael is irked by Linda’s unceremonious dismissal of his idea last week), so if Michael’s resentment bubbles up today, Ava will recognize its source and mediate the resulting conflict wisely. If this scenario sounds remote, take a look at the video in this research experiment: not only does a robot’s intervention in mediating conflict positively affect team members’ behavior, but they also tend to ascribe more wisdom to it than it actually has.
Back to Ava: if she invests some one-to-one time with every colleague, she can also propose solutions based on their individual motivations, using, for example, Cohen and Bradford’s (1989) “influence exchange” theory (isn’t it ironic that studies on AI can become outdated in months, while Ava will be able to use decades-old research on humans, as our psychology is essentially unchanged?). So Ava would know that “John loves being recognized as an expert, while Dan appreciates work that reaffirms his openly stated values,” and would attempt to offer John and Dan what they value most in exchange for optimal team performance. Wait…isn’t this starting to sound like management?
Could AI be our boss?
A myriad of issues should be addressed across wide fields, from ethics to our species’ very survival, before we could fully answer this question. But let’s imagine for a moment that humanity has found a solution that puts to rest all our worries about an AI-controlled, dystopian future. How would we feel about being led by Ava?
Emotions: a dealbreaker?
The generally shared view is that AI will not experience emotion any time soon. Therefore, our first reaction might be “I cannot work for an emotionless manager” (and some of us might even be speaking from experience). On closer look, though, that implies that Ava would have no inflated ego, no inclinations to favoritism, no narcissistic, abusive, or insecure behaviors. As a manager, I once worked with my team for months preparing a strategy for a new product with enormous potential in our specific geography. Then it was angrily swept off the table by the top decision maker in the organization because he had had a really bad experience with a similar product in a different market. With the thud of a coffin lid, the case was closed. Ava would bring no emotional baggage when making such decisions.
Emotional intelligence, a must-have of leadership, includes the ability to recognize and manage one’s own emotions. But that is mostly to prevent our feelings from running amok and sabotaging our interactions with others. With no personal emotions to rein in, Ava could fully focus on recognizing and appropriately responding to the feelings of others. And if she initially misfired with some blunt or hurtful reaction, we would probably forgive her and help her learn, safe in the knowledge that “she didn’t mean anything by that.”
What is good leadership and will AI get there?
The most researched and battle-tested model of good leadership is so-called “transformational leadership,” a combination of four leader behaviors (Piccolo & Colquitt, 2006). These are “inspirational motivation,” “individualized consideration,” “intellectual stimulation,” and “idealized influence” (yes Ava—in case you are reading this—academics love tweaking the names of concepts until they all start neatly with the same letter).
Inspirational motivation is about making people proud to be part of a team that has a motivating purpose. For middle managers, this often entails making sense of how the team’s goals fit with those of the organization and vividly painting the bigger picture, conveying meaning and purpose. This should be achievable for Ava, who not only has been trained on the best motivational speeches in history, but also has real-time information on organization-wide decision-making.
Individualized consideration has to do with giving dedicated attention to each team member, to their needs and their development. Human leaders often fall short of this requirement not because they are horrible bosses, but because they are overwhelmed with work and distracted. Ava would remember that Dan’s youngest daughter has a ballet performance this coming Friday and would make sure to ask how it went the following Monday. With instant access to all of Dan’s talent development files, she would never miss an opportunity to offer him a relevant learning experience, all the while showing high emotional intelligence.
This also connects to the third transformational leadership behavior, “intellectual stimulation,” challenging people to think in new ways—which sounds like something right up Ava’s street. When Dan is struggling with a particular challenge, Ava could gently suggest new, different ways of thinking about it, drawing, for example, on all the business school case studies describing similar situations.
The stumbling block: idealized influence.
The fourth transformational leadership behavior, “idealized influence,” is a tougher nut to crack for Ava. Leaders showing idealized influence make followers admire them and adopt their values by “walking the talk.” To stay true to their values, they are ready to take risks, such as being excluded, humiliated, or taken advantage of. None of this would have any meaning to Ava. Similarly, she could not hold herself ultimately accountable for the team’s performance. Great leaders have long sworn by a simple rule: “attribute all successes to my team, take all failures upon myself.” It would cost Ava nothing to do that, making her gesture worthless to the team.
Finally, idealized influence in leadership requires passion. If you are lucky enough to have ever had a leader whom you deeply admired and would have gone into battle with, it is probable that she or he was a passionate individual. Passion involves a touch of “madness,” persisting when all odds are against you, believing against all logic. Ava is too good at logic and at calculating odds to be passionate.
So AI will not quite be a transformational leader. But how many managers are?
If you are among the 60% of all employees worldwide who are not engaged at work, your manager is very likely not an inspiring leader. Would you be better off with Ava instead of your mediocre (if not downright horrible) boss?
With the new wave of AI, we can already sense that artificial intelligence will be replacing those who deliver lackluster performance in many professions: content writers, programmers, graphic designers, to name a few. The management profession could be next, and bosses who do not give much thought today to their leadership should up their game.
Yet, like truly talented artists, transformational leaders will not meet their cybernetic match anytime soon. The good news is that transformational leadership can be developed, and AI can help by taking many time-consuming tasks off managers’ hands. Here is a chance to take our—very human—leadership to another level.
Cohen, A. R., & Bradford, D. L. (1989). Influence without authority: The use of alliances, reciprocity, and exchange to accomplish work. Organizational Dynamics, 17(3), 5–17. https://doi.org/10.1016/0090-2616(89)90033-8
Giraud, L., Zaher, A., Hernandez, S., & Akram, A. (2022). The impacts of artificial intelligence on managerial skills. Journal of Decision Systems. https://doi.org/10.1080/12460125.2022.2069537
Fietta, V., Zecchinato, F., Di Stasi, B., Polato, M., & Monaro, M. (2022). Dissociation between users' explicit and implicit attitudes toward artificial intelligence: An experimental study. IEEE Transactions on Human-Machine Systems, 52(3), 481–489. https://doi.org/10.1109/THMS.2021.3125280
Piccolo, R. F., & Colquitt, J. A. (2006). Transformational leadership and job behaviors: The mediating role of core job characteristics. Academy of Management Journal, 49(2), 327–340. https://doi.org/10.5465/AMJ.2006.20786079
Sollosy, M., & McInerney, M. (2022). Artificial intelligence and business education: What should be taught. International Journal of Management Education, 20(3). https://doi.org/10.1016/j.ijme.2022.100720