Will AI Replace You? 2040’s Ideas and Innovations Newsletter, Issue 185

Kevin Novak


Issue 185, November 7, 2024

One of the most popular debates of 2024 is what AI won’t replace. Is it me? Is it you? Is it all of us? Is it none of us?

Surely you can think back to movies that projected what a possible future might look like as technology becomes more immersed in every aspect of our lives. Science fiction often isn’t fiction, but rather prediction. In real life, some of us will place visors over our eyes and live virtually while technology makes all the decisions for us in alt-worlds. The ethicists debate whether technology will have the agency to determine what is right or wrong without human intervention. The optimists hope that technological prowess will become a source of energy and transform our value and purpose. The pragmatists will support tech that makes life easier, removes boring and repetitive tasks, and gives us more time for a rich life instead of an all-consuming need to work. And the Alphas are already saying life is complicated and complex, so let’s try to make it easier.

Technology in Transition

We are at a hinge point in human history. With a world population of 8 billion, we need to collectively advance our society by understanding an increasingly complex trove of cultural, social, economic and business dynamics. One of the most important goals for our technological evolution is to improve our information networks. We are most successful when we come together, recognizing that no one can know everything. There is wisdom and learning in a crowd that helps us progress faster and more productively. Technology helps us in those regards.

There are many things we simply are not efficient at: crunching numbers, analyzing large sets of data, and finding commonalities or disparities across large bodies of information. These are not skills rooted in information networks; they are mechanical, repetitive tasks. Enter artificial intelligence, possibly the most significant factor in determining our future.

AI and Risk

In our pursuit of progress, we often focus on the positive possible outcomes. At 2040, we advise our clients to see the full consequences of actions and endeavors. Take AI, for example. There are contingents in society, either through their own analysis or out of simple fear and anxiety, that are ringing warning bells about our creation of AI. They are skeptics who remind us that we may not see the long view and that, as a result, there will be significant consequences that may compromise humanity’s independence and control. There are other contingents who continuously promote the benefits of AI. The skeptic in us believes that many of these advocates have a stake in the game as investors or influencers.

The objectivity of AI comes with its own challenges and risks. When we program or prompt AI to perform a task or set of tasks, we have construed it to be somewhat “human” in our interaction. Think of asking Alexa to tell us a joke or a story. But what we miss in the exchange is that it is programmed to do what we ask regardless of consequences, with no conscience and without any subjective nuance. Only the human mind can recognize and consider ethics.

AI-based algorithms are engineered to increase our engagement with bots and feeds. TikTok is a great example: it gives us more of what we are likely to react to and immerse ourselves in. And then it’s on a roll, systematically serving us more and more without understanding that engagement may come at the cost of mental health, increased anxiety, shaming, ridicule and, of course, misinformation. The flip side of AI connecting a positive, shared community is that it can deliver content that deepens divisions, stokes anger and has even contributed to suicides.
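To make that feedback loop concrete, here is a deliberately oversimplified sketch in Python. The topics, catalog and scoring rule are our own inventions for illustration; no platform’s actual ranking system is this simple, but the dynamic is the same: whatever gets a reaction gets amplified.

```python
from collections import defaultdict

# Hypothetical item catalog: topic -> example posts (purely illustrative).
CATALOG = {
    "cooking": ["knife skills", "weeknight pasta"],
    "outrage": ["viral feud", "pile-on thread"],
    "fitness": ["5k plan", "stretching basics"],
}

def recommend_feed(engagement_history, n_items=2):
    """Return the topics a naive engagement-maximizing feed would surface next.

    Each topic is scored by past reactions, so whatever the user engaged with
    before gets shown more often -- the feedback loop described above, with no
    notion of whether the content is healthy for the viewer.
    """
    scores = defaultdict(float)
    for topic, reacted in engagement_history:
        scores[topic] += 1.0 if reacted else -0.5  # reward reactions, lightly penalize skips
    ranked = sorted(CATALOG, key=lambda t: scores[t], reverse=True)
    return ranked[:n_items]

# A user who reacts mostly to outrage content sees the feed narrow toward it.
history = [("outrage", True), ("cooking", False), ("outrage", True), ("fitness", False)]
print(recommend_feed(history))  # ['outrage', 'cooking']
```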

Replacement AI

So, the question is whether AI is the solution to all the world’s problems or the bane of our existence. Can AI replace us?

· Is AI more precise? Anecdotally, it reads many radiology images better than trained radiologists, seeing things hidden from the human eye.

· Is it more loving? Hollywood is on top of this one: Check out “Her” if you haven’t already seen it. And there are more frequent reports of lonely men in love with their AI companions.

· Is it more friendly? We know people who cook with Alexa bantering their way through the process of making dinner.

· Is it more empathetic? Tell that to the Florida parents whose 14-year-old ninth-grade son fell for a chatbot and died by suicide so he could be with “her.”

· Is it more competent? It’s been auto-piloting jets for decades.

· Is it more dangerous? Bullying amplified by social media algorithms is a threat to the mental health of people of all ages.

· Is it more efficient? It is widely accepted that it can perform repetitive, information processing, and analytical work faster and more accurately than humans.

· Is it better at “taking orders”? AI will do exactly what humans tell it to do without pushback, gripes or citing union regulations.

What AI Can’t Replace

In the context of how we work, here’s what AI can’t do better: business wisdom. It can’t be automated.

We have always been advocates of using AI in its various forms as a tool to help organizations achieve higher levels of performance. But we have also cautioned that AI needs guardrails, structure and a recognition of personal agency, now and in the future.

Yet we have to agree that AI has made obsolete many tasks formerly managed in Excel. It can crunch numbers, spit out analytics and help manage workflow. But it can’t replace pattern recognition and strategic intuition.

Why does that matter?

Some of us are highly adept at spotting market trends before the data confirms them. These individuals have keen observational skills that let them sense shifting customer needs before those needs are articulated, or when there is not enough data to see the trend. They also understand the emotional drivers of purchasing decisions, which helps them recognize market opportunities that don’t show up in the data.

After all, AI predicts the future from past data. It is the sum of its parts and can only assess, analyze and respond using the training data it has been given, a topic we wrote about in The Truth About Transformation. It cannot recognize the subtle competitive threats, rooted in industry dynamics, that are observable to individuals with foresight. Nor can it replicate the systems thinking that sees the larger picture holistically. Cyclical business patterns, for example, may not be evident in short-term or even historical data. There is still a gut instinct to pattern recognition. Not many fashion designers would fall back on AI trend analysis to create a new seasonal collection, and it is still up to the critics to decide whether computer-generated art is legitimate art. The fact is that individuals with a keen sense of human behavior transcend the algorithms in identifying emerging trends.
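As a toy illustration of that limitation (the numbers and scenario below are invented, not drawn from any client data), a model fit to historical figures can only extrapolate the pattern it was trained on; a structural break it has never seen is simply not in its vocabulary.

```python
import numpy as np

# Illustrative only: five years of steady quarterly revenue growth (made-up numbers).
quarters = np.arange(20)
revenue = 100 + 3.0 * quarters + np.random.default_rng(0).normal(0, 1.5, 20)

# Fit a line to the historical data and extrapolate the next four quarters.
slope, intercept = np.polyfit(quarters, revenue, 1)
future = np.arange(20, 24)
forecast = intercept + slope * future

# A break the data never contained -- a new competitor, a pandemic, a shift in
# customer sentiment -- is invisible to the model until after it happens.
actual = forecast * 0.6  # hypothetical 40% shortfall the model could not foresee
print(np.round(forecast, 1))  # what the past-data model predicts
print(np.round(actual, 1))    # what the hypothetical disrupted market delivers
```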

Negotiation

AI is an indispensable tool for reading contracts and identifying discrepancies or potential financial loopholes. Use cases abound of law firms using AI to help review legal documents and correlate them with laws and regulations. But when it comes to the actual negotiation between two parties, AI is less reliable at reading between the lines of complex deals and recognizing hidden value and risks.

An experienced and wise human negotiator understands the power of leverage and how it can be used to achieve consensus. It’s unlikely there will ever be training data that enables AI to recognize the internal motivations of the humans it negotiates with. Those motivations, whether positive, sheer manipulation or posturing for power, are often unspoken, revealed only through subtle visual cues. Perhaps there will be a day when Elon Musk’s ventures or other companies produce implants that read minds.

AI also has no sense of timing; it is still an aggregation of historical data, not keyed into the human dynamics that make timing and patience advantages. Humans can be patient when seeking an outcome; at times, moving immediately toward the goal isn’t advantageous. Patience is, of course, a virtue we seem to understand and embrace. AI is not patient; it responds in seconds.

Reading the Room

Many new startups have developed meeting-note systems focused on reading non-verbal facial and body cues among participants to reveal unstated emotions. This can be creepy for some, but useful for managers who need to fall back on non-intuitive devices to guide them. We would argue that those managers are not in the right job.

Conversely, humans, not machines, are better at reading the unspoken politics within an organization. Reading these behaviors helps in understanding and managing team dynamics and hidden agendas. An organization, including its workforce and customer base, is not objective. Each individual comes with his or her own motivations, goals and power positioning to play ball, derail, rise above, seek vengeance and more. AI is not a great tool for analyzing behavior in real time to reveal organizational politics and individual motivations. A video recording of a meeting tagged with data points may reveal behavioral motivation, but it’s not likely that AI can accurately predict what is happening below the surface.

People, not machines, build trust across diverse stakeholder groups and can navigate complex partnerships. Prescient, empathetic humans can spot an organizational palace coup before the data rolls in. And people with a genuine interest in others can build relationships and personal trust. The power of experiential wisdom should not be underestimated.

De-Risking Crisis

Anyone who has had to manage an organizational crisis knows it is an art, not a science. Responses to persistent reporters’ requests are made on the fly, based on an understanding of unfolding events, not on waiting for a final data-driven conclusion. Experienced leaders can make decisions with incomplete information, balancing short-term pressures with long-term sustainability. Dave Calhoun, former CEO of Boeing, would be a bad example of this.

Wisdom provides the ability to distinguish between healthy and unhealthy risks. AI has no sense of when something seems too good to be true, nor the judgment to determine whether it really is. Nor can it assess the ripple effects of decisions. AI is literal, sequential and basically clunky, even though it seems cloaked in brilliance when it spits out 15-second solutions.

Credible leaders, on the other hand, manage public perception during challenges. AI cannot face the media and their cameras to provide context with empathy and vulnerability. Human wisdom makes leaders relatable and trustworthy — the communication skills essential to leading through a crisis.

Cultural Intelligence

We challenge anyone to give us a case where a machine learning system can recognize and understand resistance to change below the surface. Interpersonal communication is not in AI’s toolbox; it does not know when to push and when to pull back. AI is largely one-to-one, not one-to-many, and it still needs to be prompted. It cannot maintain morale during difficult situations or provide the inspiration to lead a workforce forward. It is culturally mute.

The jury is still out, however, on how well AI mimics an authentic voice and context in promotional and creative writing. If you’ve ever spun promotional messaging out of any of the large language models, you may have noticed that the results are factual, informational and soulless. Siri and Alexa do not sound like your most trusted business consultant; they sound like the recitation of a directory, without context or depth. The number of prompts and the amount of fine-tuning required to get close-to-human results can make it less efficient than writing the communications yourself.

AI Integration

Having said all this, AI is an extremely useful tool to augment human intelligence (not wisdom). The AI an organization relies on is only as good as the composite of information it is given. With the churn of a younger workforce that lacks long-term loyalty, AI-based organizational knowledge archives are critical for documenting and transferring legacy knowledge. Such an archive can also become a repository for capturing experienced leaders’ decision-making processes, a sort of library of business use cases, and it can support training programs that fuse crucial business wisdom with building digital capabilities. But it is limited to serving as a reference library, not an active agent in managing an organization.
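As a sketch of what such a library of business use cases might look like in practice (every name, field and record below is hypothetical, not a prescribed system), the archive stores documented decisions and retrieves them on request, leaving the judgment to the people who consult it.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One documented decision from an experienced leader (fields are illustrative)."""
    title: str
    context: str
    decision: str
    lessons: str
    tags: set = field(default_factory=set)

class KnowledgeArchive:
    """A passive reference library of past decisions: it stores and retrieves,
    but it does not decide -- a library, not an agent."""

    def __init__(self):
        self._records = []

    def add(self, record: DecisionRecord):
        self._records.append(record)

    def search(self, *keywords):
        """Return records whose tags or text match any of the given keywords."""
        wanted = {k.lower() for k in keywords}
        hits = []
        for r in self._records:
            text = f"{r.title} {r.context} {r.decision} {r.lessons}".lower()
            if wanted & {t.lower() for t in r.tags} or any(k in text for k in wanted):
                hits.append(r)
        return hits

# Usage: capture a veteran leader's reasoning so a newer employee can look it up later.
archive = KnowledgeArchive()
archive.add(DecisionRecord(
    title="2019 pricing change",
    context="Churn rising in the mid-market segment",
    decision="Introduced tiered pricing instead of an across-the-board discount",
    lessons="Segment-level signals mattered more than the aggregate numbers",
    tags={"pricing", "churn"},
))
print([r.title for r in archive.search("pricing")])
```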

Veteran leaders are essential in identifying which processes shouldn’t be automated, because they understand when human input is more critical. Balance is key in implementing machine learning, so that automation enables humans to do their jobs. Our book, The Truth About Transformation, is a deep dive into the pitfalls of treating technology as a panacea for all business challenges. It is the human factor that always trips us up in organizations. If we blindly believe what AI churns out, we are forfeiting our greatest skills: empathy and wisdom.

Please share your thoughts with us.

Explore this issue and all past issues on 2040’s Website or via our Substack Newsletter.

Get “The Truth about Transformation”

The 2040 construct for change and transformation. What’s the biggest reason organizations fail? They don’t honor, respect and acknowledge the human factor. We have compiled a playbook for organizations of all sizes to consider all the elements that comprise change, and we have included some provocative case studies that illustrate how transformation can quickly derail.

Now available in paperback.

Order your copy today and let us know what you think!


Written by Kevin Novak

4X webby winner, CEO and Chief Strategy Officer @2040 Digital (www.2040digital.com), IADAS Member, Speaker, Author, Science Nut
