Transformative Technologies, Disruption and the Shape of Things to Come, 2040’s Ideas and Innovations Newsletter, Issue 79

Kevin Novak


Issue 79: Oct 27, 2022

As we contemplate the end of another year, now is a good time to revisit the opportunities and shortfalls of technology, and to clear up some popular theories that may be losing ground. As we have said many times, technology is no panacea or silver bullet; it is simply a tool to augment our intelligence, make our solutions smarter, and help us measure what matters in a way that matters. Remember, machines and applications have no goals and no agendas. They have no values or ethics. They are task-based, operating on the programming we input and the functions we have established. The bigger they get, the more hidden their failures become. There is a fundamental distinction between thinking and knowing. And most critically, they are programmed and coded by human beings, with all their strengths and inherent biases, and can therefore be as faulty and error-prone as we are.

That said, there are some new technology threads in the popular conversation that have caught the imagination and attention of forward-thinking leaders and their organizations. We’ve curated three big ideas that we think are worth paying attention to, along with a few predictions of our own about what these technologies can do for us.


OpenAI

This open-source nonprofit collective has been working on developing natural language systems that have relevance to our organizations and our social institutions. OpenAI describes its mission as ensuring that “artificial general intelligence (AGI), by which it means highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.” Okay, that’s ambitious and optimistic. It adds, supporting the altruistic open-source credo, “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

Here’s the kicker: “Our first-of-its-kind API can be applied to any language task and serves millions of production requests each day.” How do they do this? We are sharing two of their technologies that we believe are going to change our lives, professionally and personally.


GPT-3 (Generative Pre-trained Transformer 3) is technically an “autoregressive language model that uses deep learning to produce human-like text. Given an initial prompt, it will continue to produce text that matches the prompt.” (Wikipedia) Another description: it performs a wide variety of natural language tasks and can even translate natural language to code. That’s a mouthful for non-tech heads, but its implications for organizations are profound.

A simple description of the way it works from TechTarget is “a neural network machine learning model trained using data to generate any type of text. It requires a small amount of input text to generate large volumes of relevant and sophisticated machine-generated text. GPT-3’s deep learning neural network is a model with over 175 billion machine learning parameters. To put things into scale, the largest trained language model before GPT-3 was Microsoft’s Turing NLG model, which had 10 billion parameters. GPT-3 is better than any prior model for producing text that is convincing enough to seem like a human could have written it.”
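The “autoregressive” idea described above is simpler than it sounds: feed in a prompt, predict the next word, append it, and repeat. A minimal Python sketch of that loop follows. The tiny hand-built bigram table and greedy word choice are purely illustrative assumptions; GPT-3 replaces the lookup table with a 175-billion-parameter neural network trained on internet-scale text.

```python
# Toy illustration of autoregressive text generation: at each step,
# predict the next word from the text so far, append it, and repeat.
# This is the same loop GPT-3 runs, just with a hypothetical hand-built
# bigram table standing in for a 175-billion-parameter neural network.

BIGRAMS = {
    "the": ["model"],
    "model": ["writes"],
    "writes": ["the"],
    "prompt": ["continues"],
    "continues": ["the"],
}

def generate(prompt: str, max_new_words: int = 5) -> str:
    words = prompt.lower().split()
    for _ in range(max_new_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:           # no known continuation: stop early
            break
        words.append(candidates[0])  # greedy: always take the top choice
    return " ".join(words)

print(generate("the model"))  # continues the prompt word by word
```

A real model assigns probabilities to every possible next token and samples from them, which is why the same prompt can yield different completions each time; the loop itself is unchanged.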

And that is the linchpin of our fascination with GPT-3. We will be able to generate marketing messaging, content, advertising, blogs, chatbots, customer service systems, voice commerce, and voice assistants (like Siri and Alexa), just to name a few, that will evoke the same syntax, cadence, language, and empathy as a human being. TechTarget adds, “GPT-3 has been used to create articles, poetry, stories, news reports and dialogue using just a small amount of input text that can be used to produce large amounts of quality copy. It is also being used for automated conversational tasks, responding to any text that a person types into the computer with a new piece of text appropriate to the context. GPT-3 can create anything with a text structure, and not just human language text. It can also automatically generate text summarizations and even programming code.”

That is either something to celebrate or be terrified by. When do we know we are talking to a human? Reading an article written by a human? How can we have confidence in the answers to our questions? Can GPT-3 really understand and respond to the nuances of our words and respond appropriately?

It surely is a powerful tool that every organization may one day use to leverage its augmented intelligence to produce services, systems, and products. Calls for urgency will become strong and loud from the innovators, both on staff and as consultants. They will advocate this technology as a solution to many problems: a way to create new efficiencies, a tool to speed customer response, and a means to fill task gaps in a shrinking workforce. Anticipating this augmented intelligence system, what new knowledge and skills will an organization need to ensure its effectiveness in terms of its business model? Will the promise become reality?

Future iterations will only get better and more powerful, and we will surely become highly immersed in and deeply dependent on these technologies. But let’s offer a reality check. GPT-3 is programmed by human beings, and we know that information generated by humans is often a victim of conscious and unconscious bias. GPT-3 “suffers from a wide range of machine learning bias. Since the model was trained on internet text, it exhibits many of the biases that humans exhibit in their online text. For example, two researchers at the Middlebury Institute of International Studies found that GPT-3 is particularly adept at generating radical text such as discourses that imitate conspiracy theorists and white supremacists. This presents an opportunity for radical groups to automate their hate speech. In addition, the quality of the generated text is high enough that people have started to get a bit worried about its use, concerned that GPT-3 will be used to create fake news articles,” reports TechTarget.

In our book, The Truth About Transformation, we stress the importance of healthy skepticism and critical thinking to unpack bias in all aspects of organizational systems, strategies, products, and services. Taking a momentary pause to consider and evaluate what we are experiencing, reading, viewing, or consuming isn’t typical. By nature, we are trusting; we want to believe what is put in front of us and do not expend the energy to question it. GPT-3, with all its ability to create wonder and awe, also has the ability, if it is not managed, to perpetuate misinformation, communicate half-truths, or present solutions, calculations, or findings that may not account for all necessary factors and variables. We as humans have the responsibility to ensure that GPT-3 and its next-generation iterations are used, as OpenAI intends, to benefit humanity and be leveraged as a tool by humanity, not to undermine its foundations and beliefs. That promise will surely be much more challenging in reality than it sounds.

Read the Full Article>

Explore the Truth about Transformation

The Truth about Transformation is a playbook to help you navigate all the landmines hidden in a turbulent marketplace. Use it with your teams to deconstruct your business model and principles, and to anticipate the future, not catch up to it. And call us to work with you to dive into what’s working and not, and help you stay on track to be relevant, valuable, and viable for your stakeholders, customers, partners, and workforce alike.

Get the Book>



Kevin Novak

4X Webby winner, CEO and Chief Strategy Officer @2040 Digital, IADAS Member, Speaker, Author, Science Nut