The world’s obsession with “AI” and “Machine Learning” has reached a fever pitch. “When will we have AGI (Artificial General Intelligence)?” people ask – referring to the notion that there will soon be a super-AI that can do essentially anything a human can (as opposed to the specialized tasks for which AI is used today).
And while we have written much about what AI can do for you, it is equally (or perhaps even more) important to understand that there are some things that AI absolutely cannot and should not do for you. Consider the below an opportunity for your focus and/or career growth, or a warning about your use of AI, or perhaps both. But whatever you do, do not outsource this type of work to an AI and expect good results; to do so may endanger your job, your company, or perhaps both!
Note – below we use “AI” (artificial intelligence) and “LLM” (large language model, such as ChatGPT, Perplexity, Claude, etc.) interchangeably.
AI cannot:
- Build The Right Culture or Fix Culture Issues
Until the day when all humans are replaced by robots, the core resources of any business are its employees. Employees – humans – have feelings; they require motivation (beyond just compensation) and need to find meaning in their work. They also rely on social norms to guide how they work. A company’s culture is one of the fundamental attributes that determines whether it can operate effectively, retain talent, and generally be a good place to work – or not.
Other than generalized information about “company culture” in its training data, an LLM will not be able to lead meaningful culture development at any specific company, if for no other reason than simply that it cannot access all of the relevant inputs to do so. Making culture-related decisions requires observing and understanding people – often in non-verbal, behavioral ways. Practically speaking, an AI tool cannot do this holistically. It can analyze what is written in emails, documents, etc. and what is said in meetings and provide insights. But it cannot make strategic decisions about how company culture should be formed.
- Exhibit empathy
As an extension of #1, AI fundamentally cannot exhibit empathy, because it does not feel. It can use pattern matching to provide generalized responses that mimic what someone with empathy might say, but it will not exhibit true empathy – listening deeply and adapting one’s behavior based on what a person is saying. Similarly, AI has no inherent morals or social code; it adheres only to probabilistic patterns based on its training data – and these patterns are not consistent across all humans, or even across all cultures.
- Solve underlying structural, strategic or systemic problems
In many cases, companies look to AI to solve problems before their root cause is truly understood. A company may ‘hire’ an AI tool to replace its knowledge base, for example, because ‘it is really hard to find what we want to know in the current knowledge base’. But fundamentally, the issue may not be the knowledge base; it may be that there is not adequate information being entered by employees in the first place (in other words, maybe the knowledge base doesn’t have enough knowledge!).
Similarly, if the training material for an LLM is flawed, it will then faithfully reproduce this incorrect information. The entry of incorrect data by employees, customers (or the internet) will cause incorrect results.
- Make Strategic or Ethical Decisions For You
AI is a fantastic tool for many tasks and processes, given the right inputs (model and prompt). It can vastly outpace a human at certain tasks and should be used for this. In this way, it can offer fantastic decision support – providing supplemental analysis that helps a human make an important decision. It does not, however, have the same vantage point in your company that you and/or the company executives do, both in the information available to it and in the timeliness of that information; ChatGPT’s training data cutoff, for example, is June of 2024 (as of the writing of this article). This means that any new product introductions, technologies, events or trends are not considered in the responses it will provide to you! Similarly, general LLMs should not be used to make ethical decisions for you – though they can certainly provide information about what is commonplace out in the world.
For these reasons, AI should not replace you or your executive team in making critical decisions about the company’s strategy, roadmap, personnel or purpose.
Conclusion
The points in this article may shift over time as AI develops – but we expect the fundamentals to hold true for quite a while. AI is a tool, and should be used as tools are: to carry out a given function or set of functions with increased efficiency. Just as screwdrivers shouldn’t be used to hammer in nails (though they often were, much to the chagrin of my high school Shop teacher), it is important for technologists today to have a good grasp of what AI can and cannot do.
Interested in reading more about AI? Click here for previous BPMA articles on this topic.
About the Author:
Adam Shulman is a Product Manager with extensive experience in software/hardware systems and a passion for music and audio technology. He currently leads Product Management at Bose Professional and has been a member of the BPMA since 2016.