Is AI acculturation training a good idea for executives?
Why acculturation training is misleading executives about generative AI
Personal Experience with acculturation training
Last fall, I had the opportunity to develop and deliver an acculturation training on Generative AI for a large company in France.
The participants gave me very good feedback. However, the organizers of the training were not fully satisfied. They were expecting a more "hands-on" training that included prompt engineering workshops and direct applications of GPT for the executives in their day-to-day roles.
It didn’t feel 100% right at the time but I didn’t really know why. So I paused my acculturation training and moved on with consulting and product innovation.
I confess I am not a big fan of prompt engineering. I even think the need for prompt engineering is transient and should disappear as models get better and better, but that is a different story.
The training I designed was more of an introduction to AI and machine learning. The goal was to help executives understand how machine learning differs from traditional software, how models are built, and how data is selected and prepared. For generative AI specifically, we looked at where it makes sense to use and where it is a very risky technology. I also stressed hallucinations and bias.
It was clearly not a hands-on guide on how to use ChatGPT and improve prompting ability!
Executives using GPT
Through my consulting work, I meet executives who are interested in integrating AI, particularly generative AI, into their businesses.
The discussion often begins with me asking about their previous exposure to this technology, and a common response is that they use GPT every so often and have participated in an "acculturation training." These trainings usually revolve around using GPT and advanced prompting.
After discussing with these executives (easily 80-100 at director level and above last year), I can tell that these "hands-on" trainings are not really helpful and can even be counterproductive.
Here are the reasons why:
The Pitfalls of "Hands-On" GPT Training
The first issue I see is that these trainings usually don't go back to the core of machine learning, which is probabilistic by nature and therefore only correct to a certain extent. The consequence is that generative AI can't really be used in a process that is mission critical or requires predictable, 100% trustworthy results.
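To make the probabilistic point concrete, here is a minimal, purely illustrative sketch in Python (the dataset is synthetic and the numbers are not from any real project): even a well-trained classifier only returns probabilities, and its accuracy is never a guarantee on any single case.

```python
# Minimal, illustrative sketch: a classifier outputs probabilities, not certainties.
# The dataset is synthetic; the point is that predictions are never 100% reliable.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
probas = model.predict_proba(X_test)[:, 1]   # e.g. [0.93, 0.41, 0.78, ...]

# Even "confident" predictions are only correct to a certain extent:
accuracy = model.score(X_test, y_test)       # typically well below 1.0
print(f"Accuracy: {accuracy:.2f} -- good, but never a guarantee on any single case")
```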
The applicability of generative AI is also very different for people who hold creative jobs, where a hallucination is an opportunity, versus people who hold production jobs, where a hallucination is a failure.
Here is a great video from Allie Miller (45 min) summarizing the difference between using GPT in a creative role versus a production role.
https://maven.com/p/cc6f1b/how-to-use-ai-to-10x-your-productivity
The second issue is the overconfidence in what an LLM can do.
Executives have usually used LLMs only for trivial or content-creation use cases. In the real world, a realistic solution often combines different techniques and models to achieve accurate predictions.
As an example, I just reviewed a platform that includes: generative graph and text transformers to handle complex data and fill in missing information, XGBoost decision trees to predict outcomes and combine intermediate predictions, Named Entity Recognition (NER) to extract important entities from text, Bayesian optimization to fine-tune model settings, Ensemble learning to improve the prediction performance, and Shapley Additive Explanations (SHAP) to explain how each feature impacts predictions, making the model more transparent and trustworthy.
All this to produce a predictive model that is best of breed (~80% accuracy) but still not usable if the outcome is mission critical.
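For readers who want to see what the tree-based part of such a pipeline looks like in code, here is a minimal sketch of how XGBoost predictions and SHAP explanations are typically wired together; the data and parameters are hypothetical, not those of the platform I reviewed.

```python
# Illustrative sketch only: combining an XGBoost predictor with SHAP explanations.
# Data, features, and hyperparameters here are hypothetical.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2_000, n_features=15, random_state=0)

# Gradient-boosted decision trees to combine intermediate features into a prediction
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# SHAP values explain how each feature pushes an individual prediction up or down
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))
```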
The third issue pertains to semantics and the way LLMs model "semantics". LLMs generate, summarize, and categorize using "positional semantics", which relies on the principle that sequences of words frequently seen in the same contexts are similar.
This approach is the reason why early models struggled with math: in the realm of positional semantics, the vector representation of "seven" is very close to that of "nine", so in an LLM's eyes, numbers are all mixed up.
This specific issue has been mitigated with external tools (analytics packages), better training data, and improved architectures. However, similar challenges persist in many domains and are not easily resolved. For instance, in oncology, the vector representation of one biomarker is often very close to that of another. Unless the model is fine-tuned with domain-specific data, or unless it is provided with precisely defined context and ontologies, an LLM will mix things up and generate incorrect answers.
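A quick way to see "positional semantics" at work is to compare embeddings directly. The sketch below uses an off-the-shelf sentence-embedding model purely as an example; the exact scores will vary, but "seven" and "nine" end up far more similar to each other than to an unrelated word.

```python
# Illustrative sketch: measuring how "close" two terms are in embedding space.
# The model name is just a common off-the-shelf example; similarity values will vary.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

pairs = [("seven", "nine"), ("seven", "banana")]
for a, b in pairs:
    vec_a, vec_b = model.encode([a, b])
    sim = cosine_similarity([vec_a], [vec_b])[0, 0]
    print(f"similarity({a!r}, {b!r}) = {sim:.2f}")

# In general-purpose embeddings, "seven" and "nine" score much more similar to each
# other than to unrelated words -- which is why a model relying only on this signal
# tends to mix up numbers (or closely named biomarkers).
```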
The fourth issue involves broader concerns that must be addressed to ensure responsible and effective deployment of generative AI: the domain of ethical AI.
Generative AI models perpetuate or amplify (historical) biases present in their training data, necessitating robust bias detection and mitigation strategies, including the generation of synthetic data. It may not seem critical, but if, for example, marketing material keeps assigning people from minorities to the same roles, this could be very damaging to the brand in the mid term.
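As a purely hypothetical illustration of what a bias-detection step might do, the sketch below tallies which roles a batch of generated marketing copy assigns to which groups and flags obvious skews; the data, group labels, and threshold are all made up.

```python
# Hypothetical sketch of a bias-detection check on generated content:
# count which roles the generator assigns to which group and flag skew.
from collections import Counter

# (group, role) pairs extracted from a batch of generated marketing material
# -- purely illustrative data.
assignments = [
    ("group_a", "executive"), ("group_a", "engineer"), ("group_a", "executive"),
    ("group_b", "assistant"), ("group_b", "assistant"), ("group_b", "engineer"),
]

counts = Counter(assignments)
for group in {g for g, _ in assignments}:
    roles = {role: n for (g, role), n in counts.items() if g == group}
    total = sum(roles.values())
    shares = {role: n / total for role, n in roles.items()}
    # Flag any role that dominates a group's assignments (threshold is arbitrary here)
    skewed = [role for role, share in shares.items() if share > 0.6]
    print(group, shares, "-> review needed" if skewed else "-> looks balanced")
```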
The use of generative AI also often involves processing large amounts of data, raising significant data privacy and security concerns. Emphasizing data anonymization, secure data-handling practices, and compliance with data protection regulations like GDPR and CCPA is crucial. Training programs should therefore touch on these regulations and on best practices for maintaining compliance.
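Here is a minimal sketch of the anonymization idea, assuming simple regex-based redaction before text leaves the company; a real deployment would rely on a proper PII-detection service rather than two hand-written patterns.

```python
# Minimal, illustrative sketch of data anonymization before text is sent to an LLM.
# The regexes only catch obvious patterns and the placeholder format is arbitrary.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d .-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious PII with typed placeholders before the text leaves the company."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +33 6 12 34 56 78."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```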
Last but not least, these technologies are often presented as a new kind of "software".
When presented with generative AI, people think: "How can I improve my software or my processes with these capabilities?" This, in my opinion, limits the imagination and hides what is really possible.
I prefer looking at it through the analogy "let's say you can hire a junior MBA with an infinite amount of time", someone who can look at your data and problems and extend them with anything available on the internet. What would they do?
When you look at it this way, many new use cases come to light: in BI, you can read the web in real time and create metadata about... anything. In software, you can radically change the UX and even disintermediate most application software. You don't have to use the application anymore; the AI assistant is doing it for you.
So I prefer the junior MBA lens. I even wonder whether Gen AI should be managed by … HR and not IT. Maybe not in 2024, but I bet this will be a key question in the years to come.
Moving Forward with AI Training
Given these insights, it's clear that AI training needs a balanced approach. While practical, hands-on experience is valuable, it is clearly not enough and can't equip executives to make decisions about generative AI investments.
Executives need a strong foundational understanding of AI principles and limitations.
Executives must learn to discern where AI can add value, where traditional methods might still be more effective, and where it is a big no-go.
If you're looking for comprehensive training that goes beyond hands-on experience and equips you to make strategic decisions with AI, consider reaching out to our team.