Since the introduction and massive success of ChatGPT, an increasing number of companies have embarked on creating their own Corporate GPT, either fine-tuning existing models or using retrieval-augmented generation (RAG) to enrich foundation models with their proprietary content.
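To make the RAG idea concrete, here is a minimal, self-contained sketch of the pattern: rank proprietary documents against the user's question, then prepend the best matches to the prompt sent to a foundation model. Everything here is invented for illustration (the toy corpus, the bag-of-words similarity); a real deployment would use embeddings and a vector store.

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for a company's proprietary content.
DOCS = [
    "Refund policy: customers may return products within 30 days.",
    "Brand voice: address customers by first name and keep the tone warm.",
    "Expense policy: travel must be approved by a manager in advance.",
]

def _vector(text):
    # Bag-of-words term frequencies (lowercased) -- a stand-in for embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    qv = _vector(query)
    ranked = sorted(docs, key=lambda d: _cosine(qv, _vector(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Enrich the model's prompt with retrieved proprietary context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?", DOCS))
```

The foundation model itself is untouched; only the prompt changes, which is why RAG is usually cheaper and easier to keep current than fine-tuning.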
The rise of Corporate GPTs
The trend started in March 2023 with BloombergGPT (https://arxiv.org/abs/2303.17564), a large model trained on financial data. Since then, many companies have experimented with GPT (or similar) technology, crafting their own version of MyCorporateGPT. A friend of mine even mentioned a PastaGPT developed by a food company!
In my consultations with brands in the US and in France, it is usually one of the first requirements to come up:
“We want our employees to use our own version of ChatGPT.”
Such projects are excellent for experimentation. They are also a good fit for specific use cases.
As an example, a corporation may want its public-facing communications staff to reflect the values, wording, and style it has spent time establishing, documenting, and enforcing.
However, the shift towards universal adoption of "Corporate GPT" within organizations comes with its set of challenges:
Historical Data Bias: Relying on corporate archives to train these models can produce outputs that mirror past business practices, essentially echoing the mindset of employees and management from years past.
Echo Chamber & Lack of Diversity: A model trained solely on company data may cement existing viewpoints, stifling innovation. This is particularly risky if the company overlooks emerging trends or critical technologies, as the model would perpetuate the "business as usual" approach over fresh, innovative solutions.
Impact on Some Employees' Roles: For employees tasked with market research, technology scouting, or driving product and strategy innovation, relying on an internal model could hinder their ability to incorporate external insights and learnings.
Misalignment with Company Culture: The approach to knowledge sharing and access within Corporate GPTs must resonate with the company's cultural values. Providing too little information leads to employees not using the tool; exposing them to information that management prefers to keep confidential raises all sorts of trust issues.
Before diving into the development of a Corporate GPT, companies should closely assess the needs and roles of their employees.
For instance, customer service representatives in call centers could benefit from a GPT integrated with internal product documentation and past interactions, enhancing their ability to provide better responses. Similarly, for employees engaged in technology scouting, their GPT should be a gateway to external publications, expert opinions, and technological discourse in their field.
One size does not fit all.
The reality is that different jobs will require different “assistants” and therefore different models and different datasets to personalize the model to a task, a role and a function.
This will create a nightmare for companies that still have a disorganized knowledge management system and poorly documented organization, processes, and guidelines.
I would argue that each and every role in a company requires its own model. Sure, there is a common body of knowledge that benefits everyone, starting with org charts and employee communications… but to really benefit from a useful AI assistant, specific needs arise:
An HR professional would need access to company HR policies, labor laws and regulations, industry benefits, compensation and salary structures, and competitive information on hiring and pay…
A procurement employee would need supplier management strategies, access to the ERP, and a history of supplier evaluations, along with import/export regulations, vendor contracts, and purchase history…
A marketer would need market research results, ad performance data, industry-specific trends, and ideally ;-) insights from influencers and key opinion leaders in the domain (a little self-promotion there: www.ecairn.com)…
A customer support representative would need access to product and service help content and FAQs, call logs, and the CRM…
For each role, a blend of corporate knowledge common to all employees and knowledge specific to the role, or even to the task, would be required.
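The blend described above can be sketched as a simple data structure: a shared corpus every assistant draws on, plus role-specific sources layered on top. The role names and source lists below are invented for illustration, not a real API.

```python
# Knowledge common to all employees, per the article's examples.
COMMON_KNOWLEDGE = ["org chart", "employee communications"]

# Hypothetical role-specific sources, mirroring the roles listed above.
ROLE_KNOWLEDGE = {
    "hr": ["HR policies", "labor regulations", "salary structures"],
    "procurement": ["supplier evaluations", "vendor contracts", "purchase history"],
    "marketing": ["market research", "ad performance data", "influencer insights"],
    "support": ["product FAQs", "call logs", "CRM records"],
}

def knowledge_base(role):
    """Blend the shared corporate corpus with a role's specific sources."""
    return COMMON_KNOWLEDGE + ROLE_KNOWLEDGE.get(role, [])

print(knowledge_base("hr"))
```

In practice each entry would point at a document collection or index rather than a string, but the shape of the problem is the same: one common layer, many role-specific layers.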
This brings a strategic challenge relative to who has access to what information within a corporation.
Beyond initial pilots, deploying assistants within an enterprise will require a program, policies, an organization, and a technical infrastructure.
Addressing knowledge access
Determining who has access to what information within a corporation is strategic.
This question is a prerequisite for any Corporate GPT / Enterprise GPT initiative, and the answer has a profound impact on company culture.
Some companies may opt for a compartmentalized knowledge structure; others may be more open.
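Whichever stance a company takes, it ends up encoded somewhere in the retrieval layer: documents the employee is not cleared for must be filtered out before they ever reach the model's prompt. A minimal sketch, with invented documents and clearance levels:

```python
# Hypothetical documents tagged with a clearance level (0 = public internally).
DOCUMENTS = [
    {"title": "Cafeteria menu", "level": 0},
    {"title": "Sales playbook", "level": 1},
    {"title": "M&A pipeline", "level": 2},
]

def accessible(docs, clearance):
    """Keep only documents at or below the employee's clearance level.

    A compartmentalized company grants low clearances broadly;
    a more open one grants high clearances broadly.
    """
    return [d["title"] for d in docs if d["level"] <= clearance]

print(accessible(DOCUMENTS, 1))
```

Real systems would use per-group ACLs inherited from the source repositories rather than a single numeric level, but the filtering step sits in the same place: between retrieval and prompt construction.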
There may even be a trade-off between the company and its employees. I remember deploying a knowledge management platform in a large corporation's sales department. We struggled to convince sales reps to share their prospect insights. Why would a seasoned salesperson share their valuable contacts, which directly contribute to their success and compensation? The exact same questions arise when creating a generative AI assistant.
Hence, although a Corporate GPT makes a lot of sense on the surface, initiating projects with well-defined, bounded tasks or functions seems a more pragmatic approach… except for companies like Bloomberg, which have assembled a unique body of knowledge that can be monetized directly through a model.