5 Must-Ask Questions for AI Enterprise Strategy
By Julian Waters-Lynch
10-Second Summary
As the AI revolution accelerates, businesses must develop strategies that resonate with their core competencies, value propositions and competitive advantage.
To guide leaders in pinpointing opportunities tailored to their enterprise, we introduce the AI Enterprise Tech Stack, complemented by five strategic questions. The cornerstone of a successful AI strategy lies in an honest assessment of core strengths, judicious evaluation of trade-offs, and alignment with a company’s distinct competitive edge.
The AI Enterprise Tech Stack
To understand where opportunities might lie for your enterprise, it’s crucial to have a basic mental model of the underlying technology stack that enables customers to interact with AI. This will allow businesses to discern where and how they can best leverage AI in ways that strengthen their core competency and distinct advantages. Here’s a simple model of the various components of the AI Enterprise Tech Stack (1), from the bedrock of computational power to the interfaces that customers and employees might interact with. We’ll describe each layer briefly before we unpack the consequences for business strategy.
Compute Infrastructure Layer:
This is the foundational hardware that powers everything. At the moment it is dominated by a rush on Graphics Processing Unit (GPU) chips, which has made NVIDIA, the leading producer of these chips, the sixth most valuable company in the world. However, new specialised hardware technologies will likely replace these in the future.
Cloud Infrastructure Layer:
These are the data centres and services where data is stored, accessed, and scaled. This layer is largely dominated by tech giants like Amazon Web Services, Microsoft Azure and Google Cloud.
Foundational Training Data Layer:
These are the vast data sets used to build the current generation of Large Language Models. Companies like OpenAI and Google’s DeepMind have scraped much of the data openly available on the internet to train their models.
Foundational Large Language Model (LLM) Layer:
These are the foundational, general-purpose language models that will power a variety of AI applications. The major examples include OpenAI’s GPT, Google’s PaLM and Anthropic’s Claude, but there will likely be many more in the future. These models are trained on vast amounts of text data, enabling them to respond flexibly to inputs and generate human-like text in response.
Proprietary Data Layer:
This is where specific non-publicly available datasets can be collected or curated by companies, which can be used to train or fine-tune AI models for specific needs. Think how Netflix uses viewer data to improve its recommendation algorithms or how Tesla collects driving data to refine its self-driving algorithms. Many companies will have proprietary data that could be used to fine-tune existing foundation models.
Fine-Tuned Model Layer:
Here foundational LLMs can be tailored for specific industries or tasks. If you imagine a foundational LLM as akin to a super smart general graduate student, fine-tuning is like training them for a year to specialise in job- or industry-specific knowledge. Whatever the field - law, medicine, engineering etc - their general knowledge can be refined with specialised expertise, so they understand the jargon and adhere to the norms and tone of a profession or role. Companies such as Hugging Face (2) and Taylor AI (3) are starting to specialise in helping enterprises fine-tune models to fit their specific needs.
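To make this concrete, fine-tuning usually starts with preparing a company’s in-house examples as prompt–response pairs. Here is a minimal sketch of converting internal Q&A records into the chat-style JSONL that several fine-tuning APIs expect (the field names follow OpenAI’s published chat format; the records themselves are hypothetical, and your provider’s schema may differ):

```python
import json

def to_finetune_jsonl(records, system_prompt):
    """Convert (question, answer) pairs into chat-format JSONL,
    one training example per line."""
    lines = []
    for question, answer in records:
        example = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)

# Hypothetical internal records a firm might fine-tune on
records = [
    ("What is our standard NDA term?", "Two years from the effective date."),
    ("Who approves vendor contracts?", "The procurement lead, per policy P-12."),
]
jsonl = to_finetune_jsonl(records, "You are our firm's contracts assistant.")
```

The resulting file is what you would upload to a fine-tuning service; the heavy lifting of actually training the model happens on the provider’s side.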
Enterprise Integration Layer:
This is where AI is interwoven with existing enterprise systems. This means integrating APIs (Application Programming Interfaces) and SDKs (Software Development Kits), and chaining together plugins and tools that enable LLMs to interact with real-time or private data outside their original training set.
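A common integration pattern is “tool calling”: the model is told which enterprise functions exist, it replies with a structured request to call one, and the integration layer executes that call against internal systems. The sketch below illustrates the pattern only; the tool name and the shape of the model’s reply are assumptions for illustration, not any vendor’s actual API:

```python
import json

def lookup_order_status(order_id: str) -> str:
    # In practice this would query an internal order-management system.
    return f"Order {order_id}: shipped"

# Registry mapping tool names the model may request to real enterprise functions
TOOLS = {"lookup_order_status": lookup_order_status}

def dispatch(model_reply: str) -> str:
    """Parse a structured tool request from the model and execute it."""
    request = json.loads(model_reply)
    fn = TOOLS[request["tool"]]
    return fn(**request["arguments"])

# Simulated model output requesting a tool call
reply = '{"tool": "lookup_order_status", "arguments": {"order_id": "A-1001"}}'
result = dispatch(reply)  # the result is fed back to the model as context
```

The value of this layer is exactly this plumbing: deciding which internal systems the model may touch, validating its requests, and returning the results as fresh context.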
Prompt Engineering Layer:
This layer involves crafting and curating sets of prompts that enable users to get the highest quality outputs from LLMs. It has been compared to programming in prose rather than code. Well-crafted prompts tailor the AI’s user interactions, refining how the AI understands goals and generates high quality outputs. It’s a rapidly evolving field, and many companies such as PromptLayer (4) are building tools to make this process more systematised and scalable for both individuals and enterprises.
User Application Layer:
This layer is the final touchpoint where end-users engage with AI - be it through chatbots, virtual assistants, or other applications. Today’s norm is the chat window where users type queries. But remember the early days of computing, when text commands were the primary interaction method. That method was transformed by the desktop metaphor and Graphical User Interface (GUI), which changed how most of us interact with computers. The drag-and-drop feature brought an unprecedented level of intuitiveness to our interactions by adapting the familiar action of moving objects on a desk. Just as these breakthroughs reshaped computer interactions, we expect parallel leaps in AI user interfaces. Think of a voice assistant as easy to access as Siri but with the smarts of ChatGPT, capable of interacting with all your apps, scheduling appointments, replying to emails, and more. I suspect this is the vision Apple has in mind with its LLM project, dubbed ‘AppleGPT’.
The AI Opportunity Spectrum: Picks and Shovels or Coca-Cola?
When a powerful new technology platform arrives - whether railroads, the internet or generative AI - two questions arise, spanning opposite ends of the opportunity spectrum.
The first is infrastructure oriented. As the saying goes, ‘in a gold rush, invest in picks and shovels’. Given the anticipated demand for this technology, what essential inputs, infrastructure or tools does it require that we have an advantage in supplying or acquiring? The logic here is simple: as the technology ignites a rush for opportunities, regardless of which businesses thrive or falter, all will need these foundational elements. For railroads, these were inputs like steel and coal. For the internet and web, the essentials included servers, domain registration services and broadband infrastructure. When it comes to AI, the critical underlying infrastructure is computation, cloud storage, and organic (human-generated) data, pivotal for the foundation models themselves.
While few companies can challenge behemoths like Nvidia in the GPU chip domain, or AWS and Microsoft in cloud infrastructure, for some there exists a significant opportunity in the realm of data. Foundational LLMs constantly seek fresh, human-generated ‘organic’ data to evolve. Ironically, due to the rise of LLMs themselves, the internet is increasingly saturated with ‘synthetic’, AI-produced content (5). Training AIs on synthetic data can degrade their performance over time, making genuine sources of organic data invaluable for improving future models (6).
There have been several different approaches to the pressing demand for fresh organic data. Reddit, a rich reservoir of organic content, recently restricted its API to prevent free data scraping for LLM training; if you want access to its data now, you need to pay (7). But these approaches can also backfire: Zoom recently had to retract an update to its terms and conditions that would have allowed it to record user conversations for AI training purposes, after an outcry from users (8).
Some companies possess proprietary datasets that mainstream AI models have yet to encounter. In domains like finance, medicine or law, these specific data troves can catalyse new models that eclipse their generic peers, designed for multi-purpose uses. A prime example is BloombergGPT, forged from Bloomberg’s extensive archives of financial data. We will likely see more bespoke models crafted for major fields like medicine, law and perhaps more niche sectors. But for most firms, the more pragmatic approach will likely be to fine-tune existing models using their specialised data, enhancing precision and domain-specific relevance, rather than building a new model from scratch.
At the other end of the opportunity spectrum lies the convergence of knowledge, expertise, and innovation. Take, for example, the symbiotic relationship between the advent of refrigeration and the rise of Coca-Cola. At their core, these two innovations may seem worlds apart, yet refrigeration provided the pivotal infrastructure that paved the way for Coca-Cola’s meteoric ascent as a global business titan. The concept here is simple: groundbreaking technologies lay the groundwork for a myriad of novel business horizons. Companies tapping into these prospects don’t necessarily need to delve deep into the technical minutiae or produce their own refrigeration units; rather, their advantage lies in recognising the technology’s potential and building new value propositions upon it. Often, the new opportunities lie beyond the imagination of the technology’s originators. While the foundational aspects of AI are becoming better-defined, the innovative ventures built atop them remain wide open, a vast expanse of possibilities yet to be explored.
The AI Strategy Blueprint: Five Pivotal Questions
There are very few businesses that can afford to ignore the current AI revolution. The task for business leaders is to work out where AI fits into their enterprise and strategy. The decision is not just about where to place their bets, but also how to weave a strategic narrative that aligns with their organisation’s core strengths and future vision. The following questions can help guide leaders in mapping out their AI journey.
1. The Role of AI in Enterprise
What is the potential for AI to become a foundational element of our business model, or do we see it as a supplementary feature?
As AI transforms industries, do we envision ourselves more as infrastructure or data providers, as innovators creating unique applications atop this technology, or simply as a business seeking to automate and streamline internal operational tasks for efficiency gains through AI?
2. Data and Infrastructure Positioning
Given our proprietary data, to what extent can this be a competitive advantage in the AI landscape?
How can we ensure our data infrastructure is robust, secure, and scalable to support our AI endeavours?
3. Leveraging Expertise and Knowledge
How might our unique domain expertise be harnessed to benefit our current or potential new customers through AI?
In what ways can we embed our accumulated tacit knowledge into AI prompts or models to ensure they resonate with our organisational insights?
4. Monetisation and Business Impact
If we integrate AI-driven products or services, how do we plan to monetise them: directly through sales/licensing, or indirectly via enhanced user experience and retention?
Do we aim to use AI predominantly to replace human roles, or to aid and amplify human capabilities?
5. Future Vision and Organisational Adaptability
How prepared is our organisation to embrace the changes AI might introduce in terms of job roles and decision-making processes?
With the rapid evolution of AI, where do we project our organisation to be in the next 5-10 years in terms of AI integration and innovation?
How do we foresee AI enhancing, altering, or negating our organisation’s core competencies, and how might this impact our long-term strategy and market position?
Enterprise AI Strategy: Mapping the Journey
Strategy has always been about carving out a distinct competitive edge to win. Central to this is the recognition of a company’s defining strengths amidst an ever-shifting landscape. As emerging technologies like AI come to the fore, the terrain of business is transformed by new products, changing customer preferences, and the manoeuvres of competitors. This shifting terrain constantly throws up new threats and fresh opportunities.
These strategic questions are intended as an initial guide to start developing an enterprise AI strategy, grounded in a company’s distinct value proposition and competitive advantage.
Leaders must confront the strategic questions AI presents directly. For many businesses this will mean building new capabilities, rethinking roles, hiring for emerging skills, and sometimes redefining value propositions. Crafting an effective strategy means recognising trade-offs, and embracing the difficult task of making choices, especially about what not to pursue. True leadership in the AI age will lie in discerning which paths align with a company’s core competency, and having the discipline to not get led astray by shiny new distractions.
REFERENCES
(1) Special thanks to Ben Le Ralph from Handmade AI for his collaborative insights and refinements that enriched the development of this model.
(2) Hugging Face – The AI community building the future.
(4) PromptLayer - The first platform built for prompt engineers
(6) [2305.17493] The Curse of Recursion: Training on Generated Data Makes Models Forget (arxiv.org)
(7) Reddit will start charging for API access to rebuff LLMs • The Register