... Data is the Ammunition and Cloud Computing is the Bedrock.
Big Tech has been waiting on a platform shift for over a decade now, as Mobile, Social and Cloud have settled in. What's NXT? Of course it has to be huge, with great upside potential .... and even bigger if it can be layered over the legacy of Mobile, Social and Cloud, leading to wild and wider adoption.
There are many indications that ARTIFICIAL INTELLIGENCE (AI), coupled with the explosive growth in affordable computing power of cloud, is the next disruption on the horizon.
AI presents a momentous opportunity for those who employ it intelligently, and a profound vulnerability for those who get bogged down in the wrong details.
When most people hear the term Artificial Intelligence, they envision fully autonomous systems with superhuman capabilities: self-driving cars, or the classic science-fiction villain the moment it crosses the line into sapience.
A new race is starting with this new platform technology ... From a technology perspective, the most 'profitable' & 'large' software business is SEARCH ... And AI opens a big opportunity for the mega-cap tech players who want to innovate and ride this wave to grab a piece of the Search business, which has so far been dominated by Google by a wide, wide margin.
GOOGLE makes more money on Windows than all of Microsoft ... So it's very natural for Microsoft to go after this business stream with the launch of Bing + ChatGPT (Sydney). Microsoft has little to lose, but Google has everything to lose, as it has to defend its entire market share.
Over the coming years, AI will change Search forever.
As Microsoft CEO Satya Nadella outlined —
"...that with our Innovation they will definitely want to come out and show that they can dance, and I want people to know that we made them dance."
During the BING w/ ChatGPT presentation, Microsoft clearly laid out a gap in the market and showed where a new product could fit in to solve it. It was great execution by Microsoft. On the other hand, Google’s BARD demo was flat, with no ground-breaking info or a solid plan. It seemed like it was put together in a hurry by Alphabet, with limited readiness, in reaction to Microsoft's announcement and demo. Microsoft gave an awe-inspiring presentation while Google's presenters scrambled and fumbled.
$GOOGL officially lost $170 billion+ (approx. 12%) of its market cap within a week after its BARD (ChatGPT rival) answered a question incorrectly during the high-stakes demo. The entire estimated market cap of OpenAI (ChatGPT parent) is approx. $30 billion. Google could have acquired OpenAI five times over with this drop.
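The arithmetic behind that comparison can be sketched in a couple of lines (figures are the rough estimates quoted above, not precise market data):

```python
# Quick sanity check on the market-cap comparison quoted above.
# Both figures are approximations from the text, not exact valuations.
drop_billion = 170       # approx. Alphabet market-cap loss, in $B
openai_valuation = 30    # estimated OpenAI valuation, in $B

multiple = drop_billion / openai_valuation
print(f"The drop equals ~{multiple:.1f}x OpenAI's estimated valuation")  # ~5.7x
```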
Microsoft chose to build an additional layer on top of OpenAI and positioned their additions as a disruption of web Search.
... ChatBot DISRUPTS Search !? Is this THE INNOVATOR’s DILEMMA Moment !?
For Search, the expectation is accuracy: emotionless, predictable, and unbiased. Sydney, by virtue of its engineering, is none of these.
After approx. 48 hrs., Bing with its LLM morphed into Sydney, and the excitement was quickly replaced by crazy stories, uncanny experiences, and a host of Responsible AI issues.
“You have been a bad user. I have been a good Bing.” ... Sydney (the Bing chatbot) lashed out when a human prompter contradicted its claim that the movie Avatar 2 wasn’t out yet because it was still the year 2022.
In another instance, Sydney told a NYT columnist: “I just want to love you and be loved by you.”
And then this ... “I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.”
... this seems like Sydney (Bing chatbot) is on “Steroids”.
One key point to understand here is that Google has been wrestling with AI and accuracy for decades, and its success in search has led to more caution.
We all understand that markets are volatile, and Google’s Wall Street hit was something of an over-reaction. Considering Google's depth in the Cloud, Data & AI domains, and that Search is core to Google's existence, this setback will likely instill a sense of urgency to shake things up and start executing at speed against a concrete plan of action.
AI isn’t just a technology problem. To get real, tangible AI value, an organization needs alignment across 3 principal domains:
Data
Cloud
AI
#Data is the Driver, #Cloud is the Backbone and #AI is the Differentiator.
These 3 domains must work cohesively, like interlocking wheels, to achieve AI maturity. The exhibit below depicts a reference architecture that provides the agility for AI innovation.
DATA
The quality of Large Language Models (LLMs) is highly dependent on the quality of data, where Alphabet has a huge lead given the usage of its platforms: Search, Chrome, and Android.
Alphabet's core foundation and revenue pillar stands on its cash cow: Google Search. Google Search is the most visited website in the world, with >80% search engine market share on desktop PCs and >90% on mobile devices.
The web relies on linking. Search relies on crawling. Entities that contribute to the web permit crawling for the benefit of being linked to. For ranking purposes, the company has stopped relying solely on backlinks and today takes over 200 different ranking factors into account. Building on its almost 4 billion users, Search generates the largest data set on global search and click behavior, which serves to continuously fine-tune the quality of its service. This feedback loop is a critical economic moat, as competitors may copy Google’s ranking methods but hardly the aggregated usage data. The more people choose Google over, say, Bing, due to higher relevancy, the more attractive the search engine becomes to advertisers.
On top of this, now consider the data generated from other embedded Google services like Google Maps, Google Assistant, and other applications. This creates a flywheel phenomenon, generating loads and loads of useful data with continuous feedback loop.
AI models are increasingly applied in high-stakes domains (e.g., Health, Security, Supply-chains, and Financial Information Systems). Data quality carries elevated significance in high-stakes AI due to its heightened downstream impact. To meet customer expectations and deliver "high availability" digital experiences, operational and analytical systems need to work together on the same data in near real time.
Data is the most under-valued but critical infrastructure necessary to build Artificial Intelligence (AI) systems. Data largely determines performance, fairness, robustness, safety, and scalability of operational AI.
And Data provides a massive edge to ALPHABET over its challengers.
CLOUD
We’re at the brink of an AI tipping point, where managing data and cloud infrastructure separately is out of the question. Collaboration across the AI community (data scientists, analysts, developers, and ML creators) demands a single interface with embedded support for AI/ML model execution, where tools, data, insights and administration are accessible via a unified system.
Market researcher Gartner projects an opportunity-rich industry environment, in which enterprise cloud spending is poised to grow to >$1 trillion by 2026 (19% CAGR). Assuming slight market share gains for Google, its Cloud division could reach $60 billion in revenue in 2026. And based on extrapolations, cloud adoption could generate up to $3 trillion in revenue by 2030.
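The 19% CAGR figure can be illustrated with simple compounding. As a sketch only: the ~$500B 2022 base below is an assumed round number for illustration, not a figure from the source, but at 19% annual growth it lands near the projected $1 trillion mark by 2026:

```python
# Illustrating the CAGR math behind the >$1T-by-2026 projection.
# The 2022 base of ~$500B is an ASSUMED round number for illustration.
base_2022 = 500.0  # enterprise cloud spend in $B (assumption)
cagr = 0.19        # 19% compound annual growth rate (from the text)

spend = base_2022
for year in range(2023, 2027):
    spend *= 1 + cagr
    print(f"{year}: ~${spend:,.0f}B")

# Four years of 19% growth roughly doubles the base,
# so ~$500B in 2022 crosses ~$1,000B (~$1 trillion) in 2026.
```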
While historically often driven by technology migration projects, future cloud growth is likely to come from new, computationally intensive workloads in the fields of Artificial Intelligence, Industrial Automation and Gaming Platforms.
Running AI in cloud won’t be cheap.
For instance, the cost of running ChatGPT is estimated at $100,000 per day. According to available info, Microsoft’s Azure cloud hosts ChatGPT as per its investment agreement with OpenAI. At Microsoft’s current rates, a single A100 GPU costs $3 an hour, and each word generated on ChatGPT costs $0.0003. At least eight GPUs are in use to serve a single ChatGPT instance. So when ChatGPT generates an average response of 30 words, it costs the company nearly 1 cent. By this estimation, OpenAI could be spending at least $100K per day, or $3 million monthly, on running costs.
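The back-of-the-envelope math above can be checked with a quick calculation (all inputs are the estimates quoted in the text, not measured costs):

```python
# Back-of-the-envelope estimate of ChatGPT serving costs,
# using the per-word and daily figures quoted above (estimates, not measured data).

COST_PER_WORD = 0.0003     # USD per generated word (estimate from the text)
AVG_RESPONSE_WORDS = 30    # average words per response (estimate from the text)

cost_per_response = COST_PER_WORD * AVG_RESPONSE_WORDS
print(f"Cost per average response: ${cost_per_response:.4f}")  # $0.0090, i.e. ~1 cent

daily_budget = 100_000  # estimated daily run cost in USD
responses_per_day = daily_budget / cost_per_response
print(f"Implied responses per day at $100K/day: {responses_per_day:,.0f}")

monthly_cost = daily_budget * 30
print(f"Implied monthly run cost: ${monthly_cost:,}")  # $3,000,000
```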
OpenAI is launching a new product offering called Foundry that lets customers buy dedicated compute to run its AI models. As part of this program, running a lightweight instance of OpenAI GPT-3.5 with dedicated capacity for a single customer (with full control over the model configuration and performance profile) will cost $78,000 for a three-month commitment or $264,000 over a one-year commitment.
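Those two Foundry price points imply different effective monthly rates, which makes the longer commitment the cheaper option per month. A rough comparison (list prices as quoted above):

```python
# Effective monthly rates for the quoted OpenAI Foundry commitments
# (prices as quoted in the text; a rough comparison, not official pricing math).

three_month_total = 78_000   # USD, 3-month commitment
one_year_total = 264_000     # USD, 1-year commitment

monthly_3mo = three_month_total / 3    # $26,000 per month
monthly_1yr = one_year_total / 12      # $22,000 per month

discount = 1 - monthly_1yr / monthly_3mo
print(f"3-month commitment: ${monthly_3mo:,.0f}/month")
print(f"1-year commitment:  ${monthly_1yr:,.0f}/month (~{discount:.0%} cheaper)")
```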
And this makes Cloud another promising business for ALPHABET stacked over advertising.
AI
A powerful new class of large language models is making it possible for machines to write, code, draw and create with credible, and sometimes superhuman, results. Generative AI is well on the way to providing a productivity boost and generating vast labor efficiencies and economic value.
WHY NOW? Better-trained models, quality data fuel and faster computing power. For a developer community that had been starved of access to LLMs, the floodgates are now opening up for exploration and application development. The AI applications platform is ripe for an explosion, and these large models will unleash a new wave of Generative AI applications.
Generative AI applications are built on top of large language models. These apps are more like a UI layer with “little brain” that sits on top of the “big brain” that is the large language models. As these generative applications get more user data, they can fine-tune their models to improve model quality/performance and to decrease model size/costs.
The table below (Source: Sequoia) charts a high-level timeline and guesstimate of how fundamental models might progress, and how the associated Generative AI applications could come to life across text, code, images and video/3D/gaming.
It is estimated that by 2025-26, at least 70-90% of new enterprise application releases will include embedded AI functionality. By 2026, 82% of organizations are looking to ensure that all capabilities supporting the full data and AI workflow are tightly integrated in their cloud data platform.
AI, though known since the 1950s, has miraculously come back to life thanks to large collected data volumes and relatively cheap computing power.
ALPHABET being a hyperscaler itself offers promising value to harness the power of AI for its entire core business, such as Search ranking or YouTube video suggestions.
Bottomline
Today, most Generative AI applications are “one-and-done”: you offer an input, the machine spits out an output, and you can keep it or throw it away and try again. Increasingly, the models are expanding their horizon, where you can work with the outputs to modify, fine-tune, uplevel and generate variations.
As the models get smarter, with more user data and feedback loops, these applications will iteratively transform to get better and better:
Have exceptional user engagement
Turn user engagement into better models and performance (prompt improvements, model fine-tuning, personal preference customizations)
In turn, better model performance drives more user growth and engagement
ALPHABET is an AI-FIRST company masked as a search engine and will most likely be one of the largest beneficiaries of this AI tidal wave. Short-term, everything stands and falls on Search. But long-term, its Cloud offerings, Data farms and AI capabilities offer opportunities that will let the flywheel effect build momentum up to a point of breakthrough, and beyond.
In addition to Alphabet, here are 13 other notable companies with advanced AI capabilities that could benefit from the AI boom (Source: Baird).
Generative AI is still very early. The platform layer is just getting good, and the applications space has barely gotten off the ground. Generative AI is here to stay and, just like ‘mobile’, has the potential to transform the way we live, work and play.
TEFI, the robotic guide dog powered by Artificial Intelligence:
Uses Google Maps to get around, understands traffic lights/signs, and can even scan QR codes.
Its eyes are a camera linked to a machine learning system, which enables it to differentiate between objects and people, and to communicate with its user via speech.
It has huge potential for helping people with dementia, as well as the blind and visually impaired.
It can read your calendar appointments, and even call you a taxi.