Decoding Artificial Intelligence and Its Impact on Investing

The debut of ChatGPT in November 2022 marked a turning point in the world of artificial intelligence.  As the language model emerged into the digital limelight, it ignited a global frenzy of curiosity and anticipation.  ChatGPT produces responses spanning from everyday practical queries to the imaginative and surreal.  It has drafted cover letters, woven lines of poetry, written wedding vows, and emulated the literary style of William Shakespeare.

What sets ChatGPT apart is its exceptional ability to engage in contextually relevant, naturally flowing dialogue with users, making its interactions seem remarkably human.  In essence, it embodies the term “artificial intelligence” (often shortened to AI).

ChatGPT falls under the umbrella of a subfield of artificial intelligence called Generative AI.  Here, the technology possesses the ability to “generate” content in a manner that is remarkably human-like.  In addition to text, generative AI is able to produce images from textual descriptions.  For instance, it can create pictures of “a cityscape on the back of a giant turtle, gliding through the cosmos, in the style of Van Gogh”.  Generative AI can also compose music, generate videos and animations, and create marketing materials.  And we are only scratching the surface of what is possible.

The accessibility of ChatGPT broadens AI’s appeal.  For the first time, people from diverse fields with little technical background can experience the capabilities of AI firsthand.  The public’s perception of AI has shifted from a distant and abstract concept to a more relatable and practical tool.

While ChatGPT represents a significant advancement, the potential of AI is far-reaching and extends well beyond generating conversation.  This article is a deeper dive into technology than we normally publish, as we seek to demystify the dynamic realm of AI.

What AI is, and how it differs from traditional programming

In its simplest form, AI refers to machines or programs that can perform tasks that would typically require a human-like level of intelligence.  Imagine computers not just following rigid instructions, but actually learning from experience.  That’s AI in a nutshell.

AI differs from traditional computer programming in how it approaches, learns from, and processes information.  Traditional programs follow predetermined instructions, with clear rules and logic that dictate how a computer solves a specific problem or performs a specific task.  They execute well within a predefined scope, but require explicit manual coding for each new scenario or modification, making them less adaptable to change.

AI systems, by contrast, are trained on large datasets to identify patterns and make decisions based on the data they have seen.  They can also learn from experience and improve over time, and can handle variations and nuances in data without needing explicit instructions.  This ability to “learn on the job” is called machine learning.  AI excels at handling complex and unstructured data, making it suitable for tasks like image recognition, natural language processing, and decision-making in dynamic environments.
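
To make the distinction concrete, consider a toy spam filter.  The sketch below is our own illustration (the phrases and the use of the scikit-learn library are our assumptions, not anything from a specific product), contrasting a hand-written rule with a model that learns patterns from labeled examples:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Traditional programming: the logic is spelled out explicitly, and
# every new spam trick requires another hand-coded rule.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Machine learning: the logic is inferred from labeled examples.
messages = ["Free money, click now!", "Lunch at noon?",
            "You are a winner!", "Meeting moved to 3pm"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy training data)

vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(messages), labels)

# The trained model can generalize to wording the rule never anticipated.
print(model.predict(vectorizer.transform(["Claim your free money today"])))
# expected output: [1], i.e. spam
```

A real filter would be trained on millions of messages, but the division of labor is the same: the programmer supplies examples rather than rules.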

Consider a self-driving car that navigates through traffic, adapting to changing road conditions seamlessly.  It can read and interpret traffic signs, and adjust its speed and behavior accordingly to comply with road safety rules.  It has advanced image recognition systems that help identify pedestrians and predict their movements.  It can quickly decide whether to stop for a dog standing in the road or drive over a torn paper bag, a task more intricate and demanding for AI than one might expect, given the inherent variability in how these objects appear.

As AI continues to evolve, its real-world potential remains vast.  As we embrace AI’s promises, we must also keep an eye on responsible innovation.  Ensuring the ethical use of AI, safeguarding data privacy, guarding against misuse, and addressing bias in AI algorithms are crucial considerations that demand attention.

Nvidia and its role in AI

As your advisors, we proactively seek and embrace any potential opportunities in the investment landscape.  We hold a reasonable degree of enthusiasm for the growth potential of the AI industry, and believe it presents a wealth of opportunities for our clients.  We expect the technology to have a wide-reaching influence, and envision numerous companies reaping its benefits.  Let’s now explore one such company in greater detail:  Nvidia.

Most AI models are incredibly complex, consisting of millions or even billions of parameters.  In many AI applications, the ability to process enormous amounts of data at speed may be crucial.  In short, modern AI models require colossal amounts of computing power.
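
As a rough, hypothetical back-of-envelope illustration (our own arithmetic, not a figure from any chipmaker or model vendor), consider just the memory needed to hold a mid-sized model:

```python
# Memory needed just to *store* the weights of a hypothetical model.
parameters = 7_000_000_000        # a 7-billion-parameter model (illustrative)
bytes_per_parameter = 2           # 16-bit (half-precision) weights
weights_gb = parameters * bytes_per_parameter / 1e9
print(f"~{weights_gb:.0f} GB for the weights alone")  # ~14 GB
```

Training multiplies this several times over, since gradients, optimizer state, and intermediate activations must also be held in memory, which is why these workloads run on specialized hardware.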

Nvidia initially gained recognition as a supplier of high-performance graphics chips primarily used for PC gaming and graphics-intensive programs.  Over the years, it has evolved into an enabler of just about every computationally intensive field, accelerating progress and becoming one of the most influential players in the AI ecosystem.

There are two primary chips that perform computational functions in computer systems – the central processing unit (CPU) and the graphics processing unit (GPU).  The CPU is the brain of the computer, executing instructions and performing calculations to make a computer work.  Like the conductor of an orchestra, it coordinates all the actions of the computer’s hardware and ensures everything runs smoothly.

A GPU is a specialized computer chip designed for simultaneous multitasking, a concept known as parallel processing.  This capability proves invaluable for computationally demanding tasks related to graphics and visual processing.  Essentially, GPUs act as an additional computational powerhouse alongside the CPU, handling intensive calculations that the CPU may struggle to manage alone.  Nvidia stands out as a leading supplier of GPUs in the market.
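
The sketch below, a minimal analogy of our own (it uses NumPy and runs entirely on the CPU), shows the shape of work that parallel processing accelerates: applying one operation to many data elements at once rather than one at a time:

```python
import numpy as np

brightness = np.random.rand(1_000_000)  # e.g., a million pixel values

# Serial mindset: process one element at a time, in sequence.
doubled_one_by_one = [b * 2 for b in brightness]

# Parallel mindset: express the same operation over all elements at once.
# On a GPU, thousands of cores would each handle a slice simultaneously.
doubled_all_at_once = brightness * 2
```

Graphics workloads, which repeat identical arithmetic across millions of pixels, fit this pattern naturally – and so, it turns out, does the matrix math at the heart of AI models.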

For several decades, CPU performance saw rapid improvements thanks to the increasing number of transistors on chips.  In recent years however, the semiconductor industry has been encountering the limitations imposed by the laws of physics, making it increasingly challenging and expensive to integrate more transistors onto chips.  Consequently, progress in traditional CPU-based approaches has slowed.

Around the early-to-mid 2000s, developers realized that GPUs, with their parallel processing capabilities, could be applied beyond graphics to handle extensive computations on large datasets.  Before long, GPUs overtook CPUs in the raw computational throughput they could deliver.  This revelation spurred a surge in the adoption of GPUs across various domains, including complex modeling, research, data analytics, cryptocurrency mining, and notably, AI.  The move from graphics to general-purpose computing has become integral to the advancement of AI, eclipsing the GPU’s original role in graphics processing.

One of the challenges with GPUs was that they were difficult for developers to program.  That changed when Nvidia introduced the Compute Unified Device Architecture, or CUDA for short.  In simple terms, CUDA can be thought of as a proprietary software platform that opens Nvidia’s GPUs to developers for a wide range of processing tasks, not just graphics-related work.  CUDA was backward compatible with the hundreds of millions of Nvidia GPUs already on the market, and Nvidia’s consistent investment in the ecosystem led to widespread adoption.  The launch of CUDA has played a pivotal role in democratizing AI development and has created a substantial competitive advantage for Nvidia, especially in the data center.
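
For a flavor of what CUDA-style, general-purpose GPU computing looks like to a developer, here is a brief sketch using CuPy, an open-source Python library built on top of CUDA (our choice of illustration, not something mentioned in this article; it requires an Nvidia GPU with the CUDA toolkit installed):

```python
import numpy as np
import cupy as cp  # NumPy-like API that executes on Nvidia GPUs via CUDA

a = np.random.rand(4096, 4096).astype(np.float32)
b = np.random.rand(4096, 4096).astype(np.float32)

# On the CPU: a standard NumPy matrix multiplication.
c_cpu = a @ b

# On the GPU: copy the data over, run the same multiply as a CUDA kernel,
# then copy the result back to ordinary host memory.
a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
c_gpu = a_gpu @ b_gpu
c_back = cp.asnumpy(c_gpu)
```

The point of platforms like CUDA is precisely this ergonomics: developers write familiar-looking code, and the heavy parallel computation is dispatched to the GPU behind the scenes.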

There is no meaningful alternative to CUDA, which provides Nvidia with a competitive moat and enables the company to dominate the enterprise GPU market.  However, as computing demands continue to surge, the collaboration of the GPU, CPU and DPU (data processing unit) – often referred to as the “holy trinity” of computing – becomes increasingly vital.  To this end, Nvidia has expanded its presence and embraced the role of a “three-chip company”.  Through its purchase of Mellanox in 2020, its largest acquisition to date, Nvidia developed its BlueField range of DPUs, which are designed to efficiently offload workloads and free up valuable CPU resources.  More recently, Nvidia announced its entry into the CPU market with the launch of the Grace CPU, making it the only company that can deliver a “full-stack” system.

Nvidia’s Data Center segment sells hardware and, increasingly, software solutions to support high-performance computing (HPC) workloads.  Its clientele spans a wide spectrum, including hyperscale cloud providers like Amazon Web Services, Microsoft Azure, Google Cloud, and Alibaba Cloud, as well as enterprise customers and research institutions that rely on supercomputers for complex scientific research.  While AMD, Google, Amazon, and Meta have also produced AI chips, Nvidia today accounts for more than 70 percent of AI chip sales, and holds an even bigger position in training generative AI models, according to the research firm Omdia.

We own Nvidia stock as part of our Ascent Global Growth strategy.  While we have recently reduced our exposure to capitalize on the meteoric rise in share prices, we remain committed to the investment.  AI has undoubtedly generated a substantial amount of hype, leading to potential misconceptions and unrealistic expectations.  However, we believe AI is a tangible reality.  It has indeed made significant strides in recent years, finding practical applications in various industries.  At Ascent, we consistently monitor AI’s progress and explore other emerging technologies, always in pursuit of the best investment opportunities on behalf of our clients.