A primer on how popular AI models work and their current limitations


Artificial intelligence tools are becoming more ubiquitous with each passing week. The release of ChatGPT (initially powered by the GPT-3.5 model) in particular launched conversations about the capabilities of AI into the mainstream. Since then, news stories and speculation about AI supplementing – and sometimes replacing – human workers continue to be published. The economics of AI is, as a result, becoming an important topic in our field.

As AI continues to feature prominently in the news cycle, there is bound to be speculation about the future. Indeed, AI’s potential to disrupt the labor market has been perhaps the biggest subject of discussion. But these conversations are often carried out by people who haven’t studied how the new AI tools actually work. In many news articles and public discussions, the capabilities of current AI tools – particularly large language models (LLMs), now that ChatGPT has nearly become a household name – are poorly defined or understood.

It’s therefore worth spending some time learning how current LLMs work, so that we all have a better understanding of how they might realistically shape the economy as these conversations arise. This article focuses on LLMs, though other types of AI are still being researched and developed.


How do these LLMs work?

ChatGPT – and many models like it, such as Google’s Bard (built on the LaMDA model), Microsoft’s Bing AI, and a host of others – is a large language model (LLM). LLMs are algorithms that generate text based on training data given to the model during development.

These models are built around a neural network – a software structure loosely modeled on the way human neurons function. Specifically, they use the transformer architecture, a type of neural network introduced by deep learning researchers in 2017.
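To make this less abstract, below is a minimal sketch of the scaled dot-product attention operation at the core of a transformer layer, written with NumPy. The dimensions, random inputs, and function name are purely illustrative and do not correspond to any particular production model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation of a transformer layer: each token's query is compared
    against every token's key, and the resulting weights mix the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # similarity of each token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over tokens
    return weights @ V                                 # weighted mix of value vectors

# Illustrative only: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)   # (4, 8): one updated vector per token
```

Real transformers stack many such layers, each with learned projection matrices, but the basic idea – tokens attending to and mixing information from other tokens – is the same.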

These models can respond to questions that humans ask them and generate (usually) accurate responses. They also “learn” when people provide feedback, which helps re-weight the model so that incorrect responses become less likely.

The core process behind all of these activities is an algorithm that aims to accurately predict the most likely next word. When queried, the model builds a response by using probabilities to choose those most likely next words, one after another, until the response is finished.
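As a toy illustration of this idea, the sketch below uses a tiny, made-up table of next-word probabilities (a real LLM learns billions of parameters that implicitly encode such probabilities) and repeatedly samples a next word until an end-of-response marker is reached.

```python
import random

# Toy "language model": for a handful of contexts, the probability of the next word.
# A real LLM learns these patterns from massive amounts of training text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "economy": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "economy": {"grew": 0.6, "shrank": 0.4},
    "sat": {"<end>": 1.0}, "ran": {"<end>": 1.0},
    "grew": {"<end>": 1.0}, "shrank": {"<end>": 1.0},
}

def generate(first_word="the"):
    """Repeatedly sample a likely next word until the response is finished."""
    words = [first_word]
    while words[-1] != "<end>":
        options = next_word_probs[words[-1]]
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words[:-1])

print(generate())   # e.g. "the economy grew"
```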


Picture credit: Freepik.de

Once a response is finished, the model’s parameters can be adjusted by the professionals training it so that it responds differently (i.e. more accurately) to the same queries. Over time, this training process produces a complete and powerful model like GPT-4 – trained on a far larger volume of data than its predecessor, GPT-3 – that can answer many prompts in a useful way.
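The sketch below shows, in highly simplified form, what one such parameter adjustment can look like: a single gradient-descent step that increases the probability the model assigns to the word that actually followed in the training text. The three-word vocabulary, learning rate, and repetition count are invented for illustration; real training loops adjust billions of parameters over enormous datasets.

```python
import numpy as np

# Toy vocabulary and a single trainable vector of "logits":
# the model's current preference for each word as a continuation of "the".
vocab = ["cat", "dog", "economy"]
logits = np.zeros(3)                 # untrained: all words equally likely

def probs(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def training_step(logits, correct_index, learning_rate=0.5):
    """One gradient-descent step on cross-entropy loss: raise the probability of
    the word that actually followed in the training text, lower the others."""
    p = probs(logits)
    target = np.zeros_like(p)
    target[correct_index] = 1.0
    return logits - learning_rate * (p - target)

# Suppose the training text says "the economy ..." far more often than the rest.
for _ in range(50):
    logits = training_step(logits, vocab.index("economy"))

print(dict(zip(vocab, probs(logits).round(3))))
# After training, "economy" receives most of the probability mass.
```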

Training data partially defines the usefulness and limitations of AI models

Naturally then, training data is incredibly important for the development of AI models based on LLMs. That’s because the specific training data used is influential in determining what responses the model will produce. This has a few key implications for the limitations of these tools.

First, human knowledge and expertise are required to build a useful LLM. People may speculate about whether AI tools will eventually be able to replace human workers – providing doctors’ diagnoses, for instance.

But in order to create such an expert LLM, a large amount of medical information and knowledge (in this case) is needed. At least for now, this information must be compiled by humans. And, it pays to have subject-matter experts (such as medical doctors) test the output of any model to ensure it’s accurate. This suggests that there may be net-new jobs for subject matter experts to assist with the development of subject-specific AI tools.

Another consequence of the importance of training data is that biases and prejudices can become internalized by AI models if not dealt with. LLMs don’t exhibit critical thinking skills in the way humans do (at least for now). This means that a new LLM will “learn” any bias, prejudice, or myth as fact if those biases are present in their training data.

Clearly, this can produce incorrect or unfavorable results as the models are deployed, if the biases are not detected beforehand. This is another reason why carefully cultivated training data is valuable.
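The toy example below shows how easily this happens. A simple frequency-based “model” trained on a deliberately skewed corpus reproduces exactly the association it saw, and nothing else. The corpus, professions, and pronouns here are invented purely to illustrate the mechanism, not drawn from any real dataset.

```python
from collections import Counter, defaultdict

# Deliberately skewed toy "training data": the corpus only ever pairs one
# profession with one pronoun, so that association is all the model can learn.
corpus = [
    "the nurse said she is ready",
    "the nurse said she will help",
    "the engineer said he is ready",
    "the engineer said he will help",
]

# Count which pronoun follows "said" for each profession.
follow = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    profession, pronoun = words[1], words[3]
    follow[profession][pronoun] += 1

def most_likely_pronoun(profession):
    """A frequency-based 'model' simply reproduces the association in its data."""
    return follow[profession].most_common(1)[0][0]

print(most_likely_pronoun("nurse"))     # "she" – learned from the skewed corpus
print(most_likely_pronoun("engineer"))  # "he"  – the bias is now baked in
```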

The fact that useful training data is required to build a good LLM tool means that the intentional selection and maintenance of high-quality, relevant training data – and the ability to minimize any biases the tool might learn from that data – will become a critical need in the economy. This can create an important new role for human experts in various fields. And, certain companies might specialize in providing subject-matter specific training data for AI models that others can purchase or subscribe to.


Image credit: DCStudio on Freepik.de

Will we ever develop an AI that’s as smart as we are?

A common mistake people currently make about AI models, LLMs in particular, is attributing too much independent intelligence to the tools. Without understanding how the model works, it’s easy to think of an AI model as a thinking being or individual, especially given people’s penchant for personifying things.

However, no type of AI has yet reached the point where it can be considered an independently intelligent, thinking entity. AI experts have coined the term “artificial general intelligence” for such a theoretical entity, as opposed to “narrow” intelligence that is good at a specific task, like controlling a car, giving media recommendations based on a user profile, or responding to questions in a human-like way.

If and when a more “general intelligence” AI does exist, it would theoretically be able to grow, evolve, and adapt to new functions beyond its basic structure. It would be able to “think” and learn, self-critically improve its own functioning, reason similarly to humans, and tackle any problem a human can. Expert opinions vary on whether such a model is possible, and on when it might be developed if so. Interested readers can check out this article on The Conversation to read a few experts’ thoughts on the subject.

Other limitations: hallucinations and calculators

Especially given that AI models have not (yet) reached a human level of intelligence, they will continue to have clear limitations in practice. LLMs in particular sometimes struggle to differentiate between fact and fiction. It’s quite possible to convince an LLM of something patently false, which can derail its subsequent text generation.

Sometimes, LLMs can even invent fake sources or facts; this is known as “hallucination”. Hallucination can be difficult to spot if the user doesn’t have expertise in that field of knowledge. And, this can cause serious problems; imagine if an LLM-based AI tool gave a medical diagnosis that was not only incorrect, but completely fabricated!

New users often attempt to use LLMs like ChatGPT as a kind of search engine as well. It may appear that these models can instantly search the Internet to provide an answer to any question. But the hallucination problem – and an understanding of how the underlying algorithm works – quickly reveals that “search” results from an LLM cannot be trusted.

This is because the models don’t actually search the Internet (at least, most of them didn’t at the time this article was written; in the future, it’s likely that many models will have full, up-to-date Internet access). They simply generate words repeatedly, using probabilities based on the training data used to build them; it’s entirely possible, even likely, that the answer to a search-style question posed to an LLM will be incorrect or entirely hallucinated. Moreover, the models were trained on a now out-of-date block of text taken from the Internet, so even if a model does produce a correct answer, any time-sensitive information will quickly become obsolete without continual updates.

A related issue is that LLMs are (again, at least for now) bad at math, and cannot be used as calculators. This often surprises new users, but it makes perfect sense when considering the actual algorithm underlying models like ChatGPT.

These models, again, predict the next most likely word. They don’t recognize math symbols as being fundamentally different from any other word in everyday language. As such, these models don’t actually compute any math; rather, they produce strings of text that have a high likelihood of being “correct”.

When it comes to ordinary language, this is often fine, allowing LLMs to generate large blocks of coherent text. But when it comes to the very precise, rules-based language of mathematics, it is a major limitation for LLMs.
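The caricature below illustrates the difference. A purely statistical text predictor can only echo answer strings like those it has seen before (real LLMs generalize far better than this, but they still do not execute the arithmetic), while a calculator actually computes the result. The training lines and prompts are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": arithmetic written out as ordinary text.
training_text = ["2 + 2 = 4", "3 + 5 = 8", "2 + 3 = 5", "3 + 5 = 8"]

# A text predictor only remembers which answer string most often followed
# a given prompt string – it never actually adds the numbers.
seen_answers = defaultdict(Counter)
for line in training_text:
    prompt, answer = line.split(" = ")
    seen_answers[prompt][answer] += 1

def text_predict(prompt):
    if prompt in seen_answers:
        return seen_answers[prompt].most_common(1)[0][0]
    # Unseen prompt: fall back to the globally most common answer token,
    # which is how a purely statistical predictor can confidently get math wrong.
    all_answers = Counter()
    for counts in seen_answers.values():
        all_answers.update(counts)
    return all_answers.most_common(1)[0][0]

def calculate(prompt):
    a, b = prompt.split(" + ")
    return str(int(a) + int(b))

print(text_predict("17 + 26"))  # plausible-looking but wrong (here: "8")
print(calculate("17 + 26"))     # "43" – actual computation
```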

However, other AI models have been developed (and will be increasingly refined) that are optimized for math (and coding, though LLMs are better at coding than math at this moment). It’s possible that LLMs become more reliable in these capacities in the future, too, but at the moment their responses shouldn’t be blindly trusted.

Update, 16 December 2023: Researchers have just successfully solved an "unsolvable" math problem using AI. The article "Mathematical discoveries from program search with large language models", published in early-access form in Nature on 14 December 2023 (the final paper is not published as of this writing), details how a team of researchers at Google DeepMind used two AI tools in tandem to find the solution. Notably, the process still exhibited some of the LLM limitations described above. In short, the LLM used by the research team proposed millions of suggestions; upon evaluation, only the best responses were kept, and the problem was then started anew. Over the course of several days, this eventually led the algorithm and the research team to a correct answer to the "unsolvable" problem.

This demonstrates two things. First, the hallucination problem is still present: millions of responses had to be discarded before the right one was reached. Second, it demonstrates the emergence property that transformer models sometimes display; in other words, a capability that was not explicitly coded (solving an "unsolvable" math problem) was nevertheless something the model could do. Clearly, AI tools will continue to evolve well beyond the original scope of this article. Their use in everyday organizations has the potential to greatly increase economic efficiency, though naturally they pose a creative-destruction risk to many industries as well.
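The sketch below captures the general propose-evaluate-keep-the-best loop described above, not the actual system from the Nature paper (which generates and scores computer programs rather than numbers). The toy "problem" of homing in on a target value is an invented stand-in, used only to show how many discarded proposals such a loop can involve.

```python
import random

def llm_propose(best_so_far, n_candidates):
    """Stand-in for the LLM: propose many candidate solutions, most of them
    wrong, some of them small variations on the current best."""
    return [best_so_far + random.uniform(-5, 5) for _ in range(n_candidates)]

def evaluate(candidate):
    """Stand-in for the automatic checker that scores each proposal.
    Here the 'problem' is simply to get as close to 42 as possible."""
    return -abs(candidate - 42)

best = 0.0
for round_number in range(100):
    candidates = llm_propose(best, n_candidates=1000)   # most will be discarded
    best = max(candidates + [best], key=evaluate)       # keep only the best response

print(round(best, 3))   # converges toward 42 over many propose-and-evaluate rounds
```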

So, what can AI do right now?

As of this writing, LLM-based AI models are very good at rote tasks and producing messages. It can be quite useful to have an LLM generate an email or similar message quickly. It’s also relatively straightforward to get them to write in a specific style or emphasize a certain tone.

But these tasks aren’t the only things LLMs can do. They can be used to automate customer service, write software, contribute to research and development, perform sales operations, summarize large datasets quickly, “read” and quickly digest any given block of text, and more. This has the potential to shake up many industries, including software development, call centers, and customer support.

AI tools have also notoriously been used to generate images and art, with a large degree of success. These models usually aren’t just LLMs; they use language models to interpret the text prompt, and separate image-generation algorithms to produce images from that text.

It’s true that the limitations discussed above have prevented LLMs from completing more impressive tasks on their own, like writing a best-selling novel (though many AI-generated “nonsense” books have flooded digital book lists lately), or publishing research. However, these tools have already greatly diminished the work required for such tasks in many cases.

AI writing assistants have already become fairly advanced, although they aren’t quite producing works independently yet. For example, several research papers published this year have noted that ChatGPT was used to help write them (although prominent journals like Science have declared that ChatGPT cannot be an author). These tools can also help someone write a (non-“nonsense”) book in mere days by generating the foundation of a story that generally just needs editing. This could drastically change the writing and publishing industries, or cause writers and editors to start competing with one another as LLM tools perform much of the actual writing. Regardless, it may only be a matter of time before the next Game of Thrones is written by an LLM.


Image credit: Pixabay.

Emergence and the future of AI

Regardless of their current deficiencies or limitations, it’s clear that AI tools are becoming a permanent fixture of the economy. They have a wide array of applications and use cases, as mentioned above.

And, if history is any indication, AI tools will only get better from here. The idea of an artificial general intelligence was already discussed, and while current AI models do not exhibit that type of ability, they do exhibit a related phenomenon: emergence.

Emergence in computer science can be defined as a model demonstrating new abilities that it was not explicitly trained to have. For example, in this article one researcher describes how sufficiently complex LLMs were able to correctly identify movies based only on a set of emojis. Often, we don’t quite understand how the underlying mechanism – in this case, the neural network’s probability-based word selection – produces these emergent abilities.

If the concept of emergence seems odd, consider it from a biological perspective. Humans are made up of trillions of cells, each of which has a defined structure and a specific set of tasks within the body. Biologists and chemists have described in great detail how proteins, carbohydrates, lipids, and nucleic acids (the four major classes of biological molecules) operate within our bodies. We understand how substances move across cell membranes, how blood carries oxygen, and so on.

But, it’s extremely difficult to describe how consciousness develops from the basic biological and chemical processes occurring in our bodies. In this way, consciousness can be thought of as an emergent property derived from the physical biochemistry of a human being.

In a similar way, LLMs have demonstrated some capabilities that are hard to trace back to the underlying neural network. These emergent capabilities tend to appear in large models, but not smaller ones. Often, AI researchers describe a tipping point where a large enough model can suddenly perform new tasks that weren’t possible before.

Emergence – and the fact that widely used, publicly-available LLMs are relatively recent – suggests that AI models will only continue to improve in the future. It’s likely that the current limitations of LLMs and other types of AI models will be minimized over time as we get better and better at training them, and as we discover increasingly efficient means of building AI models.

Like much technological progress in the economy, time will likely be kind to the growth of AI. But, it still pays to be informed about how these models actually work, in order to have meaningful conversations about their impact on the future economy.


Header image credit: rawpixel.com on Freepik.de.
