From the course: Generative AI for Business Leaders

AI model

- Now, let's discuss the AI model, the algorithm. This is the brain of the operation. It defines how the model learns patterns and relationships within the data and then applies that knowledge to perform an intellectual task. In the case of generative models, the model uses the knowledge it has gained to create new content in the form of images, text, video, audio, and more. With recent developments in AI, we now have foundation models, which set the basis for many of the AI systems we'll use in the future. A foundation model is considered a new paradigm for building AI systems: it becomes a starting point for a wide range of downstream tasks.

Looking back at the categorical evolution of AI systems, we started with rule-based algorithms, which made decisions based on a set of explicit logical rules. Those later evolved into search and optimization algorithms, which focused on efficiently exploring solution spaces and finding optimal results; think of a search engine. Then we had machine learning algorithms, such as supervised, unsupervised, and reinforcement learning, which were used to analyze and classify data. Those paved the way for massive progress in computer vision and natural language processing. Machine learning then branched into deep learning algorithms, which became the state-of-the-art technology in image classification, object detection, and natural language understanding.

GPT, the Generative Pre-trained Transformer we discussed before, sets the ground for a new foundation model. It is a large language model that is trained on a massive amount of text data and is able to generate new text in response to user input. It is based on transformers, a deep learning architecture that was invented in 2017 and became very popular over time. In my previous course, I mentioned it as one of the most exciting developments in AI.
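At the heart of the transformer architecture is an operation called self-attention. As a rough intuition, here is a minimal NumPy sketch of its core computation. This is an illustration only: a real transformer adds learned query/key/value projections, multiple attention heads, and many stacked layers.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Minimal sketch of scaled dot-product self-attention.

    x is a (sequence_length, d) matrix of token embeddings. Each output
    row is a weighted mix of all input rows, where the weights reflect
    how relevant each other position is to the current one.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ x                               # attend: weighted mix of inputs

# Toy input: 4 "tokens", each an 8-dimensional embedding
x = np.random.default_rng(0).normal(size=(4, 8))
out = self_attention(x)
print(out.shape)  # (4, 8): one context-aware vector per token
```

The softmax weights are what let the network focus on the relevant parts of the input and largely ignore the rest.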
Transformers are a type of neural network that uses a technique called self-attention to identify which parts of an input, such as the sentences of an article, are most essential, focusing on the relevant parts and ignoring the rest. This in turn helps the network better understand the input and make more accurate predictions. Transformers are used in a variety of natural language tasks, from generating text based on input prompts (think about ChatGPT) to summarizing or translating text from one language to another with high accuracy. For example, I recently used GPT when I published a post in English and translated it to Hindi, and the result was incredible. Instead of simply translating the words, the model understood the intent of my post and then rewrote it in Hindi, in the style of a native speaker. I got many compliments for my Hindi.

Another recent foundation model is the diffusion model. Diffusion models have emerged in recent years as a powerful new approach to generative modeling, specifically in the field of image generation. They work by gradually transforming a starting image, usually random noise, into a target image, like a photograph, through a series of small, randomly determined steps. Due to their ability to produce photorealistic images, diffusion models have potential applications in a wide range of domains. For instance, they could be used to accurately identify or classify objects in images, restore images, and generate rich media for use in many applications.

Lastly, another promising area of AI research is called reinforcement learning from human feedback. This approach uses signals from human evaluators, such as upvotes or downvotes, to improve the performance of an AI model. By incorporating this feedback, the model learns to adjust its decisions based on how well they align with human expectations.
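The upvote/downvote idea can be sketched as a toy preference tally. This is an illustration only, not real RLHF: in practice, a reward model is trained from human comparisons and the language model is then fine-tuned with reinforcement learning. The candidate responses and votes below are invented for the example.

```python
from collections import defaultdict

# Each candidate response accumulates a score from human upvotes/downvotes.
scores: dict[str, int] = defaultdict(int)

def record_feedback(response: str, upvote: bool) -> None:
    """Record one human evaluator's vote on a response."""
    scores[response] += 1 if upvote else -1

def best_response(candidates: list[str]) -> str:
    """Prefer the candidate that best aligns with human feedback so far."""
    return max(candidates, key=lambda r: scores[r])

candidates = ["helpful answer", "off-topic answer"]
record_feedback("helpful answer", upvote=True)
record_feedback("off-topic answer", upvote=False)
print(best_response(candidates))  # helpful answer
```

Even this crude version shows the core loop: human signals accumulate, and the system's choices shift toward what humans prefer.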
This approach can be particularly effective in domains where it's difficult to define clear objective functions for the model to optimize. For example, a social media moderation AI might use reinforcement learning from human feedback to better identify offensive or harmful content and then limit its spread on the platform. Now, there's another very valuable tool for refining AI models, called prompt engineering, but its applications go far beyond that. It's a whole new interaction model, so let's learn about that next.