From the course: Hands-On AI: Building LLM-Powered Apps
Solution: Putting it all together - Python Tutorial
- [Instructor] Welcome back to the solution portion of our lab. Let's navigate to app/app.py, where we need to implement our chain. Because we want to retrieve answers with sources, we will use the RetrievalQAWithSourcesChain. We will initialize it with from_chain_type and pass in a large language model, which is the model we define here. We want to set the temperature to zero. Temperature ranges from zero to two: at zero, the model is not very imaginative, while at two, it can produce more creative answers. Since we are talking to a PDF, we do not need the model to be creative. Then we will set the chain type to "stuff". This is the default chain type, and "stuff" means we send all of the retrieved documents into the context. For the retriever, we will use the search engine we built previously, and we will set the max_token_limit to 4097…
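The "stuff" chain type described above simply concatenates every retrieved document into a single prompt context before sending it to the model. Here is a rough, framework-free illustration of that idea; the `Document` class, the `build_stuff_prompt` helper, and the prompt wording are simplified stand-ins for this sketch, not LangChain's actual internals:

```python
from dataclasses import dataclass


@dataclass
class Document:
    """Simplified stand-in for a retrieved document with a source label."""
    page_content: str
    source: str


def build_stuff_prompt(question: str, docs: list[Document]) -> str:
    """'Stuff' all retrieved documents into one prompt context.

    This mirrors what chain_type="stuff" does conceptually: no
    summarizing or map-reduce, just direct concatenation, which is why
    the total context must fit under the model's token limit.
    """
    context = "\n\n".join(
        f"Content: {d.page_content}\nSource: {d.source}" for d in docs
    )
    return (
        "Answer the question using only the sources below, "
        "and cite the sources you used.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


docs = [
    Document("Chainlit builds chat UIs for LLM apps.", "intro.pdf, page 1"),
    Document("Temperature 0 gives deterministic answers.", "intro.pdf, page 2"),
]
prompt = build_stuff_prompt("What does temperature 0 do?", docs)
```

Because every document goes into the prompt verbatim, "stuff" works well for a handful of short chunks, which is also why the transcript caps the token budget with max_token_limit.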
Contents
- Retrieval augmented generation (3m 30s)
- Search engine basics (2m 32s)
- Embedding search (3m)
- Embedding model limitations (3m 15s)
- Challenge: Enabling load PDF to Chainlit app (48s)
- Solution: Enabling load PDF to Chainlit app (5m 4s)
- Challenge: Indexing documents into a vector database (1m 50s)
- Solution: Indexing documents into a vector database (1m 43s)
- Challenge: Putting it all together (1m 10s)
- Solution: Putting it all together (3m 17s)
- Trying out your chat with the PDF app (2m 15s)