Podcast #7 RAG Integration and AI Future Insights with Ofer Mendelevitch of Vectara

This week we delve into topics such as the importance of feeding accurate data to Large Language Models (LLMs) to reduce hallucinations, the capabilities of vector databases, and tools like Vectara.

Reducing LLM Hallucinations with Retrieval Augmented Generation: A Conversation with Ofer Mendelevitch

In this episode, Joshua Schoen of Work AI discusses the complexities and innovations in the field of Retrieval Augmented Generation (RAG) with Ofer Mendelevitch from Vectara.

They delve into topics such as the importance of feeding accurate data to Large Language Models (LLMs) to reduce hallucinations, the capabilities of vector databases, and how Vectara provides a comprehensive RAG-as-a-Service solution. Ofer also shares his journey from working with early GPT models to co-founding Vectara. They explore various use cases for Vectara's technology in legal, healthcare, and other sectors, and end with a demo showcasing the practical applications of RAG in legal document management.

00:00 Introduction to AI and Hallucinations

01:29 Ofer's Journey with GPT and Syntegra

02:42 Joining Vectara and Its Mission

03:06 Vectara's Growth and Funding

03:31 What is Vectara?

06:54 RAG vs. Fine-Tuning

09:30 Reducing Hallucinations with RAG

12:39 Building a RAG Pipeline with Vectara

14:45 Customer Use Cases and Future of LLMs

18:51 Live Demo of Vectara's Capabilities

22:00 Conclusion and Final Thoughts
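The retrieve-then-generate flow discussed in the episode can be sketched in a few lines of plain Python. This is a minimal illustration only, not Vectara's implementation: a toy bag-of-words retriever stands in for a real vector database, and the "generation" step simply assembles a prompt that instructs the model to answer only from the retrieved context, which is the grounding idea behind reducing hallucinations with RAG. All function names and the sample corpus are made up for illustration.

```python
# Minimal RAG sketch: retrieve relevant passages, then build a grounded prompt.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that grounds the LLM in the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below; say 'I don't know' otherwise.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )

corpus = [
    "Vectara offers RAG as a managed service.",
    "Fine-tuning adjusts model weights on new data.",
    "Retrieval grounds LLM answers in source documents.",
]
question = "How does retrieval reduce hallucinations?"
passages = retrieve(question, corpus)
prompt = build_prompt(question, passages)
```

In a production pipeline the toy embedder would be replaced by a learned embedding model and an approximate-nearest-neighbor index (the role a vector database plays), and the prompt would be sent to an LLM; the structure of the flow stays the same.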

Working with AI
Conversations with leading technologists, venture capitalists, specialists, and game-changing companies.