Ofer Mendelevitch from Vectara talks about how their platform makes Retrieval-Augmented Generation (RAG) easier for developers. Instead of wrestling with the complexity of standing up vector databases, embedding models, and the rest of the retrieval pipeline, developers can let Vectara handle it and focus on building their apps. He emphasizes Vectara's stance on data privacy: customer data is never used to train or fine-tune models. Ofer also shares that Vectara recently launched its own LLM, Mockingbird, built to improve RAG performance and reduce hallucinations. He touches on the growing interest in "agentic RAG," which gives RAG systems more autonomy, and on the importance of good data governance so companies can control who accesses which documents. Looking ahead, he’s excited about the potential of future AI models and thoughtful about the challenges around AI safety and regulation.
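To give a sense of what "handling it all" looks like from the developer's side, here is a minimal, hypothetical sketch of querying a managed RAG service from Python. The endpoint URL, header names, and payload fields are placeholders for illustration only, not Vectara's actual API; consult the platform's documentation for the real interface.

```python
import requests

# Placeholder endpoint and credential for a hosted RAG service (illustrative only).
RAG_API_URL = "https://api.example-rag-platform.com/v1/query"
API_KEY = "your-api-key"

def ask(question: str, corpus_id: str) -> str:
    """Send a question to a managed RAG service and return the grounded answer.

    Chunking, embedding, vector search, and LLM summarization all happen
    server-side, so the client only sends the query and a corpus identifier.
    """
    response = requests.post(
        RAG_API_URL,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"query": question, "corpus_id": corpus_id, "top_k": 5},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]

if __name__ == "__main__":
    print(ask("What does our refund policy say about digital goods?", "support-docs"))
```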
Neo4j devhub: https://bit.ly/4ghZuIk