Discover how to elevate text2cypher: generating better Cypher queries with Large Language Models (LLMs). We'll explore the nuances of in-context learning, including few-shot learning and dynamic prompting with LangChain, then dive into fine-tuning techniques such as PEFT and LoRA, walking through dataset preparation and fine-tuning with Unsloth. This session is ideal for anyone refining LLMs for precise and efficient data retrieval in Neo4j.
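As a taste of the in-context learning side, here is a minimal sketch of few-shot prompting for text2cypher with LangChain. The example questions, queries, and movie-graph schema are illustrative placeholders, not the session's actual examples; swapping the static examples list for an example selector is one way to get the dynamic prompting discussed in the talk.

```python
# Minimal few-shot prompting sketch for text2cypher with LangChain.
# The examples and schema below are hypothetical, not from the session.
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

examples = [
    {
        "question": "How many movies did Tom Hanks act in?",
        "cypher": "MATCH (p:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)",
    },
    {
        "question": "Who directed The Matrix?",
        "cypher": "MATCH (p:Person)-[:DIRECTED]->(m:Movie {title: 'The Matrix'}) RETURN p.name",
    },
]

# How each example is rendered inside the prompt.
example_prompt = PromptTemplate.from_template(
    "Question: {question}\nCypher: {cypher}"
)

prompt = FewShotPromptTemplate(
    examples=examples,  # for dynamic prompting, pass an example_selector instead
    example_prompt=example_prompt,
    prefix="Translate the question into a Cypher query for the movie graph.",
    suffix="Question: {question}\nCypher:",
    input_variables=["question"],
)

# The formatted string is what you would send to the LLM.
print(prompt.format(question="List three movies released after 2000."))
```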
Guest: Geraldus Wilsen
LinkedIn: https://www.linkedin.com/in/geraldus-wilsen/
@geralduswilsen
GitHub: https://github.com/projectwilsen/neo4j_live
Tomaz's GitHub: https://github.com/neo4j-labs/text2cypher/tree/main
Blog: https://www.linkedin.com/posts/geraldus-wilsen_how-to-fine-tune-llms-using-unsloth-text2cypher-activity-7194930369744232448-YeC9/
Few-Shot Prompting: https://blog.langchain.dev/few-shot-prompting-to-improve-tool-calling-performance/
Llama 3.1 405B: https://build.nvidia.com/meta/llama-3_1-405b-instruct
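And for the fine-tuning side, a minimal sketch of LoRA fine-tuning with Unsloth, assuming a prepared text2cypher instruction dataset serialized into a "text" column. The base model name, data file, and hyperparameters are assumptions for illustration, following Unsloth's standard recipe (with trl's SFTTrainer) rather than the exact setup shown in the session.

```python
# Minimal LoRA fine-tuning sketch with Unsloth; model name, file path,
# and hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized base model so it fits on a single consumer GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (PEFT): only these low-rank matrices are trained,
# the base weights stay frozen.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical JSONL file of formatted question/Cypher training examples.
dataset = load_dataset("json", data_files="text2cypher_train.jsonl")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding the full prompt+completion
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```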
0:00 Introduction
1:27 Importance of understanding Cypher
3:20 Introduction of the guest, Geraldus Wilsen
7:01 Inspiration for the talk
9:02 Overview of text2cypher
10:56 Explanation of in-context learning and the role of few-shot learning
12:33 Enhancing text2cypher with in-context learning and fine-tuning
17:50 Q&A break
23:40 text2cypher Demo
#neo4j #graphdatabase #genai #llm #graphrag