11-30, 15:15–15:45 (Europe/Amsterdam), Auditorium
Large language models are all the rage, but building scalable applications with them can be costly and difficult. In this talk, we give you a glimpse of the emerging ecosystem of LLM apps beyond just ChatGPT. In particular, we focus on OSS alternatives, like the Llama model family, and show you how to use them in your own projects. We discuss how to leverage services like Anyscale Endpoints in Python to get LLM apps up and running quickly. To demonstrate this, we showcase two applications we built ourselves, namely a GitHub bot that helps you with your pull requests, and an "Ask AI" chatbot that we integrated into our project documentation.
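As a taste of what the talk covers: Anyscale Endpoints exposes an OpenAI-compatible chat completions API, so querying an open-source Llama model from Python takes only a few lines. This is a minimal sketch using just the standard library; the base URL, model name, and `ANYSCALE_API_KEY` environment variable are assumptions for illustration, not official values.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible base URL for Anyscale Endpoints.
API_BASE = "https://api.endpoints.anyscale.com/v1"


def build_chat_request(prompt, model="meta-llama/Llama-2-70b-chat-hf"):
    """Build an OpenAI-style chat completion payload for the given prompt."""
    return {
        "model": model,  # assumed model identifier, for illustration
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_llama(prompt):
    """Send one chat completion request and return the model's reply text."""
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['ANYSCALE_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In practice you would use an SDK rather than raw `urllib`, but the point stands: because the API shape is OpenAI-compatible, swapping a proprietary model for an open one can be a one-line configuration change.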
ChatGPT has been trained on Ray, a popular distributed Python framework. Many other companies building their own foundation models, such as Cohere and Eleuther, have used Ray in their training process as well. While this talk focuses on applications built on top of LLMs, we emphasise the role Ray can play in efficient parallelisation and distribution, even at inference time. We briefly touch on our new book "Learning Ray" (O'Reilly), for those interested in learning more about Ray as a foundational technology.
No previous knowledge expected
Hi there, I'm Max 👋,
I'm a Data Science & Engineering practitioner from Hamburg, Germany. I'm an avid open source contributor, author of machine learning & technology books, speaker and Coursera instructor.
I specialize in Deep Learning and its applications and can build machine learning solutions and data products from first prototype to production. As a Ray contributor, DL4J core developer, Keras contributor, Hyperopt maintainer and author of a range of open source libraries, I have a distinct view on ML engineering and the data science ecosystem.