Peter Smink
Team Rockstars IT/ASML
More than 37 years of experience as a software/system developer.
Framework Desktop and IncusOS: a perfect combination for running LLMs locally
Byte size (BEGINNER level)
Room 4
If you are seriously developing AI applications, a Framework Desktop server running IncusOS is a must-have.
I will discuss the pros and cons of such a setup, walk through the setup process, and cover the issues I ran into while setting it up and using it.
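As a flavour of the walkthrough, a minimal sketch of what bringing up an LLM guest on an Incus server can look like. The instance name, image, and device name are illustrative examples, not the exact configuration from the talk, and the details depend on your IncusOS version and hardware:

```shell
# Launch an Ubuntu VM on the Incus server (name and image are examples)
incus launch images:ubuntu/24.04 llm-host --vm

# Pass the host GPU through to the instance so models run accelerated
incus config device add llm-host gpu0 gpu

# Open a shell in the instance to install an LLM runtime of your choice
incus exec llm-host -- bash
```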
I will give a demo showing how you can run your own AI application on a laptop that uses AI models running on the Incus server.
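The demo follows a common pattern: the application on the laptop talks over HTTP to an LLM endpoint exposed by the Incus guest. A minimal sketch in Python, assuming an Ollama-style chat API; the server address, port, and model name are placeholders for your own setup:

```python
import json
import urllib.request

# Hypothetical address of the Incus instance serving the model
INCUS_SERVER = "http://192.168.1.50:11434"

def build_chat_request(prompt, model="llama3"):
    """Build an Ollama-style chat payload (assumed API shape)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask(prompt):
    """POST the prompt to the LLM running on the Incus server."""
    req = urllib.request.Request(
        f"{INCUS_SERVER}/api/chat",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

The laptop application only needs the server's address; everything model-related stays on the Incus host, which is what keeps data private and costs fixed.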
Key takeaways:
- a guide for setting up your own configuration, including the issues you may encounter
- a good impression of how this setup can help your own AI development
- a solution for running LLMs locally for privacy reasons
- a solution for running LLMs locally on a GPU for performance
- a solution for keeping costs under control: if your application uses many MCP tools, the cost per call is high when using public AI services
- a solution for running LLMs that do not fit in graphics cards
- you are a DevOps engineer and just want to see how a modern tool like Incus can be used to run any container or VM
- you want to be eco-friendly and control/reduce your own energy usage for running AI
Target audience:
AI developers, DevOps engineers, and anyone who wants to run LLMs locally in a relatively efficient way.
