Local AI
Latest about Local AI

Why you NEED to use LM Studio over Ollama for local AI if you use AMD or Intel integrated graphics
By Richard Devine
AI I've been playing with Ollama a lot recently, but it falls short in one key area, which has sent me back to LM Studio, with great success and no need for a dedicated GPU.

We gave AI a kid’s exam. It tried its best.
By Richard Devine
AI OpenAI has finally released AI models folks can run at home on their local machines, so I decided to see whether they could beat my own kid at a test designed for children.

AI on a 7‑year‑old laptop. Because why not.
By Richard Devine
AI While most of my local AI work takes place on a fairly well-equipped desktop PC, I was curious to see what mileage I could get from an old laptop I have lying around, and the results surprised me.

Even With a Beastly GPU, Ollama Can Lag — Here’s What You’re Missing
By Richard Devine
AI Ollama is one of the easiest ways to integrate local LLMs into your daily workflow, but you might be leaving performance on the table by making one crucial mistake, as I was.