Local AI
Latest about Local AI

Running Ollama on WSL vs. Windows: how does it compare?
By Richard Devine published
AI On Windows 11, you can use Ollama either natively or through WSL, with the latter being potentially important for developers. The good news is that it works well.

What you gain (and lose) with a mini PC like the A9 Max
By Richard Devine published
Review The Geekom A9 Max mini PC is at the pricier end of the spectrum, but as an easy replacement for a desktop PC, even for gaming, it makes a lot of sense.

NVIDIA’s “robot brain” makes iRobot feel one step closer to reality
By Sean Endicott published
AI NVIDIA just announced Jetson Thor, a "robot brain" that powers humanoids, self-driving cars, and smart machines with real-time generative AI.

You don't need to spend a fortune on a GPU to run LLMs in Ollama
By Richard Devine published
AI If you're looking at your PC and wondering what sort of GPU you might need to power local LLMs, the good news is that it doesn't have to be as expensive as you think. Allow me to explain.

Why an older GPU might crush a newer one for AI
By Richard Devine published
AI If you're running LLMs locally on your PC using Ollama, there's one key hardware spec you need to take into consideration. Ignore it, and your performance will tank.

Geekom is getting ready to launch a MAD mini PC that'll take it to the Mac Studio
By Richard Devine published
Hardware Geekom's next mini PC, the A9 Mega, is arguably the first true Windows 11-powered competitor to the Mac Studio. Partly because it looks like one, and partly because of the earth-shattering performance inside.

NVIDIA's Project G-Assist now runs on more RTX GPUs, including laptops
By Cale Hunt published
PC Gaming NVIDIA's Project G-Assist, an AI gaming assistant, is set to receive its first major update to improve performance and to get it running on a wider range of hardware.

How to run AI LLMs locally on your PC with Ollama
By Richard Devine last updated
AI If you want to install and use an AI LLM locally on your PC, one of the easiest ways to do it is with Ollama. Here's how to get up and running.

I tried to replicate this Copilot feature with local AI, but it's just not the same
By Richard Devine published
AI Using Copilot to summarize web articles is one of my favorite features. I tried to replicate it using an on-device AI model and it just isn't the same.

Why you NEED to use LM Studio over Ollama for local AI if you use AMD or Intel integrated graphics
By Richard Devine published
AI I've been playing with Ollama a lot recently, but it falls short in one key area, which sent me back to LM Studio, with great success and no need for a dedicated GPU.
