A.I. Updates II [Local Suites and LLMs]
So...I downloaded NVIDIA ChatRTX and Ollama.
ChatRTX is not for conversation; its context length is only 1,000 tokens. The suite you download is the only local part; the LLM itself requires internet access.
Ollama is actually local in both suite and LLM. But when downloading LLMs it will only let you choose the smallest-parameter models, possibly to encourage use of their cloud models.
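To see what I mean by local, here's a quick sanity check from PowerShell (the model name below is just an example; any model you have installed behaves the same, and once the weights are downloaded, inference needs no internet at all):

    ollama list           # lists the models already stored on this machine
    ollama run llama3.2   # chats with an installed model, fully offline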
Good news as of today, though. Looks like the A.I. masterminds were actually listening to community feedback:
Intel Core Ultra Series 3 Launch Event Livestream | Intel
Posted on: Jan 6, 2026
Commented on Jan 6, 2026
"Ollama is actually local in suite and LLM. But when downloading LLMs it will only let you choose the smallest parameter models. Possibly to encourage their cloud models. " Go to the model section of Ollama, find the one you want and execute the pull code in powershell, it will download and be available in Ollama