Don't ask a small LLM for precise, fine-grained factual minutiae.
Alternatively, ask yourself how plausible it sounds that all the facts in the world could be compressed into 8k parameters while remaining intact and fine-grained. If your answer is that it sounds pretty impossible... well it is.
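The arithmetic behind that intuition is easy to check. A minimal back-of-envelope sketch, assuming 8,192 parameters at 16 bits each and an assumed rough corpus size of ~20 GB of plain text (both figures are illustrative, not measurements from the thread):

```python
# Back-of-envelope storage check for the "8k parameters" claim above.
# Assumptions (not from the original post): 16-bit parameters, ~20 GB corpus.
params = 8_192
bits_per_param = 16
capacity_bytes = params * bits_per_param // 8  # raw storage in the weights
print(capacity_bytes)  # → 16384 (about 16 KB, a few pages of text)

corpus_bytes = 20 * 10**9  # assumed size of a general factual corpus
ratio = corpus_bytes // capacity_bytes  # lossless compression ratio required
print(ratio)  # → 1220703 (≈ a 1.2-million-to-one lossless ratio)
```

Even granting every bit to fact storage, the weights would need a roughly million-to-one lossless compression of the corpus to keep all facts intact, which is exactly the impossibility the argument points at.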
Oh, I saw it. You still have a fundamentally flawed understanding of how LLMs work.
Model size doesn't matter here, since tiny models can use the Internet to fetch factual information.
But you treat them as accurate repositories of knowledge, even though that's physically impossible unless lossless infinite compression exists (it doesn't, can't, and won't).