Forget about building epic castles or mining for diamonds: on /lmg/ you’ll be training Qwen 2.5 on an 18-trillion-token dataset while riding a llama that’s been quantized to b1.58 through finetuning. And let’s not forget Mistral releasing a new 22B model with 128k context and function calling – it’s like trying to navigate a maze while juggling flaming potatoes.
But that’s not all: we’ve got DataGemma with Data Commons retrieval that will have you scratching your head in confusion and laughing uncontrollably at the same time. And if you think you can handle the madness, check out our benchmarks, like the Chatbot Arena, where AIs battle it out, or Censorbench, where things get so censored you’ll question reality itself.
So if you’re ready to embark on a journey of insanity and chaos, join /lmg/ today and see if you can survive the madness of local language models in Minecraft!