Smaug-72B-v0.1
Smaug-72B-v0.1 is a top-performing open language model fine-tuned with DPO-Positive (DPOP), making it well suited to a wide range of AI and machine-learning tasks.
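DPOP extends standard DPO preference tuning with a penalty that keeps the policy from lowering the likelihood of preferred completions below the reference model. Below is a minimal sketch of the published DPOP objective in PyTorch, assuming per-sequence log-probabilities have already been computed; the function name, hyperparameter defaults, and variable names are illustrative assumptions, not Smaug's actual training code.

```python
import torch
import torch.nn.functional as F

def dpop_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), preferred
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), dispreferred
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x)
    beta: float = 0.1,                    # DPO temperature
    lam: float = 50.0,                    # DPOP penalty weight (illustrative value)
) -> torch.Tensor:
    # Standard DPO margin between chosen and rejected log-ratios.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    margin = chosen_ratio - rejected_ratio

    # DPOP penalty: active only when the policy makes the preferred
    # completion *less* likely than the reference model did, a failure
    # mode that plain DPO permits.
    penalty = torch.clamp(ref_chosen_logps - policy_chosen_logps, min=0.0)

    return -F.logsigmoid(beta * (margin - lam * penalty)).mean()
```

The penalty term sits inside the sigmoid, so a policy that degrades preferred completions pays the cost directly in the preference margin rather than through a separate regularizer.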
Introducing Nous-Hermes 2: a groundbreaking AI that surpasses the Mixtral finetune, with superior performance on the GPT4All and AGIEval benchmarks.
Unveiling Mixtral 8x7B: a sparse Mixture-of-Experts (MoE) language model with 47B parameters and a 32k-token context window that surpasses Llama 2 70B and GPT-3.5.
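For context on the "sparse MoE" claim: in a Mixtral-style layer a router activates only the top 2 of 8 expert feed-forward networks per token, which is why only a fraction of the 47B total parameters is exercised for any single token. Here is a minimal PyTorch sketch of top-2 routing; the class name, dimensions, and expert design are illustrative assumptions, not Mixtral's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Toy top-2 sparse MoE layer: 8 experts, 2 active per token."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048, n_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score every expert, keep the top 2 per token.
        scores = self.gate(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(2, dim=-1)    # top-2 routing decisions
        weights = F.softmax(weights, dim=-1)     # renormalize over the chosen pair
        out = torch.zeros_like(x)
        for k in range(2):                       # each of the two selected slots
            for e in range(len(self.experts)):
                mask = idx[:, k] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out

# Usage: y = Top2MoE()(torch.randn(16, 512))
```

Only the selected experts run for each token, so compute per token scales with the two active experts rather than with all eight.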