LLM

Smaug-72B-v0.1

Smaug-72B-v0.1, a top-performing open language model, is fine-tuned with the DPO-Positive (DPOP) technique and is well suited to a broad range of AI and machine learning tasks.

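For context, here is a minimal PyTorch sketch of the DPO-Positive (DPOP) objective used to fine-tune Smaug-72B-v0.1: standard DPO plus a penalty that stops the policy from lowering the likelihood of the preferred completion. The function name, tensor names, and hyperparameter values (beta, lam) are illustrative assumptions, not the authors' code.

```python
# A minimal DPOP sketch, assuming per-sequence log-probabilities have already
# been computed for the policy and the frozen reference model.
import torch
import torch.nn.functional as F

def dpop_loss(policy_chosen_logps: torch.Tensor,
              policy_rejected_logps: torch.Tensor,
              ref_chosen_logps: torch.Tensor,
              ref_rejected_logps: torch.Tensor,
              beta: float = 0.1,
              lam: float = 50.0) -> torch.Tensor:
    """DPO-Positive: DPO preference loss plus a positivity penalty (illustrative values)."""
    # Log-ratios of policy vs. reference, as in standard DPO.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps

    # Penalty is positive only when the policy assigns *less* probability to the
    # chosen completion than the reference model does.
    penalty = torch.clamp(ref_chosen_logps - policy_chosen_logps, min=0.0)

    # Maximize the preference margin while discouraging any drop in chosen likelihood.
    logits = beta * (chosen_logratio - rejected_logratio - lam * penalty)
    return -F.logsigmoid(logits).mean()
```

The penalty term is what separates DPOP from plain DPO: it only activates when preference optimization starts pushing down the probability of the chosen answer itself.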

Mixtral 8x7B

Unveiling Mixtral 8x7B: a sparse Mixture-of-Experts (MoE) language model with 47B total parameters and a 32k-token context window that outperforms Llama 2 70B and matches or exceeds GPT-3.5 on most benchmarks.

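For context, here is a minimal PyTorch sketch of the kind of sparse Mixture-of-Experts feed-forward block the Mixtral 8x7B architecture is built around: a router picks the top 2 of 8 expert MLPs per token, so only a fraction of the total parameters is active for any given token. Class names, shapes, and the expert MLP layout are illustrative assumptions, not Mistral's implementation.

```python
# A minimal sparse MoE layer sketch with top-k routing (illustrative, not Mistral's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, dim: int, hidden: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Each token is routed to its top-k experts.
        gate_logits = self.router(x)                             # (tokens, n_experts)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)  # (tokens, top_k)
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    # Weighted contribution of expert e for the tokens routed to it.
                    out[mask] += weights[mask, k : k + 1] * expert(x[mask])
        return out
```

Because each token touches only 2 of the 8 experts, roughly 13B of the 47B total parameters are active per token, which is why Mixtral runs far cheaper at inference than its total parameter count suggests.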