LLM

Mixtral 8x7B

Unveiling Mixtral 8x7B: a sparse Mixture-of-Experts language model with 47B total parameters and a 32k-token context window that surpasses Llama 2 70B and GPT-3.5 on most benchmarks.


Phi-2

Microsoft’s Phi-2, a 2.7B-parameter language model, matches or outperforms models up to 25x its size, showcasing the impact of careful model scaling and training-data curation.
