Phi-3 Mini 4K

May 6, 2024

The Phi-3-Mini-4K-Instruct is a compact, advanced AI model with 3.8B parameters, designed for English-language commercial and research applications. It excels in reasoning, language understanding, and logic-heavy tasks within its 4K-token context window. The model is integrated with the latest transformers library and uses a tokenizer with a vocabulary of 32,064 tokens, facilitating diverse AI-powered features. Phi-3-Mini-4K-Instruct is available in HuggingChat and as an ONNX model for cross-platform compatibility.

The Phi-3-Mini-4K-Instruct is a cutting-edge language model designed for both commercial and research applications. It is a part of the Phi-3 series, and this particular variant boasts 3.8 billion parameters. The model is trained on the expansive Phi-3 datasets, which include synthetic data and high-quality, filtered web data. Its training emphasizes reasoning density and quality, making it adept at handling complex language tasks.
This model has undergone rigorous post-training processes, including supervised fine-tuning and direct preference optimization, to ensure it follows instructions accurately and maintains safety standards. When benchmarked, the Phi-3 Mini-4K-Instruct demonstrates superior performance in areas such as common sense, language understanding, math, code, and logical reasoning, especially when compared to other models with fewer than 13 billion parameters.
Developers should note that while the model is versatile, it is not tailored for all possible use cases. It is crucial to evaluate the model for accuracy, safety, and fairness within the specific context of its application, particularly in high-risk scenarios. Additionally, developers must comply with relevant laws and regulations, including those related to privacy and trade compliance.
The Phi-3 Mini-4K-Instruct is integrated into the development version of the transformers library and is also available on HuggingChat. Its tokenizer supports a vocabulary size of up to 32,064 tokens, and the model is optimized for chat-format prompts. The model is licensed under the MIT license; any Microsoft trademarks or logos associated with it must be used in accordance with Microsoft's Trademark & Brand Guidelines.
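To make the transformers integration and chat-prompt format above concrete, here is a minimal sketch of chat-format inference. The Hugging Face model id microsoft/Phi-3-mini-4k-instruct and the generation settings are common-usage assumptions, not values taken from this page:

```python
# Minimal sketch: chat-format inference for Phi-3 Mini with transformers.
# Assumed model id: "microsoft/Phi-3-mini-4k-instruct"; generation settings
# below are illustrative defaults, not official recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # pick bf16/fp16 automatically where supported
    device_map="auto",       # spread layers across available GPUs
    trust_remote_code=True,  # needed while the model ships custom code
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# apply_chat_template renders the model's expected chat format
# from plain role/content messages.
messages = [{"role": "user", "content": "Explain the Pythagorean theorem briefly."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)
```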
For those looking to implement the model, it is compatible with multi-GPU setups and runs on GPUs such as the NVIDIA A100, A6000, and H100 (the types required by the default flash-attention path). It also supports ONNX Runtime across various platforms and hardware, ensuring broad accessibility and optimization for different devices.
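For the ONNX path, a rough sketch using the onnxruntime-genai package might look like the following. The package's early API (Model, Tokenizer, GeneratorParams, Generator), the hand-written chat prompt, and the local model path are all assumptions and may differ across library versions:

```python
# Rough sketch of ONNX inference via the onnxruntime-genai package.
# The API shown here reflects the package's early releases and may have
# changed; the model path is a placeholder for a locally downloaded
# Phi-3 ONNX build.
import onnxruntime_genai as og

model = og.Model("path/to/phi-3-mini-4k-instruct-onnx")
tokenizer = og.Tokenizer(model)

# Phi-3 chat format, written out by hand for the ONNX path.
prompt = "<|user|>\nWhat is the capital of France?<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=200)
params.input_ids = tokenizer.encode(prompt)

# Greedy token-by-token generation loop.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```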

Current · MIT License · Instruction-tuned

Comparison 

Sourced on: May 5, 2024

The Phi-3-Mini-4K-Instruct model, despite having only 3.8 billion parameters, demonstrates remarkable performance across various benchmarks, often outperforming larger models. Here are the key highlights:

1) MMLU (5-Shot): Phi-3-Mini-4K-Instruct scores 68.8, close behind GPT-3.5’s 71.4 despite the latter being a vastly larger model.
2) HellaSwag (5-Shot): It achieves 76.7, which is competitive with larger models like GPT-3.5’s 78.8.
3) GSM-8K (0-Shot; CoT): The model excels with 82.5, far ahead of Mistral’s 46.4 and even edging past GPT-3.5’s 78.1.
4) TriviaQA (5-Shot): At 64.0, this is one of the model’s weaker results, trailing larger models such as Gemma 7b’s 75.2 and Mixtral 8x7b’s 82.2, a gap consistent with its much smaller size.

Overall, Phi-3-Mini-4K-Instruct’s performance is impressive, especially considering its smaller size relative to other models. It showcases the efficiency of its design and training, making it a robust choice for various applications.

| Benchmark | Phi-3-Mini-4K-In 3.8b | Phi-3-Small 7b (preview) | Phi-3-Medium 14b (preview) | Phi-2 2.7b | Mistral 7b | Gemma 7b | Llama-3-In 8b | Mixtral 8x7b | GPT-3.5 version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU (5-Shot) | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag (5-Shot) | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI (7-Shot) | 52.8 | 55 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K (0-Shot; CoT) | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA (2-Shot) | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50 | 60.5 | 62.2 | 63.4 |
| AGIEval (0-Shot) | 37.5 | 45 | 49.7 | 29.8 | 35.1 | 42.1 | 42 | 45.2 | 48.4 |
| TriviaQA (5-Shot) | 64 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C (10-Shot) | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E (10-Shot) | 94.6 | 97.1 | 98 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA (5-Shot) | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86 | 86.6 |
| SociQA (5-Shot) | 76.6 | 79 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard (0-Shot) | 71.7 | 75 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande (5-Shot) | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62 | 68.8 |
| OpenBookQA (10-Shot) | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86 |
| BoolQ (0-Shot) | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA (10-Shot) | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA (10-Shot) | 65 | 68.1 | 74.8 | -- | 52.1 | 53 | 63.2 | 60.1 | 85.8 |
| HumanEval (0-Shot) | 59.1 | 59.1 | 54.7 | 47 | 28 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP (3-Shot) | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |

Values marked -- were not reported for that model.

Team 

The team behind this Large Language Model (LLM) is Microsoft, a verified organization with a strong presence in AI and ML research. The team, comprising 1,405 members, has contributed to various projects, including the development of state-of-the-art models and frameworks. One of their notable contributions is the SpeechT5 framework, which addresses multiple audio-related tasks through a unified seq2seq model complemented by modal-specific pre/post-nets. Another significant project is TAPEX, a pre-training model designed for table-based question answering and fact verification, showcasing their expertise in handling structured data. The team’s work reflects a commitment to advancing the field of machine learning, particularly in natural language processing and speech synthesis, as evidenced by their extensive research and model updates. Their collaborative efforts have resulted in a collection of models and datasets that serve as valuable resources for the broader AI community.

Resources

List of resources related to this product.