
Small Doges is under construction, let's develop together! 🐕🐕🐕

English | 简体中文

small-doge

News: 🎉🎉🎉 We now support the full training process for the pre-trained Doge-Base, the instruction fine-tuned Doge-Instruct, and the reasoning fine-tuned Doge-R1; please refer to the guide!

  • This project aims to train a series of dynamic and fast small models from scratch, with the fastest training taking only 3 hours! You can train a tiny language model Doge-20M in just 13M! 🚀
  • The small doge series is extremely lightweight, with the smallest version about $\frac{1}{7800}$ the size of GPT-3, and it aims to enable fast inference and even training on an ordinary consumer GPU (a rough size check follows this list). 🏎️
  • We provide full-stage code for dataset preprocessing, pre-training, supervised fine-tuning, reinforcement learning preference alignment, visual multimodal VLM (under development), and reasoning fine-tuning R1 (under development). 🧪
  • Standing on the shoulders of giants lets us see further; we hope the small doge series of small models can give researchers more ideas and contribute to the road toward embodied Artificial General Intelligence. 🤖
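For a rough size check, assuming GPT-3's 175B parameters: $175\text{B} / 7800 \approx 22\text{M}$, which is on the scale of the smallest model, Doge-20M.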

[!TIP] We hope to use open-source tools and frameworks as much as possible to simplify the process from data processing to model training, so that beginners can easily understand and use them. 🤗

[Demo: Doge-60M-Instruct running fast inference in a Streamlit web UI on an 11th-gen Intel i7 laptop CPU]

About

This project aims to develop a series of dynamic and fast small models to promote their application in the field of embodied intelligence, especially in resource-constrained environments where real-time responses are required, and to promote practical applications in downstream fields.

[!TIP] As of 2025-02-20: the small doge series has completed pre-training of 3 models; even the smallest, at only 20M, is already capable of smooth conversation!

| Model | tokens | max_train_steps | batch_size | learning_rate | scheduler | warmup_ratio | decay_ratio | weight_decay | min_lr_rate |
|-------|--------|-----------------|------------|---------------|-----------|--------------|-------------|--------------|-------------|
| Doge-20M | 4B | 8,000 | 256 | 8e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |
| Doge-60M | 16B | 16,000 | 512 | 6e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |
| Doge-160M | 32B | 24,000 | 768 | 4e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |

The following model is currently in pre-training; researchers with spare compute are welcome to help (a poor man's cry for help)! 🙏

| Model | tokens | max_train_steps | batch_size | learning_rate | scheduler | warmup_ratio | decay_ratio | weight_decay | min_lr_rate |
|-------|--------|-----------------|------------|---------------|-----------|--------------|-------------|--------------|-------------|
| Doge-320M | 64B | 32,000 | 1024 | 2e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |
[Doge architecture diagram]

As shown in the figure, the sequence transformation part of the Doge architecture uses Dynamic Mask Attention, which can be understood as self-attention with a value-state-dependent dynamic mask during training and as a state space model without past-state decay during inference; this addresses the tendency of existing Transformers and SSMs to get lost in long text. The state transformation part uses Cross Domain Mixture of Experts, which combines dense linear layers with sparse embedding layers; additional sparse parameters can be added to continue training from a dense weight checkpoint without retraining the entire model, reducing the cost of continuously iterating on the model. In addition, Doge uses RMSNorm and residual connections with learnable parameters to adapt the gradient range of deep models.

Dynamic Mask Attention Module

[Dynamic Mask Attention diagram]
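The gist can be illustrated with a minimal PyTorch sketch. This is only a toy interpretation of the idea described above, not the official implementation; the module name, shapes, and the log-sigmoid bias are assumptions made for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDynamicMaskAttention(nn.Module):
    """Toy illustration: causal self-attention whose logits receive a learned,
    value-state-dependent bias, so uninformative positions are suppressed."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.h, self.d = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.dt = nn.Linear(self.d, 1)   # scores each key position from its value state
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, T, self.h, self.d).transpose(1, 2) for t in (q, k, v))
        logits = q @ k.transpose(-2, -1) / self.d**0.5                      # (B, h, T, T)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        logits = logits.masked_fill(causal, float("-inf"))
        # "dynamic mask": an additive bias computed from the key positions' value states
        bias = F.logsigmoid(self.dt(v)).transpose(-1, -2)                   # (B, h, 1, T)
        attn = F.softmax(logits + bias, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, self.h * self.d)
        return self.out(y)


x = torch.randn(2, 16, 256)
print(ToyDynamicMaskAttention(256, 8)(x).shape)   # torch.Size([2, 16, 256])
```

Here the learned bias plays the role of the dynamic mask: positions whose value states score low are down-weighted for every query, while the causal structure is kept.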

Cross Domain Mixture of Experts Module

[Cross Domain Mixture of Experts diagram]
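The cross-domain mixture idea, a dense MLP path augmented by a sparse bank of embedding "experts" retrieved by similarity, can be sketched the same way; again this is a toy under stated assumptions, not the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCrossDomainMoE(nn.Module):
    """Toy illustration: the output of a dense MLP is augmented with a sparse
    mixture of embedding "experts" selected by top-k similarity."""

    def __init__(self, dim: int, hidden: int, num_experts: int = 1024, top_k: int = 8):
        super().__init__()
        self.dense = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.keys = nn.Parameter(torch.randn(num_experts, dim) * 0.02)   # expert retrieval keys
        self.values = nn.Embedding(num_experts, dim)                     # sparse expert embeddings
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:                  # x: (B, T, dim)
        scores = x @ self.keys.t()                                        # (B, T, num_experts)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1).unsqueeze(-1)             # (B, T, k, 1)
        sparse = (weights * self.values(top_idx)).sum(dim=-2)             # (B, T, dim)
        return self.dense(x) + sparse


x = torch.randn(2, 16, 256)
print(ToyCrossDomainMoE(256, 1024)(x).shape)   # torch.Size([2, 16, 256])
```

Because the expert keys and embeddings sit outside the dense path, more sparse parameters can in principle be appended and trained from a dense checkpoint, which is the property highlighted above.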

Requirements

Our codebase requires the following environment if you need to pre-train or fine-tune:

  • Windows or Linux
  • NVIDIA GPU
  • Python 3.10+
  • PyTorch 2.0+
  • CUDA 11.8+

We highly recommend that you install the latest version of PyTorch and CUDA for optimal performance.

Of course, you can also use the open-source Docker PyTorch image to avoid the hassle of configuring the environment.

```bash
docker pull nvcr.io/nvidia/pytorch:24.12-py3
docker run --privileged --gpus all -it --name PyTorch --shm-size=32g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 -v <your code path>:/workspace -v <your datasets path>:/workspace/Doge/datasets nvcr.io/nvidia/pytorch:24.12-py3
```
  • pip install transformers: The core framework for all subsequent work.
  • pip install datasets sentencepiece boto3: Used to download and process datasets.
  • pip install accelerate: Used for distributed training.
  • pip install trl: Used for fine-tuning with reinforcement learning.
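If you prefer, the extra packages listed above can be installed in a single command:

```bash
pip install transformers datasets sentencepiece boto3 accelerate trl
```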

Installation

```bash
git clone https://github.com/SmallDoges/small-doge.git
cd small-doge
pip install -e .
```

Quick Start

We have written a notebook and a training guide to demonstrate the entire process of dataset processing, model training, and model evaluation. You can also use the released models directly. If you are interested, please read the notebook or the training guide in detail, as they contain the specific steps and details!
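As a quick sanity check, a released checkpoint can be loaded with the transformers library roughly as follows. The repository id below is an assumption; use the id shown on the model card of the checkpoint you want, and `trust_remote_code=True` may be unnecessary on transformers versions that already ship the Doge architecture:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-20M-Instruct"   # hypothetical hub id, adjust to the release you use
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "Hi, who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```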

Models Released

Doge-CheckPoint

[wsd_scheduler learning rate curve]

Doge uses wsd_scheduler as the training scheduler, which divides the learning rate into three stages: warmup, stable, and decay. It allows us to continue training on any new dataset from any checkpoint in the stable stage without training spikes.
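The three-stage shape can be sketched as a plain Python function; this is a simplification, and the exact decay curve used by the training scripts may differ:

```python
def wsd_lr(step: int, max_lr: float, warmup_steps: int, stable_steps: int,
           decay_steps: int, min_lr_rate: float = 0.0) -> float:
    """Warmup-stable-decay: linear warmup, constant plateau, then linear decay
    down to min_lr_rate * max_lr."""
    if step < warmup_steps:                          # warmup stage
        return max_lr * step / max(1, warmup_steps)
    if step < warmup_steps + stable_steps:           # stable stage (resume from any checkpoint here)
        return max_lr
    progress = min(1.0, (step - warmup_steps - stable_steps) / max(1, decay_steps))
    return max_lr * (1.0 - progress * (1.0 - min_lr_rate))   # decay stage

# Example with Doge-20M's settings: 8,000 total steps, 0.1 warmup ratio, 0.1 decay ratio
print(wsd_lr(400, 8e-3, 800, 6400, 800))    # mid-warmup  -> 4e-3
print(wsd_lr(4000, 8e-3, 800, 6400, 800))   # stable stage -> 8e-3
```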

Here are the initial learning rates required to continue training at each checkpoint:

| Model | Learning Rate | Schedule | Warmup Steps | Stable Steps |
|-------|---------------|----------|--------------|--------------|
| Doge-20M | 8e-3 | wsd_scheduler | 800 | 6400 |
| Doge-60M | 6e-3 | wsd_scheduler | 1600 | 12800 |
| Doge-160M | 4e-3 | wsd_scheduler | 2400 | 19200 |
| Doge-320M | 2e-3 | wsd_scheduler | 3200 | 25600 |

Doge-Base

Pre-Training:

| Model | Training Data | Steps | Content Length | Tokens | LR | Batch Size | Precision | RTX 4090 GPU hours |
|-------|---------------|-------|----------------|--------|----|------------|-----------|--------------------|
| Doge-20M | HuggingFaceTB/smollm-corpus | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 | 14 |
| Doge-60M | HuggingFaceTB/smollm-corpus | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 | 128 |
| Doge-160M | HuggingFaceTB/smollm-corpus | 24k | 2048 | 32B | 4e-3 | 1.5M | bfloat16 | 522 |

Evaluation:

| Model | MMLU | TriviaQA | ARC | PIQA | HellaSwag | OBQA | Winogrande | tokens / s on i7-11 CPU |
|-------|------|----------|-----|------|-----------|------|------------|-------------------------|
| Doge-20M | 25.4 | 0.03 | 29.8 | 58.4 | 27.3 | 25.6 | 50.2 | 142 |
| Doge-60M | 26.4 | 0.2 | 37.9 | 61.4 | 31.5 | 28.0 | 50.8 | 62 |
| Doge-160M | 29.2 | 4.8 | 44.4 | 66.3 | 38.7 | 34.4 | 52.2 | 28 |

Doge-Instruct

SFT:

| Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
|-------|---------------|--------|----------------|----|------------|-----------|
| Doge-20M-Instruct-SFT | HuggingFaceTB/smoltalk | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
| Doge-60M-Instruct-SFT | HuggingFaceTB/smoltalk | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |

DPO:

| Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
|-------|---------------|--------|----------------|----|------------|-----------|
| Doge-20M-Instruct | HuggingFaceH4/ultrafeedback_binarized | 2 | 1024 | 8e-5 | 0.125M | bfloat16 |
| Doge-60M-Instruct | HuggingFaceH4/ultrafeedback_binarized | 2 | 1024 | 6e-5 | 0.125M | bfloat16 |

Evaluation:

| Model | IFEval (Prompt Strict Acc) | MMLU | BBH | ARC | PIQA | HellaSwag | tokens / s on i7-11 CPU |
|-------|----------------------------|------|-----|-----|------|-----------|-------------------------|
| Doge-20M-Instruct | 7.3 | 26.3 | 18.3 | 29.2 | 57.8 | 27.8 | 142 |
| Doge-60M-Instruct | 7.4 | 27.5 | 27.7 | 37.5 | 61.4 | 32.1 | 62 |
| Doge-160M-Instruct | 16.8 | 29.7 | 29.1 | 42.8 | 64.1 | 37.1 | 28 |

Training Environment:

  • Image: nvcr.io/nvidia/pytorch:24.12-py3
  • Hardware: 1x NVIDIA RTX 4090
  • Software: Transformers, TRL

Expectations

[!IMPORTANT]

  • If you find this project helpful, please consider giving it a star ⭐!

  • Due to time and expertise constraints, there may be omissions in the project. Feel free to submit your insights through Issues or PRs to help improve it; your support is the driving force behind the project's continuous progress! 😊

  • One person can go fast, but a group of people can go further. If you have already trained a new small-doge model, feel free to share your model weights, training recipes, evaluation results, and other relevant information in Discussions or Issues. It can be a new small-doge version for a specific downstream task or vertical field, such as sentiment recognition, medical, psychological, financial, or legal Q&A, or an expanded training run exploring longer sequences, larger parameter counts, or larger datasets. Your sharing will greatly promote the development of the community! 🚀🚀🚀

Star History

Star History Chart

Citation

If you use this codebase, or otherwise find our work valuable, please cite our paper:

```bibtex
@misc{shi2024wonderfulmatrices,
      title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture},
      author={Jingze Shi and Bingheng Wu},
      year={2024},
      eprint={2412.11834},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2412.11834},
}
```