---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 412526330
    num_examples: 49916
  download_size: 141156258
  dataset_size: 412526330
---

# LangMap-TheStack-python-100M

Code finetuning dataset for **python**, streamed from [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack).

- **Target tokens**: 100,000,000
- **Tokenizer**: `allenai/OLMo-3-1025-7B`
- **Schema**: `{"text": [...]}` (sanitised source code)
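Selecting examples up to a fixed token budget from a streamed source can be sketched with a simple accumulation loop. The helper below is a hypothetical illustration, not the actual build script: `count_tokens` stands in for the OLMo tokenizer's token counter, and `examples` for the stream yielded by `datasets.load_dataset("bigcode/the-stack", streaming=True)`.

```python
def take_until_budget(examples, count_tokens, target_tokens):
    """Yield examples from a stream until the running token count
    reaches target_tokens, then stop."""
    total = 0
    for ex in examples:
        if total >= target_tokens:
            break
        total += count_tokens(ex["text"])
        yield ex


# Toy demonstration with a whitespace "tokenizer" standing in
# for the real allenai/OLMo-3-1025-7B tokenizer.
docs = [{"text": "a b c"}, {"text": "d e"}, {"text": "f"}]
selected = list(take_until_budget(docs, lambda s: len(s.split()), 4))
# The loop stops once the budget (4 toy tokens) is met or exceeded.
```

In practice the final example may overshoot the budget slightly, which is why a target like 100,000,000 tokens yields an approximate, not exact, dataset size.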