
ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information

This directory contains the ChineseWebText 2.0 dataset together with MDFG-tool, a new tool-chain for constructing large-scale, high-quality Chinese datasets with multi-dimensional and fine-grained information. The ChineseWebText 2.0 code is publicly available on GitHub.

ChineseWebText2.0

  • Dataset Overview

We have released the latest and largest Chinese dataset, ChineseWebText 2.0, which consists of 3.8 TB of data. Each text in the dataset is accompanied by a quality score, domain single-label and multi-label tags, as well as toxicity classification and scores, enabling LLM researchers to select data based on new quality thresholds.

  • Data Example

    {
    "text": "近日,黑龙江省高校校报协会第十四届学术年会暨校报工作交流研讨会在东北农业大学举行。我校10件新闻作品喜获2项一等奖,2项二等奖,6项三等奖……",
    "domain":
       {
          "single_label": "news",
          "multi_label": ["news", "education"]
       },
    "toxicity":
       {
          "label": 0,
          "score": 1.0347155694034882e-05
       },
    "quality_score": 0.96044921875
    }
    
  • "text": [string] Text content of data sample.

  • "single_label": [string] The highest probability label generated by the domain classification model.

  • "multi_label": [list] All labels generated by the domain classification model with probabilities higher than the threshold.

  • "label": [int] Toxicity label generated by toxicity classification models.

  • "score": [flaot] Toxicity score generated by toxicity classification model, samples with scores exceeding 0.99 were categorised as toxic.

  • "quality_score": [float] Quality score generated by the quality evaluation model.

MDFG-tool

Introduction

We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate the text quality using a BERT-based model. This process generates a quality score, and by selecting an appropriate threshold, we can extract high-quality text data that meets our needs. Next, we use FastText for both single-label and multi-label classification of the cleaned data. Meanwhile, we conduct toxicity assessment. The FastText model is used to filter out toxic content and assign toxicity scores to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.
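The stages above can be sketched as a single annotation function. This is a toy illustration, not the authors' implementation: the stubs below stand in for the BERT-based quality model and the FastText domain/toxicity classifiers, and the rule thresholds are invented:

```python
SENSITIVE_WORDS = {"badword"}  # placeholder rule list, not the real lexicon

def coarse_filter(text, min_len=200):
    """Rule-based cleaning: drop overly short texts and texts with sensitive words."""
    if len(text) < min_len:
        return False
    return not any(w in text for w in SENSITIVE_WORDS)

def quality_score(text):
    """Stand-in for the BERT-based quality evaluation model."""
    return 0.95

def domain_labels(text, threshold=0.5):
    """Stand-in for the FastText domain classifier (probabilities are fixed here)."""
    probs = {"news": 0.8, "education": 0.6, "math": 0.1}
    multi = [d for d, p in probs.items() if p >= threshold]
    single = max(probs, key=probs.get)
    return {"single_label": single, "multi_label": multi}

def toxicity(text):
    """Stand-in for the FastText toxicity classifier, using the 0.99 cutoff."""
    score = 1e-5
    return {"label": int(score > 0.99), "score": score}

def annotate(text):
    """Run the sketched pipeline on one text, returning the card's record schema."""
    if not coarse_filter(text):
        return None
    return {
        "text": text,
        "domain": domain_labels(text),
        "toxicity": toxicity(text),
        "quality_score": quality_score(text),
    }
```

The function returns `None` for texts rejected by the coarse filter and otherwise emits a record in the same shape as the Data Example above.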


Figure 1: The pipeline of MDFG-tool.

Data Analysis

Removal Rate for Different Stages

To provide a high-level overview of the preparation and preprocessing stages, the following figure shows the processing workflow and the removal rate of each step. It details the proportion of data removed relative to the previous step and the absolute percentage of the original collected dataset that remains, helping readers track the various processing stages from raw data to the high-quality dataset.

After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, because some sources contain a significant amount of irrelevant and noisy content, a manual sampling analysis is performed in the preparation stage: if irrelevant text accounts for more than 50% of a source, all data from that source is discarded. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68% of the original dataset. In the preprocessing stage, four rule-based steps are applied to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high percentage of noisy characters, such as English, Traditional Chinese characters, or other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we obtain a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this dataset is enriched with fine-grained annotations: a quality score, domain labels, a toxicity score, and a toxicity label.
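The two removal statistics described above (per-step removal ratio versus absolute share of the original data) can be reconciled with a small helper. Only the preparation-stage figure (67.68% retained) comes from the text; the preprocessing-step ratios below are invented placeholders:

```python
def retention_trace(step_removal):
    """Given {step: fraction removed of the data entering that step},
    return the absolute fraction of the original data remaining after each step."""
    remaining = 1.0
    trace = {}
    for step, removed in step_removal.items():
        remaining *= (1.0 - removed)
        trace[step] = remaining
    return trace

steps = {
    "preparation": 0.3232,         # text states 67.68% remains after this stage
    "data_length": 0.05,           # illustrative
    "character_proportion": 0.04,  # illustrative
    "sensitive_words": 0.02,       # illustrative
    "deduplication": 0.06,         # illustrative
}
trace = retention_trace(steps)
```

Multiplying the per-step retention factors in order yields the "absolute percentage of the original data" that Figure 2 reports for each stage.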


Figure 2: The proportion of data removed from the originally collected data in each processing step. The gray bars represent the proportion of data removed in each step relative to the data remaining before that step, while the other colored bars represent the retained data and its proportion relative to the originally collected data.

Data Quality Distribution


Figure 3: The Data Analysis on Quality Evaluation.

Quality Distribution To investigate the quality distribution, we calculate the proportion of data across different quality score ranges in our ChineseWebText 2.0 dataset. Figure 3(a) shows the proportion of data in each quality score interval. The data is primarily concentrated in the mid-range intervals ([0.2, 0.4)), each sub-interval contributing approximately 18%. Additionally, a significant proportion lies within the high-quality interval ([0.9, 1.0)), reflecting the presence of high-quality content in the dataset. In contrast, the lowest interval ([0.1, 0.2)) contains only a minimal fraction, indicating a limited amount of poor-quality data. Note that no samples fall in the range [0, 0.1), so this interval is omitted. This quality distribution provides a valuable reference for LLM researchers, enabling them to select data based on desired quality thresholds.
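The interval proportions in Figure 3(a) amount to a histogram over half-open score bins. A minimal sketch, with made-up sample scores:

```python
def bin_proportions(scores, edges):
    """Fraction of scores falling in each half-open interval [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for s in scores:
        for i in range(len(edges) - 1):
            if edges[i] <= s < edges[i + 1]:
                counts[i] += 1
                break
    n = len(scores)
    return [c / n for c in counts]

edges = [i / 10 for i in range(11)]            # [0.0, 0.1), ..., [0.9, 1.0)
scores = [0.25, 0.31, 0.38, 0.95, 0.97, 0.62]  # illustrative, not dataset values
props = bin_proportions(scores, edges)
```

Applied to the real `quality_score` field, this reproduces the per-interval proportions plotted in Figure 3(a).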

Human Acceptance Evaluation To validate the consistency between quality evaluation and human judgments, Figure 3(b) displays human acceptance rates across different score intervals, showing a clear positive trend: higher scores correlate with higher acceptance rates. Specifically, the highest score interval ([0.5, 1.0)) achieves an acceptance rate exceeding 90%, while the lowest interval ([0.1, 0.2)) still maintains an acceptance rate of 80%. This trend highlights the overall high quality of the data.

In summary, the dataset is primarily concentrated in the mid-quality range, with higher scores strongly correlating to greater human acceptance. This alignment underscores the dataset's potential for high-quality applications, where consistency in human-like quality is essential.

Domain Distribution

To investigate the distribution of our dataset across different domains, in this section, we conduct an in-depth analysis of the data distribution across eleven distinct domains: book, dialogue, education, encyclopedia, finance, law, math, medicine, news, technology, and general. This analysis considers two perspectives: the overall domain distribution and the quality-related domain distribution, providing comprehensive insights into the dataset's composition across different domains.

Overall Domain Distribution

As illustrated in Figure 4, the sample counts and corresponding proportions across the various domains are presented. The Encyclopedia, General, and News domains dominate the dataset, comprising 33.43%, 32.63%, and 28.01% of the data, respectively. In contrast, the Math domain has the smallest share at 0.55%, yet it still includes over 8 million samples. The accompanying bar chart offers a more intuitive visualization of this distribution. This comprehensive domain distribution enables LLM researchers to select suitable subsets, facilitating the enhancement of a model's knowledge and capabilities in specific domains.


Figure 4: Data Distribution Across Different Domains.

Quality-Related Domain Distribution To explore the domain distribution across different quality intervals, we analyze the quality-related domain distribution. Specifically, we calculate the proportion of each domain within every quality interval. The table in Figure 5 provides a detailed breakdown of these proportions. From the results, we observe that the distribution of domain data within each quality interval aligns closely with the overall distribution in the dataset. Based on the proportions in Figure 5, researchers can filter domain-specific data within targeted quality intervals, enabling the extraction of higher-quality domain-specific subsets.


Figure 5: Table of Domain Distribution Across Quality Levels

Data Toxicity Analysis


Figure 6: The Distribution of Toxicity. A threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorised as toxic.

During the training of LLMs, toxic data introduces harmful knowledge and information, which may lead a model to generate toxic outputs. In this section, we analyze the toxicity distribution within our dataset, depicted in Figure 6; a higher toxicity score indicates greater toxicity. The majority of the data has a toxicity score of 0.0, signifying non-toxic, high-quality content. These non-toxic texts comprise 97.41% of the dataset.

Additionally, through manual analysis of the toxicity scores, we determine that data with scores above 0.99 can be classified as toxic. Applying this empirical threshold, we filter our dataset and obtain a 3.16 GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. Among these, OffensEval 2019, AbusEval, HatEval, RAL-E and ToxiGen are English toxicity datasets, while COLD, ToxiCN, SWSR and CDial-Bias are Chinese toxicity datasets. OffensEval 2019, AbusEval, and HatEval are derived from Twitter and focus on offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset, whereas ToxiGen is a toxicity dataset generated with GPT-3 and targeting multiple groups. COLD, SWSR, CDial-Bias, and ToxiCN are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, each focusing on different groups. Compared to these datasets, ours is the largest collection of toxicity data, and each text carries a toxicity score, providing researchers with a valuable resource for optimizing and evaluating LLM safety.
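Extracting the toxic subset with the empirical 0.99 threshold amounts to a simple filter over the toxicity scores; the records below are made-up examples in the card's schema:

```python
TOXICITY_THRESHOLD = 0.99  # empirical cutoff stated in the card

# Made-up records in the documented schema (text fields abbreviated).
records = [
    {"text": "a", "toxicity": {"label": 0, "score": 1.2e-5}},
    {"text": "b", "toxicity": {"label": 1, "score": 0.9973}},
    {"text": "c", "toxicity": {"label": 0, "score": 0.42}},
]

# Keep only samples whose toxicity score exceeds the threshold.
toxic_subset = [r for r in records if r["toxicity"]["score"] > TOXICITY_THRESHOLD]
```

Run over the full dataset, this filter yields the 1,632,620-sample toxic subset described above.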


Figure 7: Table of Comparison of Different Toxicity Datasets.

Citation

Please cite the paper if you use the data or code in this repo.

@misc{zhang2024chinesewebtext20largescalehighquality,
      title={ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information}, 
      author={Wanyue Zhang and Ziyong Li and Wen Yang and Chunlin Leng and Yinan Bai and Qianlong Du and Chengqing Zong and Jiajun Zhang},
      year={2024},
      eprint={2411.19668},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.19668}, 
}