| id | labels_url | body | updated_at | number | milestone | repository_url | draft | labels | created_at | comments_url | assignee | timeline_url | title | events_url | active_lock_reason | user | assignees | performed_via_github_app | state_reason | author_association | closed_at | pull_request | node_id | comments | reactions | state | locked | url | html_url | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,231,400,200 | https://api.github.com/repos/huggingface/datasets/issues/6793/labels{/name} | ### Describe the bug
I'd expect the following code to download just the validation split, but instead I get all the data on my disk (train, test, and validation splits):
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work li... | 2024-04-08T14:39:14Z | 6,793 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-08T14:39:14Z | https://api.github.com/repos/huggingface/datasets/issues/6793/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6793/timeline | Loading just one particular split is not possible for imagenet-1k | https://api.github.com/repos/huggingface/datasets/issues/6793/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/165930106?v=4",
"events_url": "https://api.github.com/users/PaulPSta/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulPSta/followers",
"following_url": "https://api.github.com/users/PaulPSta/following{/other_user}",
"gists_url": "ht... | [] | null | null | NONE | null | null | I_kwDODunzps6FAHcI | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6793/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6793 | https://github.com/huggingface/datasets/issues/6793 | false |
2,231,318,682 | https://api.github.com/repos/huggingface/datasets/issues/6792/labels{/name} | It was reloading from the wrong cache dir because of a bug in `_check_legacy_cache2`. This function should not trigger if there are config_kwargs like `sample_by=`
fix https://github.com/huggingface/datasets/issues/6758 | 2024-04-08T15:55:21Z | 6,792 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-08T14:05:42Z | https://api.github.com/repos/huggingface/datasets/issues/6792/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6792/timeline | Fix cache conflict in `_check_legacy_cache2` | https://api.github.com/repos/huggingface/datasets/issues/6792/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6792.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6792",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6792.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6792"
} | PR_kwDODunzps5sBEyn | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6792). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6792/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6792 | https://github.com/huggingface/datasets/pull/6792 | true |
2,230,102,332 | https://api.github.com/repos/huggingface/datasets/issues/6791/labels{/name} | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 Th... | 2024-04-09T01:30:55Z | 6,791 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-08T01:57:03Z | https://api.github.com/repos/huggingface/datasets/issues/6791/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6791/timeline | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/huggingface/datasets/issues/6791/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/40491005?v=4",
"events_url": "https://api.github.com/users/NeuralFlux/events{/privacy}",
"followers_url": "https://api.github.com/users/NeuralFlux/followers",
"following_url": "https://api.github.com/users/NeuralFlux/following{/other_user}",
"gists_url"... | [] | null | null | NONE | null | null | I_kwDODunzps6E7Kk8 | [
"I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?",
"Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embed... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6791/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6791 | https://github.com/huggingface/datasets/issues/6791 | false |
2,229,915,236 | https://api.github.com/repos/huggingface/datasets/issues/6790/labels{/name} | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggi... | 2024-04-07T20:00:54Z | 6,790 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-07T19:25:39Z | https://api.github.com/repos/huggingface/datasets/issues/6790/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6790/timeline | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | https://api.github.com/repos/huggingface/datasets/issues/6790/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25725697?v=4",
"events_url": "https://api.github.com/users/lasuomela/events{/privacy}",
"followers_url": "https://api.github.com/users/lasuomela/followers",
"following_url": "https://api.github.com/users/lasuomela/following{/other_user}",
"gists_url": "... | [] | null | null | NONE | null | null | I_kwDODunzps6E6c5k | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6790/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6790 | https://github.com/huggingface/datasets/issues/6790 | false |
2,229,527,001 | https://api.github.com/repos/huggingface/datasets/issues/6789/labels{/name} | ### Describe the bug
Map has been taking extremely long to preprocess my data.
It seems to process 1000 examples (which it does really fast in about 10 seconds), then it hangs for a good 1-2 minutes, before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space for some reaso... | 2024-04-08T09:37:28Z | 6,789 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-07T02:52:06Z | https://api.github.com/repos/huggingface/datasets/issues/6789/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6789/timeline | Issue with map | https://api.github.com/repos/huggingface/datasets/issues/6789/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/102672238?v=4",
"events_url": "https://api.github.com/users/Nsohko/events{/privacy}",
"followers_url": "https://api.github.com/users/Nsohko/followers",
"following_url": "https://api.github.com/users/Nsohko/following{/other_user}",
"gists_url": "https://... | [] | null | null | NONE | null | null | I_kwDODunzps6E4-HZ | [
"Default `writer_batch_size `is set to 1000 (see [map](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.map)).\r\nThe \"tmp1335llua\" is probably the temp file it creates while writing to disk.\r\nMaybe try lowering the `writer_batch_size`.\r\n\r\nFor multi-processing ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6789/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6789 | https://github.com/huggingface/datasets/issues/6789 | false |
2,229,207,521 | https://api.github.com/repos/huggingface/datasets/issues/6788/labels{/name} | ### Describe the bug
Hello,
I have a question regarding the map function in Hugging Face `datasets`.
The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False), and then utilize the map function to process it, I specify that the returned example should be of type Torch.ten... | 2024-04-06T11:52:39Z | 6,788 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-06T11:45:23Z | https://api.github.com/repos/huggingface/datasets/issues/6788/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6788/timeline | A Question About the Map Function | https://api.github.com/repos/huggingface/datasets/issues/6788/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/87431052?v=4",
"events_url": "https://api.github.com/users/ys-lan/events{/privacy}",
"followers_url": "https://api.github.com/users/ys-lan/followers",
"following_url": "https://api.github.com/users/ys-lan/following{/other_user}",
"gists_url": "https://a... | [] | null | null | NONE | null | null | I_kwDODunzps6E3wHh | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6788/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6788 | https://github.com/huggingface/datasets/issues/6788 | false |
2,229,103,264 | https://api.github.com/repos/huggingface/datasets/issues/6787/labels{/name} | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
    while True:
        continue
    example['a'] = 100
    return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime will de... | 2024-04-08T14:47:18Z | 6,787 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-06T06:25:39Z | https://api.github.com/repos/huggingface/datasets/issues/6787/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6787/timeline | TimeoutError in map | https://api.github.com/repos/huggingface/datasets/issues/6787/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps6E3Wqg | [
"From my current understanding, this timeout is only used when we need to get the results.\r\n\r\nOne of:\r\n1. All tasks are done\r\n2. One worker died\r\n\r\nYour function should work fine and it's definitely a bug if it doesn't."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6787/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6787 | https://github.com/huggingface/datasets/issues/6787 | false |
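The timeout semantics described in the comment above come from `multiprocessing.pool`'s result handling; a minimal stdlib sketch (using a thread pool for portability, not `datasets` itself):

```python
import time
from multiprocessing.pool import ThreadPool

def slow(x):
    time.sleep(0.1)
    return x * 2

with ThreadPool(2) as pool:
    async_result = pool.map_async(slow, [1, 2, 3])
    # .get(timeout=...) raises multiprocessing.TimeoutError if the workers
    # have not finished in time -- the mechanism behind the reported error.
    doubled = async_result.get(timeout=5)
print(doubled)  # [2, 4, 6]
```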
2,228,463,776 | https://api.github.com/repos/huggingface/datasets/issues/6786/labels{/name} | PR for issue #6782.
Makes `cast_storage` of the `Image` class faster by removing the slow call to `.pylist`.
Instead directly convert each `ListArray` item to either `Array2DExtensionType` or `Array3DExtensionType`.
This also preserves the `dtype` removing the warning if the array is already `uint8`. | 2024-04-08T09:18:42Z | 6,786 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T17:00:46Z | https://api.github.com/repos/huggingface/datasets/issues/6786/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6786/timeline | Make Image cast storage faster | https://api.github.com/repos/huggingface/datasets/issues/6786/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https:... | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6786.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6786",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6786.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6786"
} | PR_kwDODunzps5r3kWg | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6786). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6786/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6786 | https://github.com/huggingface/datasets/pull/6786 | true |
2,228,429,852 | https://api.github.com/repos/huggingface/datasets/issues/6785/labels{/name} | See https://github.com/huggingface/dataset-viewer/issues/2650
Tell me if it's OK, or if it's a breaking change that must be handled differently.
Also note that the docs page is still https://huggingface.co/docs/datasets-server/, so I didn't change it.
And the API URL is still https://datasets-server.huggingfac... | 2024-04-08T12:41:13Z | 6,785 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T16:37:05Z | https://api.github.com/repos/huggingface/datasets/issues/6785/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6785/timeline | rename datasets-server to dataset-viewer | https://api.github.com/repos/huggingface/datasets/issues/6785/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | null | null | CONTRIBUTOR | 2024-04-08T12:35:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6785",
"merged_at": "2024-04-08T12:35:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5r3dCw | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6785). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6785/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6785 | https://github.com/huggingface/datasets/pull/6785 | true |
2,228,390,504 | https://api.github.com/repos/huggingface/datasets/issues/6784/labels{/name} | Instead of waiting for data files to be extracted in the packaged builders, we can prepend the compression prefix and extract them as they are being read (using `fsspec`). This saves disk space (deleting extracted archives is not set by default) and slightly speeds up dataset generation (less disk reads) | 2024-04-08T23:33:24Z | 6,784 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2024-04-05T16:12:25Z | https://api.github.com/repos/huggingface/datasets/issues/6784/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6784/timeline | Extract data on the fly in packaged builders | https://api.github.com/repos/huggingface/datasets/issues/6784/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6784.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6784",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6784.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6784"
} | PR_kwDODunzps5r3UTj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6784). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6784/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6784 | https://github.com/huggingface/datasets/pull/6784 | true |
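The on-the-fly extraction idea in the PR description above can be illustrated with `fsspec`'s `compression` argument — a small self-contained sketch with a made-up file, not the PR's actual code path:

```python
import gzip
import os
import tempfile

import fsspec

# Build a small gzip archive to read back (hypothetical stand-in for a
# dataset's compressed data file).
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "data.txt.gz")
with gzip.open(path, "wt") as f:
    f.write("hello\n")

# fsspec decompresses during the read, so no extracted copy of the
# archive is ever written to disk.
with fsspec.open(path, "rt", compression="gzip") as f:
    content = f.read()
print(content)  # hello
```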
2,228,179,466 | https://api.github.com/repos/huggingface/datasets/issues/6783/labels{/name} | ### Describe the bug
# problem
I can't resample an audio dataset in a Kaggle Notebook. It looks like some code in the `datasets` library uses aliases that were deprecated in NumPy 1.20.
## code for resampling
```
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor
from transformers imp... | 2024-04-08T16:11:01Z | 6,783 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-05T14:31:48Z | https://api.github.com/repos/huggingface/datasets/issues/6783/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6783/timeline | AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook | https://api.github.com/repos/huggingface/datasets/issues/6783/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26062262?v=4",
"events_url": "https://api.github.com/users/petrov826/events{/privacy}",
"followers_url": "https://api.github.com/users/petrov826/followers",
"following_url": "https://api.github.com/users/petrov826/following{/other_user}",
"gists_url": "... | [] | null | null | NONE | null | null | I_kwDODunzps6Ez1IK | [
"Hi! You can fix this by updating the `datasets` package with `pip install -U datasets` and restarting the notebook.\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6783/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6783 | https://github.com/huggingface/datasets/issues/6783 | false |
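For context, the alias removal behind the error can be shown with plain NumPy — the supported spelling is the builtin `object`:

```python
import numpy as np

# NumPy 1.20 deprecated aliases like `np.object`, and NumPy 1.24 removed
# them entirely; the builtin `object` spells the same dtype.
dtype = np.dtype(object)
print(dtype == np.dtype("O"))  # True
```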
2,228,081,955 | https://api.github.com/repos/huggingface/datasets/issues/6782/labels{/name} | ### Describe the bug
Operations that save an image from a path into Parquet are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to Python using `.pylist()` before being converted to a numpy array again.
`pylist` is already slow but used o... | 2024-04-05T21:04:43Z | 6,782 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-05T13:46:54Z | https://api.github.com/repos/huggingface/datasets/issues/6782/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6782/timeline | Map/Saving Image from external filepath extremely slow | https://api.github.com/repos/huggingface/datasets/issues/6782/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https:... | [] | null | null | NONE | null | null | I_kwDODunzps6EzdUj | [
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6782/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6782 | https://github.com/huggingface/datasets/issues/6782 | false |
2,228,026,497 | https://api.github.com/repos/huggingface/datasets/issues/6781/labels{/name} | Inferring the type seems to be unnecessary given that the pyarrow array has already been created.
Because pyarrow array creation is sometimes extremely slow, this doubles the time `write_batch` takes.
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https:... | [] | null | null | NONE | 2024-04-09T07:49:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6781",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6781"
} | PR_kwDODunzps5r2DMe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6781). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Close in favor of #6786."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6781/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6781 | https://github.com/huggingface/datasets/pull/6781 | true |
2,226,160,096 | https://api.github.com/repos/huggingface/datasets/issues/6780/labels{/name} | Updates the `wmt_t2t` test to pin the `revision` to the version with a loading script (cc @albertvillanova).
Additionally, it replaces the occurrences of the `lhoestq/test` repo id with `hf-internal-testing/dataset_with_script` and re-enables logging checks in the `Dataset.from_sql` tests. | 2024-04-04T18:46:04Z | 6,780 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-04T17:45:04Z | https://api.github.com/repos/huggingface/datasets/issues/6780/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6780/timeline | Fix CI | https://api.github.com/repos/huggingface/datasets/issues/6780/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2024-04-04T18:23:34Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6780",
"merged_at": "2024-04-04T18:23:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5rvkyj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6780). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6780/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6780 | https://github.com/huggingface/datasets/pull/6780 | true |
2,226,075,551 | https://api.github.com/repos/huggingface/datasets/issues/6779/labels{/name} | `diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here.
It seems to make the "Install dependencies" step in the `ubuntu` jobs 5-8x faster and 1.5-2x in th... | 2024-04-08T13:34:01Z | 6,779 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-04T17:02:51Z | https://api.github.com/repos/huggingface/datasets/issues/6779/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6779/timeline | Install dependencies with `uv` in CI | https://api.github.com/repos/huggingface/datasets/issues/6779/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2024-04-08T13:27:44Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6779",
"merged_at": "2024-04-08T13:27:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5rvSA8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6779). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6779/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6779 | https://github.com/huggingface/datasets/pull/6779 | true |
2,226,040,636 | https://api.github.com/repos/huggingface/datasets/issues/6778/labels{/name} | ### Describe the bug
The `to_csv()` method does not output commas in lists, so when the Dataset is loaded back in, the structure of the column containing a list is not correct.
Here's an example:
Obviously, it's not as trivial as inserting commas in the list, since its a comma-separated file. But hopefully there... | 2024-04-08T15:24:41Z | 6,778 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-04T16:46:13Z | https://api.github.com/repos/huggingface/datasets/issues/6778/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6778/timeline | Dataset.to_csv() missing commas in columns with lists | https://api.github.com/repos/huggingface/datasets/issues/6778/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/100041276?v=4",
"events_url": "https://api.github.com/users/mpickard-dataprof/events{/privacy}",
"followers_url": "https://api.github.com/users/mpickard-dataprof/followers",
"following_url": "https://api.github.com/users/mpickard-dataprof/following{/other... | [] | null | null | NONE | null | null | I_kwDODunzps6Erq88 | [
"Hello!\r\n\r\nThis is due to how pandas write numpy arrays to csv. [Source](https://stackoverflow.com/questions/54753179/to-csv-saves-np-array-as-string-instead-of-as-a-list)\r\nTo fix this, you can convert them to list yourselves.\r\n\r\n```python\r\ndf = ds.to_pandas()\r\ndf['int'] = df['int'].apply(lambda arr: ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6778/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6778 | https://github.com/huggingface/datasets/issues/6778 | false |
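The maintainer's suggestion in the comment above can be reproduced with plain pandas and NumPy — a self-contained sketch of why the commas disappear and how converting to lists restores them:

```python
import io

import numpy as np
import pandas as pd

# pandas stringifies each cell, and str(np.array([1, 2, 3])) has no commas.
df = pd.DataFrame({"a": [np.array([1, 2, 3])]})
buf = io.StringIO()
df.to_csv(buf, index=False)
without_commas = buf.getvalue()

# Converting the arrays to Python lists first restores the commas.
df["a"] = df["a"].apply(list)
buf2 = io.StringIO()
df.to_csv(buf2, index=False)
with_commas = buf2.getvalue()
```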
2,224,611,247 | https://api.github.com/repos/huggingface/datasets/issues/6777/labels{/name} | ### Describe the bug
Hi, I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white... | 2024-04-05T21:14:48Z | 6,777 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-04T06:31:53Z | https://api.github.com/repos/huggingface/datasets/issues/6777/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6777/timeline | .Jsonl metadata not detected | https://api.github.com/repos/huggingface/datasets/issues/6777/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/81643693?v=4",
"events_url": "https://api.github.com/users/nighting0le01/events{/privacy}",
"followers_url": "https://api.github.com/users/nighting0le01/followers",
"following_url": "https://api.github.com/users/nighting0le01/following{/other_user}",
"g... | [] | null | null | NONE | null | null | I_kwDODunzps6EmN-v | [
"Hi! `metadata.jsonl` (or `metadata.csv`) is the only allowed name for the `imagefolder`'s metadata files.",
"@mariosasko hey i tried with metadata.jsonl also and it still doesn't get the right columns",
"@mariosasko it says metadata.csv not found\r\n<img width=\"1150\" alt=\"image\" src=\"https://github.com/hu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6777/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6777 | https://github.com/huggingface/datasets/issues/6777 | false |
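As the maintainer comment notes, the loader only recognizes a metadata file literally named `metadata.jsonl` (or `metadata.csv`), and each JSON line must carry a `file_name` key pointing at its image. A small helper sketching that format, with invented paths and captions:

```python
import json
from pathlib import Path

def write_imagefolder_metadata(folder, records):
    """Write records as `metadata.jsonl` in the image folder.

    Each record must carry a `file_name` key holding the image path
    relative to the folder; any other keys become extra dataset columns.
    """
    path = Path(folder) / "metadata.jsonl"  # the only accepted .jsonl name
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            if "file_name" not in rec:
                raise ValueError("each record needs a 'file_name' key")
            f.write(json.dumps(rec) + "\n")
    return path
```

With this file next to the images, loading the folder as `imagefolder` should pick up the extra columns; a file like `metadata1000.jsonl` is ignored purely because of its name.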
2,223,457,792 | https://api.github.com/repos/huggingface/datasets/issues/6775/labels{/name} | ### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the exa... | 2024-04-08T01:24:35Z | 6,775 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-03T17:06:30Z | https://api.github.com/repos/huggingface/datasets/issues/6775/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6775/timeline | IndexError: Invalid key: 0 is out of bounds for size 0 | https://api.github.com/repos/huggingface/datasets/issues/6775/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/38481564?v=4",
"events_url": "https://api.github.com/users/kk2491/events{/privacy}",
"followers_url": "https://api.github.com/users/kk2491/followers",
"following_url": "https://api.github.com/users/kk2491/following{/other_user}",
"gists_url": "https://a... | [] | null | null | NONE | null | null | I_kwDODunzps6Eh0YA | [
"Same problem.",
"Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in https://github.com/huggingface/peft/issues/1299.\r\n\r\n(I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the do... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6775/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6775 | https://github.com/huggingface/datasets/issues/6775 | false |
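The linked fix is passing `remove_unused_columns=False` to `TrainingArguments`. The mechanism behind the error: the trainer drops every dataset column whose name is not a parameter of the model's `forward()`, and when nothing matches, the dataset ends up with size 0, hence `Invalid key: 0 is out of bounds for size 0`. A toy, stdlib-only sketch of that column filtering (the forward signature and column names here are invented stand-ins, not the Trainer or Vertex AI internals):

```python
import inspect

def drop_unused_columns(columns, model_forward):
    """Mimic the Trainer: keep only columns named in forward()'s signature."""
    accepted = set(inspect.signature(model_forward).parameters)
    return {name: values for name, values in columns.items() if name in accepted}

def toy_forward(input_ids, attention_mask=None):
    return input_ids

data = {"text": ["hi"], "label": [0]}          # pre-tokenization column names
kept = drop_unused_columns(data, toy_forward)  # nothing matches -> empty dict
```

Here `kept` is empty, which is exactly the "size 0" state behind the `IndexError`; keeping all columns (`remove_unused_columns=False`) or tokenizing so the names match the signature avoids it.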
2,222,164,316 | https://api.github.com/repos/huggingface/datasets/issues/6774/labels{/name} | ### Describe the bug
When I create a dataset, it gets stuck while generating cached data.
The image format is PNG, and it will not get stuck when the image format is jpeg.

After debugging, I know that it is b... | 2024-04-03T07:47:31Z | 6,774 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-03T07:47:31Z | https://api.github.com/repos/huggingface/datasets/issues/6774/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6774/timeline | Generating split is very slow when Image format is PNG | https://api.github.com/repos/huggingface/datasets/issues/6774/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22740819?v=4",
"events_url": "https://api.github.com/users/Tramac/events{/privacy}",
"followers_url": "https://api.github.com/users/Tramac/followers",
"following_url": "https://api.github.com/users/Tramac/following{/other_user}",
"gists_url": "https://a... | [] | null | null | NONE | null | null | I_kwDODunzps6Ec4lc | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6774/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6774 | https://github.com/huggingface/datasets/issues/6774 | false |
2,221,049,121 | https://api.github.com/repos/huggingface/datasets/issues/6773/labels{/name} | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whene... | 2024-04-08T18:43:45Z | 6,773 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-02T17:23:22Z | https://api.github.com/repos/huggingface/datasets/issues/6773/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6773/timeline | Dataset on Hub re-downloads every time? | https://api.github.com/repos/huggingface/datasets/issues/6773/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4",
"events_url": "https://api.github.com/users/manestay/events{/privacy}",
"followers_url": "https://api.github.com/users/manestay/followers",
"following_url": "https://api.github.com/users/manestay/following{/other_user}",
"gists_url": "http... | [] | null | completed | NONE | 2024-04-08T18:43:45Z | null | I_kwDODunzps6EYoUh | [
"The caching works as expected when I try to reproduce this locally or on Colab...",
"hi @mariosasko , Thank you for checking. I also tried running this again just now, and it seems like the `load_dataset()` caches properly (though I'll double check later).\r\n\r\nI think the issue might be in the caching of the ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6773/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6773 | https://github.com/huggingface/datasets/issues/6773 | false |
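The back-and-forth above hinges on whether `load_dataset` resolves the same local cache directory on every call. Conceptually, the cache path is derived deterministically from the dataset name, config, and version, so identical calls should be cache hits. The helper below is a simplified, hypothetical sketch of such a keying scheme, not the library's actual layout:

```python
import hashlib
from pathlib import Path

def cache_dir_for(root, name, config="default", version="1.0.0"):
    """Derive a deterministic cache directory for a dataset request.

    The same (name, config, version) triple must always map to the same
    directory, otherwise every load looks like a cache miss and re-downloads.
    """
    key = f"{name}@{config}@{version}".encode("utf-8")
    digest = hashlib.sha256(key).hexdigest()[:16]
    return Path(root) / name.replace("/", "___") / config / digest
```

Anything that perturbs one of the inputs (for instance, a loading script hashed differently between runs) changes the derived path and forces a re-download.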
2,220,851,533 | https://api.github.com/repos/huggingface/datasets/issues/6772/labels{/name} | Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls.
Reported in https://github.com/huggingface/datasets/issues/6700 | 2024-04-02T16:28:45Z | 6,772 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-02T15:41:28Z | https://api.github.com/repos/huggingface/datasets/issues/6772/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6772/timeline | `remove_columns`/`rename_columns` doc fixes | https://api.github.com/repos/huggingface/datasets/issues/6772/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2024-04-02T16:17:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6772",
"merged_at": "2024-04-02T16:17:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5rdKZ2 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6772). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6772/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6772 | https://github.com/huggingface/datasets/pull/6772 | true |
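The PR above clarifies two points about `remove_columns`: it is faster than `map(..., remove_columns=...)` because it only rewrites the schema rather than the row data, and it is not in-place, it returns a new dataset. A toy columnar model making both properties concrete (illustrative only, not the real implementation):

```python
class ToyDataset:
    """Columnar toy dataset: column name -> list of values."""

    def __init__(self, columns):
        self._columns = columns

    def remove_columns(self, names):
        # Cheap: reuses the surviving column lists and copies no row data;
        # returns a NEW dataset, leaving the original untouched.
        kept = {k: v for k, v in self._columns.items() if k not in set(names)}
        return ToyDataset(kept)

    def column_names(self):
        return sorted(self._columns)

ds = ToyDataset({"text": ["a", "b"], "label": [0, 1]})
trimmed = ds.remove_columns(["label"])
```

The surviving column lists are shared between `ds` and `trimmed`, which is why the operation costs almost nothing compared to a `map` that rebuilds every row.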
2,220,131,457 | https://api.github.com/repos/huggingface/datasets/issues/6771/labels{/name} | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
<div type='discussions-op-text'>
<sup>Originally posted by **RitchieP** April 1, 2024</sup>
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loa... | 2024-04-04T14:22:03Z | 6,771 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-02T10:24:57Z | https://api.github.com/repos/huggingface/datasets/issues/6771/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6771/timeline | Datasets FileNotFoundError when trying to generate examples. | https://api.github.com/repos/huggingface/datasets/issues/6771/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26197115?v=4",
"events_url": "https://api.github.com/users/RitchieP/events{/privacy}",
"followers_url": "https://api.github.com/users/RitchieP/followers",
"following_url": "https://api.github.com/users/RitchieP/following{/other_user}",
"gists_url": "htt... | [] | null | completed | NONE | 2024-04-04T14:22:03Z | null | I_kwDODunzps6EVISB | [
"Hi! I've opened a PR in the repo to fix this issue: https://huggingface.co/datasets/RitchieP/VerbaLex_voice/discussions/6",
"@mariosasko Thanks for the PR and help! Guess I could close the issue for now. Appreciate the help!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6771/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6771 | https://github.com/huggingface/datasets/issues/6771 | false |
2,218,991,883 | https://api.github.com/repos/huggingface/datasets/issues/6770/labels{/name} | ### Describe the bug
`Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`.
I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly.
### Steps to reproduce the bug
To reproduce the bug:
1. Make sure that `Datasets==2.18.0` and `fsspec==2023.12.2`.
2. Run the following ... | 2024-04-03T13:42:29Z | 6,770 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-01T20:17:48Z | https://api.github.com/repos/huggingface/datasets/issues/6770/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6770/timeline | [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2` | https://api.github.com/repos/huggingface/datasets/issues/6770/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/19348888?v=4",
"events_url": "https://api.github.com/users/fshp971/events{/privacy}",
"followers_url": "https://api.github.com/users/fshp971/followers",
"following_url": "https://api.github.com/users/fshp971/following{/other_user}",
"gists_url": "https:... | [] | null | null | NONE | null | null | I_kwDODunzps6EQyEL | [
"You should be able to fix this by updating `huggingface_hub` with `pip install -U huggingface_hub`. We use this package under the hood to resolve the Hub's files."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6770/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6770 | https://github.com/huggingface/datasets/issues/6770 | false |
2,218,242,015 | https://api.github.com/repos/huggingface/datasets/issues/6769/labels{/name} | ### Feature request
Hi thanks for the library! I would like to have a huggingface Dataset, and one of its columns holds custom (non-serializable) Python objects. For example, a minimal code:
```
import datasets

class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    dict(a=MyClass(), b='hello'),
])
```
It gives... | 2024-04-01T13:36:58Z | 6,769 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-04-01T13:18:47Z | https://api.github.com/repos/huggingface/datasets/issues/6769/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6769/timeline | (Willing to PR) Datasets with custom python objects | https://api.github.com/repos/huggingface/datasets/issues/6769/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https:/... | [] | null | null | NONE | null | null | I_kwDODunzps6EN6_f | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6769/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6769 | https://github.com/huggingface/datasets/issues/6769 | false |
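Until a feature like the requested one lands, a common workaround is to keep such objects in an ordinary binary column: pickle them on write, unpickle on read. A stdlib sketch with plain dicts standing in for the Arrow-backed dataset (in the real library, a binary feature type would hold the encoded column):

```python
import pickle

class MyClass:
    def __init__(self, value):
        self.value = value

def encode_objects(rows, column):
    """Replace non-serializable objects with pickled bytes (storable as binary)."""
    return [{**row, column: pickle.dumps(row[column])} for row in rows]

def decode_objects(rows, column):
    """Inverse: unpickle the bytes back into live Python objects."""
    return [{**row, column: pickle.loads(row[column])} for row in rows]

rows = encode_objects([{"a": MyClass(7), "b": "hello"}], "a")
```

The trade-off is the usual pickle one: the reading side must have the same class definitions available, and unpickling untrusted data is unsafe.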
2,217,065,412 | https://api.github.com/repos/huggingface/datasets/issues/6767/labels{/name} | Fixed the issue #6755 on the typo mistake | 2024-04-02T14:14:02Z | 6,767 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-31T16:13:37Z | https://api.github.com/repos/huggingface/datasets/issues/6767/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6767/timeline | fixing the issue 6755(small typo) | https://api.github.com/repos/huggingface/datasets/issues/6767/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2024-04-02T14:01:18Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6767",
"merged_at": "2024-04-02T14:01:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5rQO9J | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6767). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6767/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6767 | https://github.com/huggingface/datasets/pull/6767 | true |
2,215,933,515 | https://api.github.com/repos/huggingface/datasets/issues/6765/labels{/name} | ### Describe the bug
Here is the full error stack when installing:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you ... | 2024-04-03T14:33:12Z | 6,765 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-29T19:57:24Z | https://api.github.com/repos/huggingface/datasets/issues/6765/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6765/timeline | Compatibility issue between s3fs, fsspec, and datasets | https://api.github.com/repos/huggingface/datasets/issues/6765/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https:... | [] | null | completed | NONE | 2024-04-03T14:33:12Z | null | I_kwDODunzps6EFHZL | [
"Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.",
"> Hi! Instead of running `pip install` separately for each package, you should pass all th... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6765/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6765 | https://github.com/huggingface/datasets/issues/6765 | false |
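The fix suggested in the comments is a single joint resolution, e.g. `pip install datasets s3fs`, so pip can pick compatible versions of both at once; the `ERROR` line itself is just an unsatisfied version specifier. Below is a deliberately naive checker for the `<=X,>=Y` clause style quoted above, comparing dotted versions as integer tuples. Real tools apply full PEP 440 semantics (via the `packaging` library), so treat this only as a sketch:

```python
def _v(s):
    """Parse '2024.2.0' into a comparable tuple of ints."""
    return tuple(int(p) for p in s.split("."))

def satisfies(version, spec):
    """Check a version against a comma-separated '<=X,>=Y' style spec."""
    for clause in spec.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            ok = _v(version) >= _v(clause[2:])
        elif clause.startswith("<="):
            ok = _v(version) <= _v(clause[2:])
        else:
            raise ValueError(f"unsupported clause: {clause!r}")
        if not ok:
            return False
    return True
```

Running each `pip install` separately lets a later install pull a version outside an earlier package's pin, which is exactly the state the resolver warning describes.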
2,215,767,119 | https://api.github.com/repos/huggingface/datasets/issues/6764/labels{/name} | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g, this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metad... | 2024-03-29T17:52:27Z | 6,764 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-29T17:49:28Z | https://api.github.com/repos/huggingface/datasets/issues/6764/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6764/timeline | load_dataset can't work with symbolic links | https://api.github.com/repos/huggingface/datasets/issues/6764/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13640533?v=4",
"events_url": "https://api.github.com/users/VladimirVincan/events{/privacy}",
"followers_url": "https://api.github.com/users/VladimirVincan/followers",
"following_url": "https://api.github.com/users/VladimirVincan/following{/other_user}",
... | [] | null | null | NONE | null | null | I_kwDODunzps6EEexP | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6764/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6764 | https://github.com/huggingface/datasets/issues/6764 | false |
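While `load_dataset` support is pending, one workaround is to gather the files yourself, following the links, and pass the resulting paths explicitly (for example via `data_files`). `os.walk` skips symlinked directories unless told otherwise; the sketch below opts in (POSIX-style symlinks assumed):

```python
import os

def collect_files(root):
    """List files under `root`, following symlinked directories.

    os.walk does not descend into symlinked dirs by default;
    followlinks=True makes linked data/ trees behave like regular ones.
    """
    found = []
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=True):
        for name in sorted(filenames):
            found.append(os.path.join(dirpath, name))
    return sorted(found)
```

Note that `followlinks=True` can loop forever if a link points back up into its own ancestors, so it is worth guarding against cycles on untrusted layouts.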
2,213,440,804 | https://api.github.com/repos/huggingface/datasets/issues/6763/labels{/name} | When a dataset with upper-cases in its name is first loaded using `load_dataset()`, the local cache directory is created with all lowercase letters.
However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This di... | 2024-03-28T15:51:46Z | 6,763 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T14:52:35Z | https://api.github.com/repos/huggingface/datasets/issues/6763/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6763/timeline | Fix issue with case sensitivity when loading dataset from local cache | https://api.github.com/repos/huggingface/datasets/issues/6763/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/58537872?v=4",
"events_url": "https://api.github.com/users/Sumsky21/events{/privacy}",
"followers_url": "https://api.github.com/users/Sumsky21/followers",
"following_url": "https://api.github.com/users/Sumsky21/following{/other_user}",
"gists_url": "htt... | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6763",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6763"
} | PR_kwDODunzps5rENat | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6763/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6763 | https://github.com/huggingface/datasets/pull/6763 | true |
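The heart of the fix described above is that the cache directory name must be derived from one canonical form of the dataset name at both save time and lookup time. A simplified, hypothetical helper showing that normalization (not the PR's actual code):

```python
from pathlib import Path

def canonical_cache_path(root, dataset_name):
    """Build the cache path from a lowercased, separator-free dataset name.

    Using the same canonical form when writing the cache and when looking
    it up means 'User/DataSet' and 'user/dataset' resolve to one directory.
    """
    return Path(root) / dataset_name.lower().replace("/", "___")
```

Without the shared normalization, the first load writes a lowercase directory while later loads search for the mixed-case one and miss, triggering a re-download.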
2,213,275,468 | https://api.github.com/repos/huggingface/datasets/issues/6762/labels{/name} | I was trying out polars as an output for a map function and found that it wasn't a valid return type in `validate_function_output`. Thought that we should accommodate this by creating and adding it to the `allowed_processed_input_types` variable. | 2024-03-29T15:44:02Z | 6,762 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T13:40:28Z | https://api.github.com/repos/huggingface/datasets/issues/6762/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6762/timeline | Allow polars as valid output type | https://api.github.com/repos/huggingface/datasets/issues/6762/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "htt... | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6762.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6762",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6762.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6762"
} | PR_kwDODunzps5rDpBe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6762). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6762/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6762 | https://github.com/huggingface/datasets/pull/6762 | true |
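The usual pattern for a change like this one is to build the allowlist of return types dynamically, so the optional `polars` dependency is only referenced when it is importable. A sketch with an invented validator (names chosen to mirror, not reproduce, `validate_function_output`):

```python
def allowed_output_types():
    """Return the tuple of types a map function may return.

    Optional backends are appended only if importable, so the check
    works whether or not the extra dependency is installed.
    """
    allowed = [dict, list]
    try:
        import polars as pl  # optional dependency
        allowed.append(pl.DataFrame)
    except ImportError:
        pass
    return tuple(allowed)

def validate_function_output(value):
    """Raise TypeError for return types outside the allowlist."""
    if not isinstance(value, allowed_output_types()):
        raise TypeError(f"unsupported map output type: {type(value).__name__}")
    return value
```

Because the tuple is computed at check time, the same validator accepts a polars frame on machines that have polars and still degrades cleanly on machines that do not.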