https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
@lhoestq, we have two Pull Requests to implement: - Dataset.add_item: #1870 - Dataset.add_column: #2145 which add a single row or column, respectively. The request here is to implement the concatenation of *multiple* rows/columns. Am I right? We should agree on the API: - `concatenate_datasets` with `axis`? -...
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
For the API, I like `concatenate_datasets` with `axis` personally :) From a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (a...
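The proposed semantics can be sketched with plain dict-of-lists tables standing in for the Arrow-backed datasets (`concat_rows` and `concat_columns` below are hypothetical helpers to illustrate the two axes, not the library API):

```python
def concat_rows(a, b):
    # axis=0: same columns, rows of b appended after rows of a
    assert a.keys() == b.keys()
    return {col: a[col] + b[col] for col in a}

def concat_columns(a, b):
    # axis=1: same number of rows, columns of b added next to columns of a
    n = len(next(iter(a.values())))
    assert all(len(v) == n for v in b.values())
    return {**a, **b}

ds1 = {"text": ["hello", "world"]}
ds2 = {"text": ["foo", "bar"]}
ds3 = {"label": [0, 1]}

print(concat_rows(ds1, ds2))     # {'text': ['hello', 'world', 'foo', 'bar']}
print(concat_columns(ds1, ds3))  # {'text': ['hello', 'world'], 'label': [0, 1]}
```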
https://github.com/huggingface/datasets/issues/849
Load amazon dataset
Thanks for reporting ! We plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls. Also I think the bullet points formatting has been fixed
Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amaz...
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
As you can see in the error, the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. The indices mapping corresponds to a mapping on top of the data table that is used...
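The data-table / indices-mapping split described above can be sketched in plain Python (a toy stand-in for illustration, not the library's internals): a `select` only rewrites the indices list, and flattening materializes it back into a plain table, which is what `flatten_indices()` does.

```python
class TinyDataset:
    """Toy stand-in: a data table plus an optional indices mapping."""
    def __init__(self, rows, indices=None):
        self.rows = rows        # the data table
        self.indices = indices  # mapping on top of the table, or None

    def select(self, idx):
        # cheap: only the indices mapping changes, the table is untouched
        base = self.indices if self.indices is not None else range(len(self.rows))
        return TinyDataset(self.rows, [base[i] for i in idx])

    def flatten_indices(self):
        # materialize the mapping into a new table
        if self.indices is None:
            return self
        return TinyDataset([self.rows[i] for i in self.indices])

ds = TinyDataset(["a", "b", "c", "d"]).select([3, 1])
flat = ds.flatten_indices()
print(flat.rows)  # ['d', 'b']
```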
Hello, when I concatenate two datasets loaded from disk, I encounter a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reports the ValueError below: ``` --------------...
> As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. > > The indices mapping correspond to a mapping on top of the data table that i...
@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it)
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
It looks like an issue with wandb/tqdm here. We're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility. Could you make a minimal script to reproduce, or a Google Colab?
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
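The batched-map contract at play here can be sketched without the library: `map(..., batched=True)` passes the function a dict of lists and expects a dict of lists back (`fake_tokenize` below is a hypothetical stand-in for the `tokenizer_fn` above, and `apply_batched_map` is a single-process sketch, not `Dataset.map` itself):

```python
def fake_tokenize(batch):
    # batched map function: dict of lists in, dict of lists out
    return {"input_ids": [[ord(c) for c in text] for text in batch["text"]]}

def apply_batched_map(dataset, fn, remove_columns=()):
    # minimal single-process sketch of Dataset.map(batched=True)
    out = fn(dataset)
    kept = {k: v for k, v in dataset.items() if k not in remove_columns}
    return {**kept, **out}

ds = {"text": ["ab", "c"]}
print(apply_batched_map(ds, fake_tokenize, remove_columns=["text"]))
# {'input_ids': [[97, 98], [99]]}
```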
hi facing the same issue here - `AssertionError: Caught AssertionError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", l...
It looks like this warning : "Truncation was not explicitly activated but max_length is provided a specific value, " is not handled well by wandb. The error occurs when calling the tokenizer. Maybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ? Otherwise I don't know...
I'm having a similar issue but when I try to do multiprocessing with the `DataLoader` Code to reproduce: ``` from datasets import load_dataset book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]') book_corpus = book_corpus.map(encode, batched=True...
Isn't it rather the pytorch warning about the use of non-writable memory for tensors that triggers this here @lhoestq? (It seems to be a warning triggered in `torch.tensor()`.)
Yep, this time it is a warning from pytorch that causes wandb to not work properly. Could this be a wandb issue?
Hi @timothyjlaurent @gaceladri If you're running `transformers` from `master`, can you try setting the env var `WANDB_DISABLED=true` (from https://github.com/huggingface/transformers/pull/9896) and trying again? This issue might be related to https://github.com/huggingface/transformers/issues/9623
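Disabling the integration amounts to setting the environment variable before the `Trainer` (and its wandb callback) is created; a minimal sketch, assuming `WANDB_DISABLED` is the variable name honored by your `transformers` version:

```python
import os

# must be set before the Trainer / wandb callback is initialized
os.environ["WANDB_DISABLED"] = "true"

print(os.environ["WANDB_DISABLED"])  # true
```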
I have commented out the lines that caused my code to break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will probably check in 6 hours. I suppose that disabling wandb will work as well.
https://github.com/huggingface/datasets/issues/846
Add HoVer multi-hop fact verification dataset
Hi @yjernite, I'm new but wanted to contribute. Has anyone already taken this issue, and do you think it is suitable for newbies?
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction...
Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Thanks for reporting ! That's a bug indeed If you want to contribute, feel free to fix this issue and open a PR :)
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem.
Hello everyone, I think the problem is not solved: ``` from datasets import load_metric metric=load_metric('bertscore') metric.compute( predictions=predictions, references=references, lang='fr', rescale_with_baseline=True ) TypeError: get_hash() missing 2 required positional arguments: ...
Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :) In the meantime please use an older version of `bert_score`
https://github.com/huggingface/datasets/issues/842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
Right now multiprocessing only runs on single node. However it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on...
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ...
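One manual workaround is to give each node its own contiguous shard to preprocess, in the spirit of `Dataset.shard(num_shards, index, contiguous=True)`; a pure-Python sketch of the split (the `contiguous_shard` helper is an illustration of the partitioning, not the library function):

```python
def contiguous_shard(examples, num_shards, index):
    # split len(examples) into num_shards contiguous chunks; the first
    # (len % num_shards) chunks get one extra example each
    n = len(examples)
    q, r = divmod(n, num_shards)
    start = index * q + min(index, r)
    end = start + q + (1 if index < r else 0)
    return examples[start:end]

data = list(range(10))
shards = [contiguous_shard(data, 3, i) for i in range(3)]
print(shards)  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each node would call this with its own rank as `index`, run `.map()` on its shard, and the shards would be concatenated afterwards.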
Curious to hear if anything on that side has changed, or if your suggestions on how to do it have changed @lhoestq :) For our use-case, we are entering the regime where trading a few more instances to save a few days would be nice :)
Currently for multi-node setups we're mostly going towards a nice integration with Dask. But I wouldn't exclude exploring `pathos` more at one point
https://github.com/huggingface/datasets/issues/841
Can not reuse datasets already downloaded
It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'. Where and how should I place this `wikipedia.py` after I manually download it?
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (which has no http proxy, so I cannot use wget and so on). I successfully downloaded and reused the wikipedia dataset on the frontal node. When I connect to the gpu node, I am supposed to use the downloaded dataset from the cache, but...
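A sketch of the cache-only setup on the gpu node, assuming a `datasets` version recent enough to support offline mode (the variable must be set before the library is imported so that it never tries to reach the Hub):

```python
import os

# force datasets to resolve everything from the local cache
os.environ["HF_DATASETS_OFFLINE"] = "1"

print(os.environ["HF_DATASETS_OFFLINE"])  # 1
```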
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?
Hi all, I am trying to load a custom dataset, starting with a single file to make sure it loads correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
Thanks for the fast response. I have the latest version, '2.0.0' (I tried to update). I am working with Python 3.8.5.
I think that the issue is similar to this one: https://issues.apache.org/jira/browse/ARROW-9612 The problem is in Arrow when the column data contains long strings. Any ideas on how to bypass this?
We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). In the meantime you can specify yourself the ...
This did help to load the data, but now I get: ArrowInvalid: CSV parse error: Expected 5 columns, got 187 It seems that this changes the parsing, so I changed the table to tab-separated and tried to load it directly with pyarrow. But I got a similar error; again it loaded fine in pandas, so I am no...
I got almost the same error loading a ~5GB TSV file: first the same error as the OP, then, after giving it my own ReadOptions, the same CSV parse error.
> We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). > > In the meantime you can specify yourself ...
Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pan...
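Under the Pandas-backed loader the extra keyword arguments are forwarded to `pandas.read_csv`, so the parsing can be previewed directly; a sketch with a small in-memory TSV (assuming pandas is installed):

```python
import io

import pandas as pd

tsv = "text\tlabel\nhello world\t0\nanother line\t1\n"

# the same kwargs (e.g. sep="\t") would be forwarded by load_dataset("csv", ...)
df = pd.read_csv(io.StringIO(tsv), sep="\t")

print(df.shape)          # (2, 2)
print(list(df.columns))  # ['text', 'label']
```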
https://github.com/huggingface/datasets/issues/835
Wikipedia postprocessing
Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect. As an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir...
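Left-over media markup like the `mini|…` lines can be stripped with a small regex filter in post-processing; a rough sketch (the pattern only targets the thumbnail prefixes shown above and will not catch all WikiMedia artifacts):

```python
import re

# drop lines that are leftover image/thumbnail markup such as "mini|..."
THUMB_LINE = re.compile(r"^\s*(mini|thumb|miniatur)\|", re.IGNORECASE)

def strip_media_lines(text):
    return "\n".join(
        line for line in text.splitlines() if not THUMB_LINE.match(line)
    )

sample = "mini|Ricardo Flores Magón\nEin normaler Absatz.\n"
print(strip_media_lines(sample))  # Ein normaler Absatz.
```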
38
Wikipedia postprocessing Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, ge...
https://github.com/huggingface/datasets/issues/834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** h...
48
[GEM] add WikiLingua cross-lingual abstractive summarization dataset ## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images th...
https://github.com/huggingface/datasets/issues/834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem) You can use it for example to load the French to English translation with: ```python from datasets import load_dataset wikilingua = load_dataset("gem", "wiki_lingua_french_fr") ``` Clo...
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** h...
42
[GEM] add WikiLingua cross-lingual abstractive summarization dataset ## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images th...
https://github.com/huggingface/datasets/issues/827
[GEM] MultiWOZ dialogue dataset
Hi @yjernite can I help in adding this dataset? I am excited about this because this will be my first contribution to the datasets library as well as to hugginface.
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user...
30
[GEM] MultiWOZ dialogue dataset ## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – ther...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the data already. I'm going ...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
72
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind o...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
Requiring online connection is a deal breaker in some cases unfortunately so it'd be great if offline mode is added similar to how `transformers` loads models offline fine. @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could yo...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
57
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind o...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
here is my way to load a dataset offline, but it **requires** an online machine 1. (online machine) ``` import datasets data = datasets.load_dataset(...) data.save_to_disk(/YOUR/DATASET/DIR) ``` 2. copy the dir from online to the offline machine 3. (offline machine) ``` import datasets data = datasets.load_f...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
47
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind o...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
> here is my way to load a dataset offline, but it **requires** an online machine > > 1. (online machine) > > ``` > > import datasets > > data = datasets.load_dataset(...) > > data.save_to_disk(/YOUR/DATASET/DIR) > > ``` > > 2. copy the dir from online to the offline machine > > 3. (offline machine) > > ``` > ...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
76
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind o...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
I opened a PR that allows to reload modules that have already been loaded once even if there's no internet. Let me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :) I already note the "freeze" modules option, to prevent local modules updates. It would be a ...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
179
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind o...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :) You can now use them offline ```python datasets = load_dataset('text', data_files=data_files) ``` We'll do a new release soon
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
38
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind o...
https://github.com/huggingface/datasets/issues/823
how processing in batch works in datasets
Hi, I don’t think this is a request for a dataset like you labeled it. I also think this would be better suited for the forum at https://discuss.huggingface.co. We try to keep the issues in the repo for bug reports and new feature/dataset requests, and have usage questions discussed on the forum. Thanks.
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented ...
53
how processing in batch works in datasets Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented ...
https://github.com/huggingface/datasets/issues/823
how processing in batch works in datasets
Hi Thomas, what I do not get from documentation is that why when you set batched=True, this is processed in batch, while data is not divided to batched beforehand, basically this is a question on the documentation and I do not get the batched=True, but sure, if you think this is more appropriate in forum I will post it...
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented ...
167
how processing in batch works in datasets Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented ...
https://github.com/huggingface/datasets/issues/823
how processing in batch works in datasets
Yes the forum is perfect for that. You can post in the `datasets` section. Thanks a lot!
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented ...
17
how processing in batch works in datasets Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented ...
https://github.com/huggingface/datasets/issues/822
datasets freezes
Pytorch is unable to convert strings to tensors unfortunately. You can use `set_format(type="torch")` on columns that can be converted to tensors, such as token ids. This makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_datase...
52
datasets freezes Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) da...
https://github.com/huggingface/datasets/issues/822
datasets freezes
Ultimately, we decided to return a list instead of an error when formatting a string column with the format type `"torch"`. If you think an error would be more appropriate, please open a new issue.
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_datase...
35
datasets freezes Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) da...
https://github.com/huggingface/datasets/issues/816
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
To show the issue: ``` python -c "from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))" ``` doesn't always return the same ouput since `globs` is a dictionary with "a" and "len" as keys but sometimes not in the same order
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues. To fix that one could register an implementati...
43
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues. Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not d...
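A hedged sketch of the fix direction mentioned above: make the hash input deterministic by sorting the globals mapping by key before it is serialized. `sort_globals` is an illustrative helper, not the actual `datasets` internals.

```python
def sort_globals(globs: dict) -> dict:
    # Rebuild the dict in sorted-key order so the serialized bytes (and
    # therefore the fingerprint hash) no longer depend on dict ordering.
    return {key: globs[key] for key in sorted(globs)}

a = []
print(list(sort_globals({"len": len, "a": a})))  # ['a', 'len']
```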
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hello ! Could you give more details ? If you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use ```python for example in dataset: # do something ``` If you want to iter through several datasets you can first concatenate them ```python from data...
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
67
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hello ! Could you give more details ? If you mean iter through one dat...
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hi Huggingface/Datasets team, I want to use the datasets inside Seq2SeqDataset here https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py and there I need to return back each line from the datasets and I am not sure how to access each line and implement this? It seems it also has get_item at...
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
185
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hi Huggingface/Datasets team, I want to use the datasets inside Seq2SeqDat...
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Could you please tell me if datasets also has `__getitem__`? Any idea on how to integrate it with Seq2SeqDataset is appreciated, thanks.
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
236
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks could you tell me please if datasets also has __getitem__ any idea on how ...
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
`datasets.Dataset` objects do indeed implement `__getitem__`. It returns a dictionary with one field per column. We've not added the integration of the datasets library for the seq2seq utilities yet. The current seq2seq utilities are based on text files. However as soon as you have a `datasets.Dataset` with columns ...
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
76
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks `datasets.Dataset` objects implement indeed `__getitem__`. It returns a di...
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hi I am sorry for asking it multiple times but I am not getting the dataloader type, could you confirm if the dataset library returns back an iterable type dataloader or a mapping type one where one has access to __getitem__, in the former case, one can iterate with __iter__, and how I can configure it to return the da...
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
217
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hi I am sorry for asking it multiple times but I am not getting the datalo...
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
`datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__` For example you can do ```python for example in dataset: # do something ``` or ```python for i in range(len(dataset)): example = dataset[i] # do something ``` When you do that, one and only ...
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
57
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks `datasets.Dataset` objects are both iterative and mapping types: it has bo...
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hi there, Here is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library...
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
113
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hi there, Here is what I am trying, this is not working for me in map-st...
https://github.com/huggingface/datasets/issues/813
How to implement DistributedSampler with datasets
Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks.
Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them. I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using d...
40
How to implement DistributedSampler with datasets Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them. I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me ho...
https://github.com/huggingface/datasets/issues/812
Too much logging
Hi ! Thanks for reporting :) I agree these one should be hidden when the logging level is warning, we'll fix that
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1...
22
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat...
https://github.com/huggingface/datasets/issues/812
Too much logging
+1, the amount of logging is excessive. Most of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`) ``` I1109 21:26:01.742688 139785006901056 file...
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1...
145
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat...
https://github.com/huggingface/datasets/issues/812
Too much logging
In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default. Also `set_verbosity_warning` does take into account these logs now. Can you try to update the lib ? ``` pip install --upgrade datasets ```
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1...
46
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat...
https://github.com/huggingface/datasets/issues/812
Too much logging
Thanks. For some reason I have to stay on the older version. Is it possible to fix this with some surface-level trick? I'm still using datasets version 1.13.
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1...
28
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat...
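One possible surface-level workaround for older versions, using only the standard library: the noisy messages appear to come from the `filelock` logger, so its level can be raised directly regardless of the `datasets` version.

```python
import logging

# Silence the INFO-level lock acquire/release messages at the source.
logging.getLogger("filelock").setLevel(logging.WARNING)
```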
https://github.com/huggingface/datasets/issues/809
Add Google Taskmaster dataset
Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?
## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation...
27
Add Google Taskmaster dataset ## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-dat...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
Hi ! The url works on my side. Is the url working in your navigator ? Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
30
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> Hi ! > The url works on my side. > > Is the url working in your navigator ? > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? I tried another server, it's working now. Thanks a lot. And I'm curious about why download things from "github" when I load dataset f...
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
69
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> > Hi ! > > The url works on my side. > > Is the url working in your navigator ? > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > I tried another server, it's working now. Thanks a lot. > > And I'm curious about why download things from "github" whe...
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
103
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
Hello, how did you solve this problem?
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
136
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> hello, how did you solve this problems? > > > > > Hi ! > > > > The url works on my side. > > > > Is the url working in your navigator ? > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > I tried another server, it's working now. Thanks ...
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
155
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> > hello, how did you solve this problems? > > > > > Hi ! > > > > > The url works on my side. > > > > > Is the url working in your navigator ? > > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > > > > I tried another server, it's working ...
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
174
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> > > > hello, how did you solve this problems? > > > > > Hi ! > > > > > The url works on my side. > > > > > Is the url working in your navigator ? > > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > > > > I tried another server, it's ...
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
316
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
I also experienced this issue this morning. Looks like something specific to Windows. I'm working on a fix.
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
18
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).res...
https://github.com/huggingface/datasets/issues/806
Quail dataset urls are out of date
Hi ! Thanks for reporting. We should fix the URLs and use quail 1.3. If you want to contribute, feel free to fix the URLs and open a PR :)
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.co...
30
Quail dataset urls are out of date <h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per ...
https://github.com/huggingface/datasets/issues/806
Quail dataset urls are out of date
Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820). Updated the links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to...
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.co...
24
Quail dataset urls are out of date <h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per ...
https://github.com/huggingface/datasets/issues/805
On loading a metric from datasets, I get the following error
Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object. Could you update pyarrow and try again? ``` pip install --upgrade pyarrow ```
`from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any help will be appreciated. Thank you.
31
On loading a metric from datasets, I get the following error `from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any...
https://github.com/huggingface/datasets/issues/804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208). For the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here: https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README...
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tas...
32
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') # The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/READM...
https://github.com/huggingface/datasets/issues/804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
Oh ok, I guess I read the paper too fast 😅, thank you for your answer!
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tas...
16
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') # The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/READM...
https://github.com/huggingface/datasets/issues/801
How to join two datasets?
Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset.
Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is...
24
How to join two datasets? Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` ...
https://github.com/huggingface/datasets/issues/801
How to join two datasets?
Closing this one. Feel free to re-open if you have other questions about this issue. Also linking another discussion about joining datasets: #853
Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is...
23
How to join two datasets? Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` ...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi ! Indeed there's an issue with those links. We should probably use the target URLs of the redirections instead.
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
20
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi, I have the same issue here. Could you tell me how to download it through `datasets`? Thanks.
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
16
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Actually it's already fixed on the master branch since #740. I'll do the 1.1.3 release soon.
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
16
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi, thanks, but I did try to install from `pip install git+...` and it does not work for me. Thanks for the help. I have the same issue with wmt16, "ro-en". Best, Rabeeh
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
98
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
I just tested on Google Colab using ```python !pip install git+https://github.com/huggingface/datasets.git from datasets import load_dataset load_dataset("trec") ``` and it works. Can you detail how you got the issue even when using the latest version on master? Also, about wmt, we'll look into it, thanks for ...
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
48
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
I think the new URL with .edu is also broken: ``` ConnectionError: Couldn't reach https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label ``` Can't download the dataset anymore.
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
21
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi ! The URL seems to work fine on my side, can you try again?
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
16
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Forgot to update: I wrote an email to the webmaster of seas.upenn.edu because I couldn't reach the URL on any machine. This was the answer: ``` Thank you for your report. The server was offline for maintenance and is now available again. ``` Guess all is back to normal now 🙂
## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
50
Cannot load TREC dataset: ConnectionError ## Problem I cannot load the "trec" dataset; it results in a ConnectionError, as shown below. I've tried both on Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/Q...
https://github.com/huggingface/datasets/issues/792
KILT dataset: empty string in triviaqa input field
Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md (not very clear: https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md, which is dead). Closing the issue though :)
# What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/Pa...
21
KILT dataset: empty string in triviaqa input field # What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` versi...
https://github.com/huggingface/datasets/issues/790
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macOS. It should work now.
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e "....
18
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtuale...
https://github.com/huggingface/datasets/issues/786
feat(dataset): multiprocessing _generate_examples
I agree that would be cool :) Right now the only distributed dataset builder is based on Apache Beam, so you can use distributed processing frameworks like Dataflow, Spark, Flink, etc. to build your dataset, but it's not really well suited for single-worker parallel processing afaik.
Forking this out of #741; this issue is only regarding multiprocessing. I'd love it if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case...
46
feat(dataset): multiprocessing _generate_examples Forking this out of #741; this issue is only regarding multiprocessing. I'd love it if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and ...
https://github.com/huggingface/datasets/issues/786
feat(dataset): multiprocessing _generate_examples
`_generate_examples` can now be run in parallel thanks to https://github.com/huggingface/datasets/pull/5107. You can find more info [here](https://huggingface.co/docs/datasets/dataset_script#sharding).
Forking this out of #741; this issue is only regarding multiprocessing. I'd love it if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case...
16
feat(dataset): multiprocessing _generate_examples Forking this out of #741; this issue is only regarding multiprocessing. I'd love it if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and ...
https://github.com/huggingface/datasets/issues/784
Issue with downloading Wikipedia data for low resource language
Hello, maybe you could try to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`)?
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these tw...
21
Issue with downloading Wikipedia data for low resource language Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runne...
https://github.com/huggingface/datasets/issues/784
Issue with downloading Wikipedia data for low resource language
@lhoestq I've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya. Also, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message. ``` ValueError: B...
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these tw...
342
Issue with downloading Wikipedia data for low resource language Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runne...
https://github.com/huggingface/datasets/issues/784
Issue with downloading Wikipedia data for low resource language
For posterity, here's how I got the data I needed: I needed Bengali, so I had to check which dumps are available here: https://dumps.wikimedia.org/bnwiki/ , then I ran: ``` load_dataset("wikipedia", language="bn", date="20211101", beam_runner="DirectRunner") ```
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these tw...
34
Issue with downloading Wikipedia data for low resource language Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runne...
https://github.com/huggingface/datasets/issues/778
Unexpected behavior when loading cached csv file?
Hi ! Thanks for reporting. The same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 . The fix will be available in the next release :)
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be n...
36
Unexpected behavior when loading cached csv file? I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download...
https://github.com/huggingface/datasets/issues/778
Unexpected behavior when loading cached csv file?
Thanks for the prompt reply and terribly sorry for the spam! Looking forward to the new release!
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be n...
17
Unexpected behavior when loading cached csv file? I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download...
https://github.com/huggingface/datasets/issues/773
Adding CC-100: Monolingual Datasets from Web Crawl Data
These dataset files are no longer available: the files provided at https://data.statmt.org/cc-100/ can no longer be downloaded. Can anybody fix that issue? @abhishekkrthakur @yjernite
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling da...
24
Adding CC-100: Monolingual Datasets from Web Crawl Data ## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Moti...
https://github.com/huggingface/datasets/issues/773
Adding CC-100: Monolingual Datasets from Web Crawl Data
Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling da...
21
Adding CC-100: Monolingual Datasets from Web Crawl Data ## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Moti...
https://github.com/huggingface/datasets/issues/771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
Yes, it allows you to monitor the speed of each process. Currently each process takes care of one shard of the dataset. At one point we can consider streaming batches to a pool of processes instead of sharding the dataset into `num_proc` parts. At that point it will be easy to use only one progress bar.
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
56
Using `Dataset.map` with `n_proc>1` print multiple progress bars When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. Yes, it allows you to monitor the speed of each process. Currently each process ...
https://github.com/huggingface/datasets/issues/771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
Hi @lhoestq, I am facing a similar issue; it is annoying when lots of progress bars are printed. Is there a way to turn off this behavior?
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
27
Using `Dataset.map` with `n_proc>1` print multiple progress bars When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. Hi @lhoestq, I am facing a similar issue, it is annoying when lots of progr...
https://github.com/huggingface/datasets/issues/769
How to choose proper download_mode in function load_dataset?
`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work. This makes me think we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing.
Hi, I am a beginner with datasets and I am trying to use datasets to load my CSV file. My CSV file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so hones...
17
How to choose proper download_mode in function load_dataset? Hi, I am a beginner with datasets and I am trying to use datasets to load my CSV file. My CSV file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",...
https://github.com/huggingface/datasets/issues/769
How to choose proper download_mode in function load_dataset?
Indeed you should use `features` in this case. ```python features = Features({'text': Value('string'), 'label': Value('float32')}) dataset = load_dataset('csv', data_files=['sst_test.csv'], features=features) ``` Note that because of an issue with the caching when you change the features (see #750), you still nee...
Hi, I am a beginner with datasets and I am trying to use datasets to load my CSV file. My CSV file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so hones...
55
How to choose proper download_mode in function load_dataset? Hi, I am a beginner with datasets and I am trying to use datasets to load my CSV file. My CSV file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",...
https://github.com/huggingface/datasets/issues/769
How to choose proper download_mode in function load_dataset?
https://github.com/huggingface/datasets/issues/769#issuecomment-717837832 > This makes me think we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing @lhoestq do you still think we should rename it?
Hi, I am a beginner with datasets and I am trying to use datasets to load my CSV file. My CSV file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so hones...
25
How to choose proper download_mode in function load_dataset? Hi, I am a beginner with datasets and I am trying to use datasets to load my CSV file. My CSV file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",...