Dataset columns (value ranges from the dataset viewer): html_url (string, 48-51 chars), title (string, 5-155 chars), comments (string, 63-15.7k chars), body (string, 0-17.7k chars), comment_length (int64, 16-949), text (string, 164-23.7k chars)
https://github.com/huggingface/datasets/issues/532
File exists error when used with TPU
Here's a minimal working example to reproduce this issue. Assumptions:
- You have access to a TPU.
- You have installed `transformers` and `nlp`.
- You have tokenizer files (`config.json`, `merges.txt`, `vocab.json`) under the directory named `model_name`.
- You have `xla_spawn.py` (Download from https://github.com...
Hi, I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8). I modified [line 131 in the original `run_language_modeling.py`](https://github.com/...
I ended up specifying the `cache_file_name` argument when I call the `map` function.

```python
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size),
    batched=True,
    cache_file_name=cache_file_name,
)
```
...
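One way to build the explicit `cache_file_name` is to derive it deterministically from the preprocessing parameters, so every spawned TPU process resolves to the same cached file instead of racing on temporary names. A minimal sketch; the helper and its naming scheme are assumptions, not part of `nlp`:

```python
import os

# Hypothetical helper: a deterministic cache file name derived from the
# preprocessing parameters, so repeated runs (and all spawned processes)
# agree on the same Arrow cache file.
def make_cache_file_name(cache_dir, block_size):
    return os.path.join(cache_dir, "tokenized_bs{}.arrow".format(block_size))

# The result would then be passed as `cache_file_name=` to `dataset.map(...)`.
```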
https://github.com/huggingface/datasets/issues/525
wmt download speed example
Thanks for creating the issue :) The download link for wmt-en-de raw looks like a mirror. We should use that instead of the current URL. Is this mirror official? Also, it looks like for `ro-en` it tried to download other languages; if we manage to download only the pair that is asked for, that would be ideal. Also cc @patric...
Continuing from the slack 1.0 roadmap thread w/ @lhoestq, I realized the slow downloads are only a thing sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.

```python
import nlp
nlp.load_dataset('wmt16', 'de-en')
```

Downloads at 49.1 K...
Shall we host the files ourselves, or is it fine to use this mirror, in your opinion?
Should we add an argument in `load_dataset` to override some URLs with a custom URL (e.g. a mirror) or a local path? This could also be used to provide local files instead of the original files, as requested by some users (e.g. when you made a dataset in the same format as SQuAD and want to use it instead of the off...
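As a purely hypothetical sketch of this proposal (no such argument existed in `nlp` at the time), the override could be a simple URL-to-mirror mapping resolved before download:

```python
# Hypothetical: resolve each original URL against a user-supplied override
# mapping (mirror URL or local path) before downloading; unmapped URLs
# fall through unchanged.
def resolve_url(url, overrides=None):
    overrides = overrides or {}
    return overrides.get(url, url)
```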
@lhoestq I think we should host it ourselves. I'll put the subset of wmt (without preprocessed files) that we need on s3 and post a link over the weekend.
Is there a solution yet? The download speed is still too slow: 60-70 kbps for wmt16 and around 100 kbps for wmt19. @sshleifer
https://github.com/huggingface/datasets/issues/519
[BUG] Metrics throwing new error on master since 0.4.0
Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric
The following error occurs when passing references of type `List[List[str]]` to metrics like bleu. This wasn't happening on 0.4.0 but is happening now on master.

```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
    self.add_batch(predictions=predictions, references=references)
```
...
Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105
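For bleu-style metrics in `nlp`, "tokenizing" meant handing the metric token lists rather than raw strings (`List[List[str]]` predictions and `List[List[List[str]]]` references). A whitespace-split sketch, assuming that input shape:

```python
# Whitespace-tokenize predictions and multi-reference lists into the shapes
# bleu-style metrics expect: List[List[str]] / List[List[List[str]]].
def tokenize_for_bleu(predictions, references):
    return (
        [p.split() for p in predictions],
        [[r.split() for r in refs] for refs in references],
    )
```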
https://github.com/huggingface/datasets/issues/517
add MLDoc dataset
This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies.
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the GitHub: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf

Looks like the dataset contains news stories in multiple languages...
https://github.com/huggingface/datasets/issues/514
dataset.shuffle(keep_in_memory=True) is never allowed
This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert, although not for the `map()` function. @thomwolf
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`. The commit added the lines

```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
    not keep_in_memory or cache_file_name is None
), "Please use either...
```
Maybe I'm a bit tired but I fail to see the issue here. Since `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?
I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. Thus, `select()` is called with `keep_in_memo...
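The failure mode described above can be reproduced in isolation with simplified stand-ins (not the real implementations): `shuffle()` fills in a default cache file name before delegating, so `select()`'s assert can never pass once `keep_in_memory=True` goes through `shuffle()`.

```python
# Simplified stand-ins for Dataset.select/Dataset.shuffle showing why the
# assert always fired: shuffle() sets a default cache_file_name before
# calling select(), so keep_in_memory=True always trips the assertion.
def select(keep_in_memory=False, cache_file_name=None):
    assert not keep_in_memory or cache_file_name is None, (
        "Please use either `keep_in_memory` or `cache_file_name` but not both."
    )
    return "ok"

def shuffle(keep_in_memory=False, cache_file_name=None):
    if cache_file_name is None:
        cache_file_name = "cache-shuffled.arrow"  # default filled in by shuffle()
    return select(keep_in_memory=keep_in_memory, cache_file_name=cache_file_name)
```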
Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.
My bad. This is actually not fixed in #513. Sorry about that... The new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. The buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my ...
Ok, I'll investigate and add a series of tests for the `keep_in_memory=True` setting, which is under-tested at the moment.
These are the steps needed to fix this issue:
1. add the following check to `Dataset.shuffle`:
```python
if keep_in_memory and indices_cache_file_name is not None:
    raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.")
```
2. set `indices_cache_file_name` to `None` i...
https://github.com/huggingface/datasets/issues/511
dataset.shuffle() and select() resets format. Intended?
Hi @vegarab yes feel free to open a discussion here. This design choice was not very much thought about. Since `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos). ...
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight? When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later...
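A workaround at the time was to capture the source dataset's format and reapply it to the dataset returned by `select()`/`shuffle()`. A sketch, assuming the `datasets`-style `format` dict (with `type` and `columns` keys) and `set_format()` method:

```python
# Hypothetical helper: copy the format of `src` onto the freshly returned
# dataset `dst` (e.g. the result of select() or shuffle()), then return `dst`.
def with_format_of(src, dst):
    fmt = src.format  # e.g. {"type": "torch", "columns": ["input_ids"], ...}
    dst.set_format(type=fmt["type"], columns=fmt["columns"])
    return dst
```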
I think it's ok to keep the format. If we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed.
Since datasets 1.0.0 the format is not reset anymore. Closing this one, but feel free to re-open if you have other questions
https://github.com/huggingface/datasets/issues/509
Converting TensorFlow dataset example
Do you want to convert a dataset script to the tfds format? If so, we currently have a conversion script, nlp/commands/convert.py, but it goes from tfds to nlp. I think it shouldn't be too hard to make the changes in reverse (with some manual adjustments). If you manage to make it work in reve...
Hi, I want to use TensorFlow datasets with this repo, I noticed you made some conversion script, can you give a simple example of using it? Thanks
https://github.com/huggingface/datasets/issues/508
TypeError: Receiver() takes no arguments
Which version of Apache Beam do you have (can you copy your full environment info here)?
I am trying to load a wikipedia data set

```python
import nlp
from nlp import load_dataset

dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```

Th...
apache-beam==2.23.0
nlp==0.4.0

For me this was resolved by running the same python script on Linux (or really WSL).
Do you manage to run a dummy beam pipeline with python on windows ? You can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py) If you get the same error, it means that the issue comes from apache beam. Otherwise we'll investigat...
Still the same error, so I guess it is on the Apache Beam side then. Thanks for the investigation.
Thanks for trying. Let us know if you find clues about what caused this issue, or if you find a fix.
https://github.com/huggingface/datasets/issues/507
Errors when I use
Looks like an issue with the transformers 3.0.2 version. It works fine when I use the "master" version of transformers.
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors. I am using **transformers 3.0.2**. Code:

```python
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoToke...
```
https://github.com/huggingface/datasets/issues/501
Caching doesn't work for map (non-deterministic)
Thanks for reporting! To store the cache file, we compute a hash of the function given to `.map`, using our own hashing function. The hash doesn't seem to stay the same across sessions for the tokenizer. Apparently this is because the regex at `tokenizer.pat` is not well supported by our hashing function. I'm...
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.

```python
import nlp
import transformers

def main():
    ds = nlp.load_dataset("reddit", split="train[:500]")
    tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")

    def conv...
```
Hi. I believe the fix was for the nlp library. Is there a solution for handling compiled regex expressions in `.map()` with caching? I want to run a simple regex pattern on a big dataset, but I am running into the issue of the compiled expression not being cached. Instead of opening a new issue, I thought I would put my...
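One workaround that sidesteps hashing the compiled object is to keep only the pattern string in the mapped function's closure and compile inside the function (cheap, since `re` memoizes compilation); a sketch:

```python
import re

PATTERN = r"\s+"  # keep the plain pattern string at module level

# Compiling inside the function keeps the captured state to a deterministic
# string, so .map() fingerprinting can hash it reliably; re.compile caches
# internally, so recompiling per call costs little.
def normalize_batch(batch):
    pat = re.compile(PATTERN)
    return {"text": [pat.sub(" ", t) for t in batch["text"]]}
```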
Hi @MaveriQ! This fix is also included in the `datasets` library. Can you provide a reproducer?
https://github.com/huggingface/datasets/issues/492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
In 0.4.0, the assertion in `concatenate_datasets` is on the features, not the schema. Could you try to update `nlp`? Also, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema-cast hack.
Here's the code I'm trying to run:

```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dse...
```
I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas. Since `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There...
I'm doing a refactor of type inference in #363 . Both text fields should match after that
It should be good now. I was able to run

```python
>>> from nlp import concatenate_datasets, load_dataset
>>>
>>> bookcorpus = load_dataset("bookcorpus", split="train")
>>> wiki = load_dataset("wikipedia", "20200501.en", split="train")
>>> wiki.remove_columns_("title")  # only keep the text
>>>
>>> assert boo...
```
https://github.com/huggingface/datasets/issues/488
issues with downloading datasets for wmt16 and wmt19
I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:

```
wget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00
wget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01
wget...
```
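The downloaded parts would then be concatenated back into a single archive before extracting (the equivalent of `cat UNv1.0.en-ru.tar.gz.* > UNv1.0.en-ru.tar.gz`). A Python sketch of that step, assuming the numbered-suffix convention above:

```python
import shutil

# Concatenate split download parts (e.g. UNv1.0.en-ru.tar.gz.00, .01, ...)
# into a single file; sorting relies on the zero-padded numeric suffixes.
def reassemble(part_paths, out_path):
    with open(out_path, "wb") as out:
        for part in sorted(part_paths):
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
```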
I have encountered multiple issues while trying to:

```python
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```

1. I had to do `pip install -e ".[dev]"` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no...
37
issues with downloading datasets for wmt16 and wmt19 I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save ...
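The split-archive reconstruction in the comment above (download the `.00`/`.01` parts, then join them) can also be done from Python. A minimal sketch, assuming the part files are already downloaded; the file names are taken from the comment, everything else is illustrative:

```python
import shutil

def reassemble_parts(parts, output):
    """Concatenate split archive parts (e.g. *.tar.gz.00, *.tar.gz.01) into one file."""
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# Usage (paths assumed from the comment above):
# reassemble_parts(
#     ["UNv1.0.en-ru.tar.gz.00", "UNv1.0.en-ru.tar.gz.01"],
#     "UNv1.0.en-ru.tar.gz",
# )
```

This is the Python equivalent of `cat UNv1.0.en-ru.tar.gz.* > UNv1.0.en-ru.tar.gz`; part order matters, so pass the suffixes sorted.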
https://github.com/huggingface/datasets/issues/488
issues with downloading datasets for wmt16 and wmt19
Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing. Fixed locally for summarization needs, by running: ``` pip install sacrebleu sacrebleu -t wmt19 -l ru-en --echo src > test.source sacrebleu -t wmt19 -l ru-en --echo ref > test.target ``` h/t @sshleifer
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no...
45
issues with downloading datasets for wmt16 and wmt19 I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save ...
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`). Do you know if there exist some copies without this issue ? How would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Coul...
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
69
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
I'm afraid that I don't know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue. Going through the raw text in this version, it's apparent that NLTK's TreebankWordTokenizer was applied on it (I gave some examples in my original post), f...
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
146
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
OK, I get it, that would be very cool indeed. What kinds of patterns can't the detokenizer retrieve?
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
19
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, tokenizing the following text: ``` Dwayne "The Rock" Johnson ``` will result in: ``` Dwayne `` The Rock '' Johnson ``` where the left and right quotation marks are turned into distinct symbols. ...
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
244
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
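To give a rough sense of what detokenization can and cannot undo, here is a small sketch that reverses a few Treebank conventions (`` ``/'' `` back to `"`, split contractions rejoined). This is a hypothetical helper written for illustration, not the detokenizer discussed in the thread, and as the comments above note, some patterns stay ambiguous no matter what:

```python
import re

def rough_detokenize(text):
    """Partially undo Treebank-style tokenization: restore double quotes
    and rejoin common contractions. Heuristic and lossy by design."""
    text = re.sub(r"``\s*", '"', text)                      # opening `` -> "
    text = re.sub(r"\s*''", '"', text)                      # closing '' -> "
    text = re.sub(r"\s+n't", "n't", text)                   # "did n't" -> "didn't"
    text = re.sub(r"\s+'(s|re|ve|ll|d|m)\b", r"'\1", text)  # "it 's" -> "it's"
    return text

print(rough_detokenize("Dwayne `` The Rock '' Johnson"))  # -> Dwayne "The Rock" Johnson
print(rough_detokenize("she did n't go"))                 # -> she didn't go
```

What cannot be recovered this way is exactly the kind of thing mentioned above, e.g. whether a quote in the raw text was originally `"` or a pair of backticks/apostrophes, or where whitespace around punctuation really stood.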
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https://huggingface.co/datasets/bookcorpus Or does this preprocessing exactly match that of the papers?
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
37
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
I believe these are just artifacts of this particular source. It might be better to crawl it again, or use another preprocessed source, as found here: https://github.com/soskek/bookcorpus
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
27
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
https://github.com/huggingface/datasets/issues/486
Bookcorpus data contains pretokenized text
Yes, actually the BookCorpus on huggingface is based on [this](https://github.com/soskek/bookcorpus/issues/24#issuecomment-643933352). And I kind of regret naming it "BookCorpus" instead of something like "BookCorpusLike". But there is good news! @shawwn has replicated BookCorpus in his way, and also provided a ...
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
60
Bookcorpus data contains pretokenized text It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes...
https://github.com/huggingface/datasets/issues/482
Bugs : dataset.map() is frozen on ELI5
This comes from an overflow in pyarrow's array. It is stuck inside the loop that reduces the batch size to avoid the overflow. I'll take a look
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
27
Bugs : dataset.map() is frozen on ELI5 Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset....
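The hang described above, a loop that keeps reducing the batch size to dodge an overflow and gets stuck on a mishandled edge case, can be illustrated with a pure-Python sketch. Everything here is hypothetical; it mirrors the pattern, not `nlp`'s actual writer. Note the `batch_size > 1` guard: without some floor like it, an overflowing or empty batch would spin forever.

```python
def write_in_batches(rows, write_batch, max_batch_bytes=1000):
    """Hypothetical sketch of the pattern above: halve the batch size
    whenever the pending batch would exceed some size limit."""
    i = 0
    while i < len(rows):
        batch_size = len(rows) - i
        # Reduce the batch size to avoid the "overflow"; the > 1 floor
        # guarantees the loop terminates even if a single row is too big.
        while batch_size > 1 and sum(len(r) for r in rows[i:i + batch_size]) > max_batch_bytes:
            batch_size //= 2
        write_batch(rows[i:i + batch_size])
        i += batch_size

out = []
write_in_batches(["a" * 600, "b" * 600, "c" * 10], out.append)
print([len(b) for b in out])  # -> [1, 2]
```

The two oversized rows get split into one batch of 1 and one batch of 2; an empty `rows` list terminates immediately instead of freezing.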
https://github.com/huggingface/datasets/issues/482
Bugs : dataset.map() is frozen on ELI5
I created a PR to fix the issue. It was due to an overflow check that badly handled an empty list. You can try the changes by using ``` !pip install git+https://github.com/huggingface/nlp.git@fix-bad-type-in-overflow-check ``` Also, I noticed that the first 1000 examples have an empty list in the `title_urls`...
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
147
Bugs : dataset.map() is frozen on ELI5 Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset....
https://github.com/huggingface/datasets/issues/482
Bugs : dataset.map() is frozen on ELI5
@lhoestq mapping the function `make_input_target` now passes with your fix. However, there is another error in the final step of `valid_dataset.map(convert_to_features, batched=True)`: `ArrowInvalid: Could not convert Thepiratebay.vg with type str: converting to null type` (The [same colab notebook above with ne...
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
94
Bugs : dataset.map() is frozen on ELI5 Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset....
https://github.com/huggingface/datasets/issues/482
Bugs : dataset.map() is frozen on ELI5
I got this issue too and fixed it by specifying `writer_batch_size=3_000` in `.map`. This is because Arrow didn't expect `Thepiratebay.vg` in `title_urls `, as all previous examples have empty lists in `title_urls `
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
33
Bugs : dataset.map() is frozen on ELI5 Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset....
https://github.com/huggingface/datasets/issues/482
Bugs : dataset.map() is frozen on ELI5
I'm getting a hanging `dataset.map()` when running a gradio app with `gradio` for auto-reloading instead of `python`
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
17
Bugs : dataset.map() is frozen on ELI5 Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset....
https://github.com/huggingface/datasets/issues/482
Bugs : dataset.map() is frozen on ELI5
Maybe this is an issue with gradio; could you open an issue on their repo? `Dataset.map` simply uses `multiprocess.Pool` for multiprocessing. If you interrupt the program, maybe the stack trace would give some information on where it was hanging in the code (maybe a lock somewhere?)
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
48
Bugs : dataset.map() is frozen on ELI5 Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset....
https://github.com/huggingface/datasets/issues/478
Export TFRecord to GCP bucket
Never mind, I restarted my Python session and it worked fine... --- I had an authentication error, and I authenticated from another terminal. After that, there was no more error, but it was not working. Restarting the session makes it work :)
Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket. `dataset.export('local.tfrecord')` works fine, but `dataset....
39
Export TFRecord to GCP bucket Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket. `dataset.export('local.tfrecord...
https://github.com/huggingface/datasets/issues/477
Overview.ipynb throws exceptions with nlp 0.4.0
Thanks for reporting this issue There was a bug where numpy arrays would get returned instead of tensorflow tensors. This is fixed on master. I tried to re-run the colab and encountered this error instead: ``` AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor' ...
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: trai...
83
Overview.ipynb throws exceptions with nlp 0.4.0 with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-4890...
https://github.com/huggingface/datasets/issues/477
Overview.ipynb throws exceptions with nlp 0.4.0
Hi, I got another error (on Colab): ```python # You can read a few attributes of the datasets before loading them (they are python dataclasses) from dataclasses import asdict for key, value in asdict(datasets[6]).items(): print('👉 ' + key + ': ' + str(value)) -------------------------------------------...
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: trai...
110
Overview.ipynb throws exceptions with nlp 0.4.0 with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-4890...
https://github.com/huggingface/datasets/issues/474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing` As mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa...
43
test_load_real_dataset when config has BUILDER_CONFIGS that matter If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_datase...
https://github.com/huggingface/datasets/issues/474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
This was fixed in #527 Closing this one, but feel free to re-open if you have other questions
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa...
18
test_load_real_dataset when config has BUILDER_CONFIGS that matter If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_datase...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
Hi ! Did you try to set the output format to pytorch ? (or tensorflow if you're using tensorflow) It can be done with `dataset.set_format("torch", columns=columns)` (or "tensorflow"). Note that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the list of column...
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
57
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
Hello . Yes, I did set the output format as below for the two columns `train_dataset.set_format('torch',columns=['Text','Label'])`
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
16
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
I think you're having this issue because you try to format strings as pytorch tensors, which is not possible. Indeed by having "Text" in `columns=['Text','Label']`, you try to convert the text values to pytorch tensors. Instead I recommend you to first tokenize your dataset using a tokenizer from transformers. For ...
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
133
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
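To make the advice above concrete (pytorch can't make tensors out of strings, so exclude string columns when setting the format), here is a small stdlib-only sketch of picking out the tensor-convertible columns. `tensorable_columns` is a hypothetical helper written for illustration, not part of the `nlp` API:

```python
def tensorable_columns(batch):
    """Return the names of columns whose values are numeric (convertible to
    tensors), skipping string columns -- the ones set_format("torch", ...)
    should exclude."""
    return [
        name for name, values in batch.items()
        if values and all(isinstance(v, (int, float)) for v in values)
    ]

batch = {"Text": ["hello", "world"], "input_ids": [101, 102], "Label": [0, 1]}
print(tensorable_columns(batch))  # -> ['input_ids', 'Label']
```

In practice you would tokenize first, then pass such a list as the `columns=` argument, e.g. `dataset.set_format("torch", columns=["input_ids", "Label"])`, leaving raw-text columns out.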
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
Hi, actually the thing is I am getting the same error, even after tokenizing and passing them through batch_encode_plus. I don't know what the problem is. I even converted it into 'pt' while passing them through batch_encode_plus, but when I am evaluating my model, I am getting this error ----...
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
115
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
> Hi, actually the thing is I am getting the same error and even after tokenizing them I am passing them through batch_encode_plus. > I dont know what seems to be the problem is. I even converted it into 'pt' while passing them through batch_encode_plus but when I am evaluating my model , i am getting this error > ...
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
160
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
21
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
> I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ? I'm sorry, I met this issue in another place (not in the huggingface repo).
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
36
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
@akhilkapil do you have strings in your dataset ? When you set the dataset format to "pytorch" you should exclude columns with strings as pytorch can't make tensors out of strings
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
31
invalid data type 'str' at _convert_outputs in arrow_dataset.py I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert...
https://github.com/huggingface/datasets/issues/468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
Indeed. Solution 1 is the simplest. This is actually a recurring problem. I think we should scan all the datasets with a regexp to fix the use of `open()` without encodings. And probably add a test in the CI to forbid using this in the future.
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-inp...
45
UnicodeDecodeError while loading PAN-X task of XTREME dataset Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError ...
https://github.com/huggingface/datasets/issues/468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
I've created a simple function that seems to do the trick: ```python def apply_encoding_on_file_open(filepath: str): """Apply UTF-8 encoding for all instances where a non-binary file is opened.""" with open(filepath, 'r', encoding='utf-8') as input_file: regexp = re.compile(r""" ...
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-inp...
200
UnicodeDecodeError while loading PAN-X task of XTREME dataset Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError ...
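A simplified cousin of the scan described above: flag `open()` calls that pass neither an `encoding=` argument nor a binary mode. This is a naive single-line sketch with hypothetical names, not the actual function from the comment; nested parentheses, multi-line calls, and aliased `open` are not handled:

```python
import re

OPEN_CALL = re.compile(r"\bopen\(([^)]*)\)")
BINARY_MODE = re.compile(r"""['"][rwax]\+?b""")

def flag_missing_encoding(source):
    """Return the argument lists of open() calls that specify neither an
    explicit encoding nor a binary mode (binary opens take no encoding)."""
    return [
        m.group(1) for m in OPEN_CALL.finditer(source)
        if "encoding=" not in m.group(1) and not BINARY_MODE.search(m.group(1))
    ]

code = 'f = open("data.txt", "r")\ng = open("ok.txt", encoding="utf-8")\nh = open("img.png", "rb")\n'
print(flag_missing_encoding(code))  # -> ['"data.txt", "r"']
```

A CI check along these lines would fail the build whenever the returned list is non-empty, which is roughly the "forbid using this in the future" idea from the earlier comment.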
https://github.com/huggingface/datasets/issues/468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
I realised I was overthinking the problem, so decided to just run the regexp over the codebase and make the PR. In other words, we can ignore my comments about using the CLI 😸
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-inp...
34
UnicodeDecodeError while loading PAN-X task of XTREME dataset Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError ...
https://github.com/huggingface/datasets/issues/444
Keep loading old file even I specify a new file in load_dataset
This is the only fix I could come up with without touching the repo's code. ```python from nlp.builder import FORCE_REDOWNLOAD dataset = load_dataset('csv', data_file='./a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1') ``` You'll have to change the version each time you want to load a different csv file. ...
I used load a file called 'a.csv' by ``` dataset = load_dataset('csv', data_file='./a.csv') ``` And after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset seems to remain the old 'a.csv' and not loading new csv file. Even...
88
Keep loading old file even I specify a new file in load_dataset I used load a file called 'a.csv' by ``` dataset = load_dataset('csv', data_file='./a.csv') ``` And after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset see...
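The workaround above bumps `version` to dodge a cache key that ignores the data file's contents. A toy sketch of a content-aware cache key (hypothetical names, not the library's fingerprinting) shows why hashing the file contents avoids the collision between `a.csv` and `b.csv`:

```python
import hashlib

def cache_key(data_file_bytes, version="0.0.1"):
    """Toy cache key: include a hash of the file contents so that pointing
    the loader at a different file yields a different cache entry."""
    digest = hashlib.sha256(data_file_bytes).hexdigest()[:16]
    return f"csv-{version}-{digest}"

# Two different CSV files no longer collide in the cache:
print(cache_key(b"a,b\n1,2\n") != cache_key(b"c,d\n3,4\n"))  # -> True
```

With a key like this, reloading the same file hits the cache as before, while loading a different file (even under the same version string) produces a fresh entry instead of silently returning the old data.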
https://github.com/huggingface/datasets/issues/443
Cannot unpickle saved .pt dataset with torch.save()/load()
This seems to be fixed in a non-released version. Installing nlp from source ``` git clone https://github.com/huggingface/nlp cd nlp pip install . ``` solves the issue.
Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype...
26
Cannot unpickle saved .pt dataset with torch.save()/load() Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text...
https://github.com/huggingface/datasets/issues/439
Issues: Adding a FAISS or Elastic Search index to a Dataset
`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`. Right now you can experiment with it by installing `transformers` from the master branch. You can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html). Moreove...
It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from github in Colab. Is there any dependency on t...
50
Issues: Adding a FAISS or Elastic Search index to a Dataset It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nl...
https://github.com/huggingface/datasets/issues/439
Issues: Adding a FAISS or Elastic Search index to a Dataset
@lhoestq I tried installing transformers from the master branch. The Python imports for DPR again didn't work. Anyway, looking forward to trying it in the next release of nlp.
It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on t...
28
Issues: Adding a FAISS or Elastic Search index to a Dataset It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nl...
https://github.com/huggingface/datasets/issues/438
New Datasets: IWSLT15+, ITTB
Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically takes 1-2 hours to add one. Do you want to give it a try? The tutorial on writing a new dataset loading script is here: https://huggingface.co/nlp/add_dataset.html And the part on how to share a new ...
**Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/60450...
63
New Datasets: IWSLT15+, ITTB **Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-ima...
https://github.com/huggingface/datasets/issues/438
New Datasets: IWSLT15+, ITTB
Hi @sshleifer, I'm trying to add IWSLT using the link you provided, but the download urls are not working. Only the `[en, de]` pair is working. For other language pairs it throws a `404` error.
**Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/60450...
34
New Datasets: IWSLT15+, ITTB **Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-ima...
https://github.com/huggingface/datasets/issues/436
Google Colab - load_dataset - PyArrow exception
+1! this is the reason our tests are failing at [TextAttack](https://github.com/QData/TextAttack) (Though it's worth noting if we fixed the version number of pyarrow to 0.16.0 that would fix our problem too. But in this case we'll just wait for you all to update)
With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest...
43
Google Colab - load_dataset - PyArrow exception With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running thi...
https://github.com/huggingface/datasets/issues/436
Google Colab - load_dataset - PyArrow exception
Came to raise this issue; great to see others already have and it's being fixed so soon! As an aside, since no one wrote this already, it seems the version check only looks at the second part of the version number, making sure it is >=16, but pyarrow's newest version is 1.0.0, so the second part is 0!
With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest...
59
Google Colab - load_dataset - PyArrow exception With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running thi...
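The version-check bug described above is easy to reproduce in isolation: comparing only the minor component rejects `1.0.0` even though it is newer than `0.16.0`. A sketch of the failure and a tuple-comparison fix (function names here are illustrative, not the library's actual code):

```python
def naive_check(version: str) -> bool:
    # Buggy: looks only at the second component, so "1.0.0" yields 0 and fails.
    return int(version.split(".")[1]) >= 16

def tuple_check(version: str, minimum: str = "0.16.0") -> bool:
    # Correct: compare full version tuples lexicographically.
    def to_tuple(v: str):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)

print(naive_check("1.0.0"))   # False -- the reported bug
print(tuple_check("1.0.0"))   # True
print(tuple_check("0.15.1"))  # False
```

Real code should prefer a proper version parser (e.g. `packaging.version`) over string splitting, but the tuple form is enough to show why the naive check breaks.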
https://github.com/huggingface/datasets/issues/436
Google Colab - load_dataset - PyArrow exception
> Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq Yes definitely
With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest...
17
Google Colab - load_dataset - PyArrow exception With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running thi...
https://github.com/huggingface/datasets/issues/435
ImportWarning for pyarrow 1.0.0
This was fixed in #434 We'll do a release later this week to include this fix. Thanks for reporting
The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files
19
ImportWarning for pyarrow 1.0.0 The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files This was fixed in #434 We'll do a release later this week to include this fix. Thanks for reporting
https://github.com/huggingface/datasets/issues/435
ImportWarning for pyarrow 1.0.0
I don't know if the fix was made, but the problem is still present. Installed with pip: NLP 0.3.0 // pyarrow 1.0.0. OS: Arch Linux with kernel zen 5.8.5
The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files
31
ImportWarning for pyarrow 1.0.0 The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files I dont know if the fix was made but the problem is still present : Instaled with pip : NLP 0.3.0 // pyarrow 1.0.0 OS : archlinux with kernel zen 5.8.5
https://github.com/huggingface/datasets/issues/433
How to reuse functionality of a (generic) dataset?
Hi @ArneBinder, we have a few "generic" datasets which are intended to load data files with a predefined format: - csv: https://github.com/huggingface/nlp/tree/master/datasets/csv - json: https://github.com/huggingface/nlp/tree/master/datasets/json - text: https://github.com/huggingface/nlp/tree/master/datasets/text...
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
56
How to reuse functionality of a (generic) dataset? I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create spec...
https://github.com/huggingface/datasets/issues/433
How to reuse functionality of a (generic) dataset?
> Maybe your brat loading script could be shared in a similar fashion? @thomwolf that was also my first idea and I think I will tackle that in the next days. I separated the code and created a real abstract class `AbstractBrat` to allow to inherit from that (I've just seen that the dataset_loader loads the first non...
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
416
How to reuse functionality of a (generic) dataset? I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create spec...
https://github.com/huggingface/datasets/issues/433
How to reuse functionality of a (generic) dataset?
Hi! You can either copy&paste the builder script and import the builder from there or use `datasets.load_dataset_builder` inside the script and call the methods of the returned builder object.
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
29
How to reuse functionality of a (generic) dataset? I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create spec...
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter
Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together?
20
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all tog...
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter
So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/data/ops/dataset_ops.py#L1623). There, `num_parallel_calls` is turned into a tensor and and fed to `gen_dataset...
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together?
47
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all tog...
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter
Multiprocessing was added in #552. You can set the number of processes with `.map(..., num_proc=...)`. It also works for `filter`. Closing this one, but feel free to re-open if you have other questions
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together?
34
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all tog...
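The sharding idea from the thread (send examples to a process pool, collect the results in order) can be sketched with the standard library; `parallel_map` and the per-example function below are illustrative stand-ins, not the `datasets` API:

```python
from multiprocessing import Pool

def num_tokens(example: str) -> int:
    # Stand-in for a per-example function such as a tokenizer call.
    return len(example.split())

def parallel_map(examples, num_proc: int = 2):
    # Pool.map distributes examples across num_proc worker processes
    # and returns the results in the original order.
    with Pool(num_proc) as pool:
        return pool.map(num_tokens, examples)

if __name__ == "__main__":
    data = ["a short sentence", "one more example here", "hi"]
    print(parallel_map(data))  # [3, 4, 1]
```

The library's `num_proc` does essentially this at the shard level on Arrow tables, with one cache file written per process.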
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter
@lhoestq Great feature implemented! Do you have plans to add it to the official tutorials, [Processing data in a Dataset](https://huggingface.co/docs/datasets/processing.html?highlight=save#augmenting-the-dataset)? It took me some time to find this parallel-processing API.
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together?
29
[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all tog...
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
Hi @lhoestq I made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `"features"` part of the PAN-X.LANG dataset: ```json "features":{ "word":{ "dtype":"string", "id":null, "_type":"Valu...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
148
Correct data structure for PAN-X task in XTREME dataset? Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", ...
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
Hi ! You have to point to your local script. First clone the repo and then: ```python dataset = load_dataset("./datasets/xtreme", "PAN-X.en") ``` The "xtreme" directory contains "xtreme.py". You also have to change the features definition in the `_info` method. You could use: ```python features = nlp.F...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
66
Correct data structure for PAN-X task in XTREME dataset? Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", ...
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
Thanks, I am making progress. I got a new error `NonMatchingSplitsSizesError ` (see traceback below), which I suspect is due to the fact that number of rows in the dataset changed (one row per word --> one row per sentence) as well as the number of bytes due to the slightly updated data structure. ```python NonMat...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
130
Correct data structure for PAN-X task in XTREME dataset? Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", ...
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
One more thing about features. I mentioned ```python features = nlp.Features({ "words": [nlp.Value("string")], "ner_tags": [nlp.Value("string")], "langs": [nlp.Value("string")], }) ``` but it's actually not consistent with the way we write datasets. Something like this is simpler to read and mor...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
61
Correct data structure for PAN-X task in XTREME dataset? Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", ...
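The restructuring under discussion (one row per word becomes one row per sentence, with aligned `words`, `ner_tags`, and `langs` lists) can be sketched in plain Python. The blank-row sentence delimiter is an assumption about the raw PAN-X format, and the helper name is illustrative:

```python
def group_into_sentences(rows):
    """Group (word, tag, lang) rows into sentence-level examples.
    A None row marks a sentence boundary (assumed delimiter)."""
    examples, words, tags, langs = [], [], [], []
    for row in rows:
        if row is None:  # sentence boundary
            if words:
                examples.append({"words": words, "ner_tags": tags, "langs": langs})
                words, tags, langs = [], [], []
        else:
            word, tag, lang = row
            words.append(word)
            tags.append(tag)
            langs.append(lang)
    if words:  # flush the trailing sentence
        examples.append({"words": words, "ner_tags": tags, "langs": langs})
    return examples

rows = [("EU", "B-ORG", "en"), ("rejects", "O", "en"), None, ("Peter", "B-PER", "en")]
print(group_into_sentences(rows))
```

Note that the grouping changes both the row count and the byte size, which is exactly why the `NonMatchingSplitsSizesError` above appears until `dataset_infos.json` is regenerated.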
https://github.com/huggingface/datasets/issues/418
Addition of google drive links to dl_manager
I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . @lhoestq
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ...
20
Addition of google drive links to dl_manager Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoCo...
https://github.com/huggingface/datasets/issues/418
Addition of google drive links to dl_manager
Oh sorry, I think `_get_drive_url` is doing that. Have you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`? it should work with google drive links.
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ...
21
Addition of google drive links to dl_manager Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoCo...
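The URL shape suggested in the thread can be wrapped in a small helper; the file-id extraction below assumes the common `https://drive.google.com/file/d/<id>/view` sharing format, and the function name is illustrative:

```python
import re

def to_direct_download(share_url: str) -> str:
    """Rewrite a Drive sharing link into the direct-download form
    https://drive.google.com/uc?export=download&id=<file_id>."""
    match = re.search(r"/d/([^/]+)", share_url)
    if not match:
        raise ValueError(f"no file id found in {share_url!r}")
    return f"https://drive.google.com/uc?export=download&id={match.group(1)}"

print(to_direct_download("https://drive.google.com/file/d/abc123/view?usp=sharing"))
# https://drive.google.com/uc?export=download&id=abc123
```

With the URL in this form, `dl_manager.download_and_extract(...)` should receive a direct file response rather than the Drive preview page.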
https://github.com/huggingface/datasets/issues/414
from_dict delete?
`from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though. Right now if you want to use `from_dict` you have to install the package from the master branch ``` pip install git+https://github.com/huggingface...
AttributeError: type object 'Dataset' has no attribute 'from_dict'
53
from_dict delete? AttributeError: type object 'Dataset' has no attribute 'from_dict' `from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though. Right now if you want to use `from_dict` you have to instal...
https://github.com/huggingface/datasets/issues/414
from_dict delete?
> `from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though. > Right now if you want to use `from_dict` you have to install the package from the master branch > > ``` > pip install git+https://github.com...
AttributeError: type object 'Dataset' has no attribute 'from_dict'
62
from_dict delete? AttributeError: type object 'Dataset' has no attribute 'from_dict' > `from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though. > Right now if you want to use `from_dict` you have to in...
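`Dataset.from_dict` takes a mapping from column names to equal-length lists. A minimal stand-in that enforces that invariant and yields row dicts (illustrative only, not the library code):

```python
def from_dict(columns: dict) -> list:
    """Turn {"col": [v0, v1, ...]} into a list of row dicts,
    requiring all columns to have the same length."""
    lengths = {name: len(values) for name, values in columns.items()}
    if len(set(lengths.values())) > 1:
        raise ValueError(f"column lengths differ: {lengths}")
    n = next(iter(lengths.values()), 0)
    return [{name: columns[name][i] for name in columns} for i in range(n)]

rows = from_dict({"text": ["a", "b"], "label": [0, 1]})
print(rows)  # [{'text': 'a', 'label': 0}, {'text': 'b', 'label': 1}]
```

The real method builds an Arrow table from the same columnar input instead of Python row dicts.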
https://github.com/huggingface/datasets/issues/413
Is there a way to download only NQ dev?
Unfortunately it's not possible to download only the dev set of NQ. I think we could add a way to download only the test set by adding a custom configuration to the processing script though.
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
35
Is there a way to download only NQ dev? Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('n...
https://github.com/huggingface/datasets/issues/413
Is there a way to download only NQ dev?
Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially also others. For us, it will in this case make the difference of using the library or keeping the old downloads of the raw dev datasets. However, I don't know if that fits into your plans with the library ...
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
70
Is there a way to download only NQ dev? Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('n...
https://github.com/huggingface/datasets/issues/413
Is there a way to download only NQ dev?
I don't think we could force this behavior generally, since dataset script authors are free to organize the file downloads as they want (sometimes the mapping between split and files can be very much nontrivial), but we can indeed add an additional configuration for Natural Questions, as @lhoestq indicated.
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
50
Is there a way to download only NQ dev? Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('n...
https://github.com/huggingface/datasets/issues/412
Unable to load XTREME dataset from disk
Hi @lewtun, you have to provide the full path to the downloaded file for example `/home/lewtum/..`
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho...
16
Unable to load XTREME dataset from disk Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. ...
https://github.com/huggingface/datasets/issues/412
Unable to load XTREME dataset from disk
I was able to repro. Opening a PR to fix that. Thanks for reporting this issue !
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho...
17
Unable to load XTREME dataset from disk Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. ...
https://github.com/huggingface/datasets/issues/407
MissingBeamOptions for Wikipedia 20200501.en
Fixed. Could you try again @mitchellgordon95? It was due to a file not being updated on S3. We need to make sure all the dataset scripts get updated properly @julien-c
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
30
MissingBeamOptions for Wikipedia 20200501.en There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: `...
https://github.com/huggingface/datasets/issues/407
MissingBeamOptions for Wikipedia 20200501.en
I found the same issue with almost any language other than English. (For English, it works). Will someone need to update the file on S3 again?
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
26
MissingBeamOptions for Wikipedia 20200501.en There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: `...
https://github.com/huggingface/datasets/issues/407
MissingBeamOptions for Wikipedia 20200501.en
This is because only some languages are already preprocessed (en, de, fr, it) and stored on our google storage. We plan to have a systematic way to preprocess more wikipedia languages in the future. For the other languages you have to process them on your side using apache beam. That's why the lib asks for a Beam r...
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
58
MissingBeamOptions for Wikipedia 20200501.en There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: `...
https://github.com/huggingface/datasets/issues/406
Faster Shuffling?
I think the slowness here probably comes from the fact that we are copying to and from Python. @lhoestq, for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
51
Faster Shuffling? Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `wri...
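The `select`-based shuffle path amounts to materializing a permutation of row indices and then gathering rows in that order; a pure-Python sketch of the idea (the real implementation gathers from Arrow tables, which is where the copying cost discussed above comes in):

```python
import random

def shuffled_indices(num_rows: int, seed: int = 42) -> list:
    # Shuffling a dataset = shuffling its row indices, then gathering.
    indices = list(range(num_rows))
    random.Random(seed).shuffle(indices)
    return indices

rows = ["r0", "r1", "r2", "r3", "r4"]
perm = shuffled_indices(len(rows))
shuffled = [rows[i] for i in perm]
print(sorted(shuffled) == rows)  # True: same rows, new order
```

The gather step is where staying in Arrow (rather than round-tripping each row through Python objects) would pay off.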