Columns: title (string, 5-164 chars), labels (list), bodyText (string, 0-46.7k chars)
TensorBoardLogger not working as expected with accumulate_grad_batches>1
[ "bug", "help wanted", "logger" ]
πŸ› Bug When logging inside training step to TensorBoard and using accumulate_grad_batches > 1 inside pl.Trainer() the behavior is not as expected. With accumulate_grad_batches = 1 everything looks good. With accumulate_grad_batches = 8 the values are reported on the same step. To Reproduce (sorry for not using colab)...
callbacks: some callback hooks can be cleaned up
[ "feature", "help wanted", "refactor", "callback" ]
πŸš€ Feature Motivation There are some duplicate callback hooks which can be removed. ON TRAIN START ON EPOCH START <-- same 1 ON TRAIN EPOCH START <-- same 1 ON BATCH START <-- same 2 ON TRAIN BATCH START <-- same 2 ON BATCH END ON TRAIN BATCH END ON BATCH START ON TRAIN BATCH START ON BATCH END...
Checkpoint hparams.yaml does not save the current self.hparams, but only what was passed to self.save_hyperparameters
[ "help wanted", "question", "working as intended", "design", "checkpointing" ]
πŸ› Bug I expect that lightning_logs/version_0/hparams.yaml matches self.hparams when Checkpoint is called, or at least matches self.hparams when calling trainer.fit(model). Instead hparams.yaml only contains the arguments given to self.save_hyperparameters() (empty is all arguments). Please reproduce using the BoringM...
After DDP train processes have different best val paths
[ "bug", "help wanted", "priority: 0", "distributed" ]
πŸ› Bug Tied to huggingface/transformers#7852 There is no synchronisation/communication to ensure the model has finished saving before loading. If you look at ddp_spawn/ddp_cpu there is communication to ensure that each process has the same best_val_path stored in the model after save. Run below on multi-gpu: # Copyrig...
Get progress bar in VS Code instead of text/stream based progress bar
[ "feature", "won't fix" ]
Hi, one of the maintainers of the Python extension that provides Jupyter functionality in VS Code. Here's the original issue: https://github.com/microsoft/vscode-python/issues/14476#issuecomment-714895448 What is your question? When running the code in class Jupyter notebook, we get the widget (I think tqdm) progress b...
Remove tensorboard dependency
[ "feature", "help wanted", "won't fix", "discussion" ]
πŸš€ Feature It would be cool if TB was removed as a dependency, as some people (like me) don't really use it. Motivation Tensorboard dependency currently represents close to 50% of the download volume of PL (assuming you start with pytorch installed) - 6.2 MB out of 12.6MB for me, when I install from conda. So removing ...
Saving / loading hparams type in checkpoint
[ "help wanted", "docs" ]
πŸ› Bug There seems to be an issue with saving or loading hyperparams type in checkpoints. Related to #924 (unrelated to #3998). Please reproduce using the BoringModel and post here Here is the BoringModel gist using the snippet from @awaelchli in #924 (running in colab has been unreliable, see #3998) https://gist.githu...
Get help of argparse from docstring.
[ "feature", "help wanted" ]
πŸš€ Feature get help from docstring for argparse. Motivation In code todo: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/utilities/argparse_utils.py#L163
[Docs] redundant whitespace included in debugging section code, new-project.rst
[ "docs" ]
πŸ“š Documentation For typos and doc fixes, please go ahead and: Create an issue. Fix the typo. Submit a PR. Thanks! Typo https://pytorch-lightning.readthedocs.io/en/latest/new-project.html#debugging # train only 20% of an epoch trainer = pl. Trainer(limit_train_batches=0.2) This works, but I think it should be align...
Metrics fail on DP and multiple GPU
[ "feature", "help wanted", "strategy: dp" ]
πŸ› Bug When using a metric such as Accuracy from pytorch_lightning.metrics in machine with 4 GPU and in 'dp' mode, there is an error due to accumulating the metric in different devices. In the case of Accuracy, in line: pytorch-lightning/pytorch_lightning/metrics/classification/accuracy.py ...
Does AccuracyMetric compute accumulated or mean values of each batch?
[ "question" ]
There are two ways to calculate accuracy: (1) calculate the accuracy of each batch, then take the mean in validation_epoch_end; (2) accumulate each batch's correct and total counts, then compute accuracy once in validation_epoch_end. Which one does AccuracyMetric do?
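The two schemes give different answers whenever batch sizes vary (e.g. a smaller last batch). A minimal sketch contrasting them in plain Python (function names are illustrative, not Lightning API):

```python
def batchwise_mean_accuracy(batches):
    # Scheme 1: accuracy per batch, then mean of the per-batch accuracies.
    return sum(c / t for c, t in batches) / len(batches)

def accumulated_accuracy(batches):
    # Scheme 2: accumulate correct/total counts, compute once at epoch end.
    correct = sum(c for c, _ in batches)
    total = sum(t for _, t in batches)
    return correct / total

# (correct, total) per batch; the last batch is smaller.
batches = [(8, 10), (9, 10), (1, 5)]
print(batchwise_mean_accuracy(batches))  # ~0.633 (small batch over-weighted)
print(accumulated_accuracy(batches))     # 0.72
```

As far as I can tell, the class-based metrics introduced in 1.0 follow scheme 2 (they keep correct/total state across update() calls), which is why they are safe across uneven batches.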
Adding a Metric to LightningModule prevents loading of a checkpoint / weights
[ "bug", "help wanted", "checkpointing" ]
πŸ› Bug Adding a Metric like Accuracy prevents the loading of a .ckpt due to missing keys: RuntimeError: Error(s) in loading state_dict for BoringModel2: Missing key(s) in state_dict: "pl_accuracy.correct", "pl_accuracy.total". Please reproduce using the BoringModel and post here https://colab.research.google.com/dr...
show progress bar only on rank 0 with ddp_slurm
[ "feature", "help wanted" ]
πŸ› Bug The progress bars will show repeatedly when using slurm for multi-nodes To Reproduce Using pytorch_lightning To run this template just do: python generative_adversarial_net.py After a few epochs, launch TensorBoard to see the images being generated at every batch: tensorboard --logdir default import os from ...
Not-yet-existing resume_from_checkpoint for auto-resubmit
[ "feature", "help wanted", "checkpointing" ]
πŸš€ Feature Accept Not-yet-existing resume_from_checkpoint in Trainer for automatic training resume / auto-resubmit. Motivation In cloud ML training services (e.g. Google AI platform training, AWS SageMaker, AWS Batch), there are Job auto-retry feature. If we can specify checkpoint path, Job auto-retry can be used for t...
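Until such a flag exists, the requested behavior can be approximated with a small wrapper around the path (a hypothetical helper, not Lightning API):

```python
import os

def resume_path_if_exists(path):
    # Hypothetical wrapper: return the checkpoint path only when the file
    # already exists, so the first run starts from scratch and an
    # auto-resubmitted run resumes from the checkpoint it saved earlier.
    return path if os.path.exists(path) else None
```

Usage would be `Trainer(resume_from_checkpoint=resume_path_if_exists("last.ckpt"))`: on the first attempt the argument resolves to None, and after a retry the saved checkpoint is picked up.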
model.to_onnx() fails if self.example_input_array is a list
[ "feature", "help wanted" ]
πŸ› Bug Using mode.to_onnx() does not work if the defined example array is a list or tuple of multiple inputs. The reason is, that it tries to call inputs.to(device) on the list object: Traceback (most recent call last): File "C:\Users\Tobias\Anaconda3\envs\pytorch_local\lib\site-packages\pytorch_lightning\trainer\tr...
LearningRateMonitor produces inconsistent logs with logging_interval="epoch"
[ "bug", "help wanted", "logger" ]
πŸ› Bug LearningRateMonitor uses step=trainer.current_epoch whereas in other places, it is always step=trainer.global_step. This creates inconsistencies and makes the log hard to process: https://colab.research.google.com/drive/1bucI1oGCc_xNsnP_lvBWTz3x_bauH3o7 [{'lr-SGD': 0.1, 'step': 0}, {'epoch': 0, 'step': 49, 'tra...
WandbLogger _sanitize_callable_params throws AttributeError if param does not have __name__
[ "bug", "help wanted", "logger" ]
πŸ› Bug Using WandB logger throws an error: During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/adrian/.conda/envs/lightning/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap fn(i, *args) File "/home/adrian/repositorie...
Role of internal `hpc_load` & `hpc_save`
[ "question" ]
❓ Questions and Help What is your question? What is the role/responsibility of hpc_load & hpc_save? Is it the same as that of restore? Motivation pl has two internal ways of saving/loading: the restore way & the hpc_load/hpc_save way. They do similar dumping/loading, and have different checkpoint selection mechanisms. If these two methods have...
Pre-computing total number of training steps
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature As it stands, Lightning does not expose the anticipated total number of training steps for a given run. For instance, let's say we specify max_epochs as 100. Our data loader has a total length of 24, our batch size is 4, and we use 2 GPUs. From this we could easily calculate that we'll perform 100 * 24 / (2 ...
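The arithmetic the issue describes can be sketched as a small helper (a hypothetical function, assuming the dataloader length is in batches and is sharded evenly across GPUs as in DDP):

```python
import math

def total_training_steps(max_epochs, num_batches, num_gpus=1,
                         accumulate_grad_batches=1):
    # Hypothetical helper: total optimizer steps for a run, assuming the
    # dataloader is sharded across GPUs (DDP-style) and gradients are
    # accumulated over `accumulate_grad_batches` batches.
    batches_per_gpu = math.ceil(num_batches / num_gpus)
    steps_per_epoch = math.ceil(batches_per_gpu / accumulate_grad_batches)
    return max_epochs * steps_per_epoch

print(total_training_steps(max_epochs=100, num_batches=24, num_gpus=2))  # 1200
```

The edge cases (uneven last shard, drop_last, infinite dataloaders) are exactly why exposing this from the Trainer, which already knows all these settings, would be more reliable than user-side arithmetic.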
loss from progress bar appears to be sum of loss across all GPUs in Lightning 1.0.3
[ "bug", "help wanted", "logger" ]
πŸ› Bug The loss from progress bar during the training appears to be sum of loss across of all GPUs Please reproduce using the BoringModel and post here To Reproduce One can pick any model and dataset and varies the number of GPUs for training. In my example, I have roughly 0.7 loss for 1 GPUs, 1.4 loss for 2 GPUs an...
Does one need to reset Metrics during the end of epoch in Lightning 1.0.3
[ "question" ]
❓ Questions and Help Before asking: What is your question? Given the new metrics in 1.0.0 and later (which I really like!), I have three accuracy metrics for training, validation and test initialized in __init__ function. Do I need to reset them at the end of training and validation epoch given they will be used multi...
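The question comes down to where the metric's state lives. A toy stand-in for a stateful metric (not the Lightning class) makes the reset semantics concrete:

```python
class RunningAccuracy:
    """Toy stand-in for a stateful metric (not the Lightning class):
    accumulates counts across batches until reset() is called."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.correct = 0
        self.total = 0

    def update(self, correct, total):
        self.correct += correct
        self.total += total

    def compute(self):
        return self.correct / self.total
```

As far as I understand the 1.0 metrics API, logging the metric object via self.log lets Lightning call compute() and reset() at the epoch boundaries for you; if you call compute() yourself in an epoch-end hook, you are responsible for calling reset() so state does not leak into the next epoch.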
Spec for DDP tests
[ "ci" ]
on_epoch logging in validation_step appears to miss the data for the 1st epoch: Lightning 1.0.3
[ "bug", "help wanted", "waiting on author", "logger" ]
πŸ› Bug On_epoch logging in validation_step appears to miss the data for the 1st epoch. My epoch has 313 steps and therefore I expect the first on_epoch logging in validation_step should at step 312 (0-indexed), but I saw it was at step 625, the end of the 2nd epochs. Please reproduce using the BoringModel and post her...
Add tests for .write in step result
[ "help wanted", "ci" ]
Add tests for parsing.py
[ "help wanted", "ci" ]
Enable trainer val_check_interval to be greater than number of the training batches
[ "feature", "help wanted", "design" ]
Currently I can't set val_check_interval greater than the number of training batches. The error occurs: /site-packages/pytorch_lightning/trainer/data_loading.py", line 203, in reset_train_dataloader raise ValueError( ValueError: `val_check_interval` (1000000) must be less than or equal to the number of the training batch...
Metrics inside a dictionary are not properly moved to GPU
[ "help wanted", "working as intended" ]
πŸ› Bug The following code WORKS: self.accuracy = Accuracy() The following code DOES NOT WORK (throws CPU error): self.metrics['accuracy'] = Accuracy() Please reproduce using the BoringModel and post here https://colab.research.google.com/drive/1oLE2Ts4AkQwd2oLSaz8qNvFdiygyvgrZ?usp=sharing To Reproduce Expected beh...
Remove check_val_every_n_epoch
[ "feature", "help wanted", "good first issue", "refactor" ]
πŸš€ Feature I think check_val_every_n_epoch trainer flag should be removed: val_check_interval already contains* all of its functionality. * The only thing needed would be to interpret val_check_interval=2.0 as check every two epochs. This is just a straightforward extension of the current functionality
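The proposed interpretation can be sketched as a small dispatch function (the semantics below are the proposal, not Lightning's current implementation):

```python
def interpret_val_check_interval(value):
    # Sketch of the proposed semantics: ints mean "every N training
    # batches", floats < 1.0 mean a fraction of an epoch, and floats
    # >= 1.0 would take over check_val_every_n_epoch's job.
    if isinstance(value, int):
        return ("batches", value)
    if value < 1.0:
        return ("epoch_fraction", value)
    return ("epochs", int(value))
```

Under this scheme `val_check_interval=2.0` would mean "validate every two epochs", making the separate check_val_every_n_epoch flag redundant.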
Collapse videos in documentation
[ "won't fix", "docs" ]
πŸš€ Feature Hide the documentation videos. Motivation Videos take up a significant portion of screen space in the documentation, making each documentation page much longer than it needs to be. The videos appear to be intended as a learning tool. I believe doc pages are much more often used for quick look-ups than for le...
Can't TorchScript LightningModule when using Metric
[ "bug", "help wanted" ]
πŸ› Bug Please reproduce using the BoringModel and post here able to reproduce it in https://colab.research.google.com/drive/1MscNHxIc_LIbZxALHbZOAkooNu0TzVly?usp=sharing To Reproduce Expected behavior Able to torchscript a Lightning moduel no matter Metric is used or not It seems hard to make Metric torchscriptable ...
Support teardown for Lightning DataModule
[ "feature", "help wanted", "won't fix", "data handling" ]
πŸš€ Feature teardown as a hook can be useful for data modules. Motivation This could be used for: Clean up downloaded data after training finishes Closing any open connections a dataloader makes etc Pitch This has natural connections to prepare_data and setup and could be implemented very similarly to how those are su...
NCCL error using DDP and PyTorch 1.7
[ "bug", "help wanted", "priority: 0", "distributed", "3rd party" ]
πŸ› Bug Getting this error when attempting to use ddp with the "getting started" autoencoder example: Stack Trace: GPU available: True, used: True TPU available: False, using: 0 TPU cores LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2] LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2] initializing ddp: GLOBAL_RANK: 1, MEMBER:...
Gradient accumulation fails with fp16 precision
[ "bug", "help wanted" ]
πŸ› Bug Setting accumulate_grad_batches > 1 and precision = 16 causes the following error: RuntimeError: unscale_() has already been called on this optimizer since the last update(). Please reproduce using the BoringModel and post here https://colab.research.google.com/drive/1_7pxqPlpc79k0VYlRdtRXE0JQbhSBWHy?usp=sharing...
Colab TPU Exception in device=TPU:4: Could not run 'torchvision::nms' with arguments from the 'XLA' backend
[ "bug", "help wanted", "accelerator: tpu", "3rd party" ]
πŸ› Bug Getting this error on Colab: Exception in device=TPU:4: Could not run 'torchvision::nms' with arguments from the 'XLA' backend. 'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode]. CPU: r...
add fsspec support to tuner
[ "feature", "help wanted", "good first issue", "trainer: tune" ]
see #4424
Some test directories clash with standard library modules
[ "bug", "help wanted", "ci" ]
Namely: https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests/trainer/warnings https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests/trainer/logging (I haven't checked if there are more) This is problematic, for example, when you run tests using a tool like PyCharm. This is because l...
Random CUDA OOM error when starting SLURM jobs
[ "bug", "help wanted", "won't fix", "priority: 0", "environment: slurm" ]
πŸ› Bug When submitting jobs to SLURM, some jobs (around 1-2% of them) will randomly encounter a CUDA OOM error during the setup prior to training. I can confirm it's not an issue with the configuration of the job vs hardware itself, since I can resubmit the exact same job script and it will work. I also know that my re...
wav2vec integration in NeMo
[]
To promote NeMo/Lightning further with a Medium article showing its accessibility and how easily it enables headway in audio research
AttributeError: 'dict' object has no attribute 'get_epoch_log_metrics'
[ "question" ]
I have created a lightning module which works fine with a single validation dataset but throws the following error when using multiple validation datasets: self._log_on_evaluation_epoch_end_metrics(epoch_logs) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/pytorch_lightning/trainer/con...
Data loading hangs before first validation step
[ "help wanted", "won't fix", "waiting on author" ]
πŸ› Bug After training epoch, before the first validation step, training gets stuck somewhere in the data loaders (I think). I can't provide a reproduction script unfortunately: Getting the training into the specific situation takes a long time (must train for long enough for the situation to arise). I train on 4x 1080 ...
Training_step outputs not propagated
[ "docs" ]
cc @ananthsub @awaelchli @justusschock Hi, after updating to version 1.0.4, I think the approach below no longer works as desired def training_step(...): return {'loss': loss, 'random_thing': [1, 'a', Tensor(), ...]} def training_epoch_end(self, training_step_outputs): for d in training_step_outputs: ...
Is there a way to supress all printing and warnings in the Trainer?
[ "question" ]
Hello, I am aware of the progress_bar_refresh_rate and weights_summary parameters, but even when I disable them I get these GPU warning-like messages: I would like to disable all warnings and printed output from the Trainer; is this possible? Why? Because I want to perform several training operations in a loop and monitor t...
stepwise learning rate scheduler
[ "question" ]
Hello, I am trying to manage LR scheduling in my training with PL. The main methods rely on epoch-wise LR updates; is there a way to make this step-wise? OS: [Linux] Packaging [pip] Version [1.0.4]
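Lightning's documented way (as of the 1.0.x docs) to step a scheduler per optimizer step rather than per epoch is to return it from configure_optimizers in a dict with "interval": "step". A sketch of just the return value (the helper name is illustrative):

```python
def configure_optimizers_sketch(optimizer, scheduler):
    # Return-value shape for LightningModule.configure_optimizers that
    # switches scheduler stepping from the default "epoch" to "step":
    # the scheduler dict's "interval" key controls the update cadence.
    return (
        [optimizer],
        [{"scheduler": scheduler, "interval": "step", "frequency": 1}],
    )
```

Inside an actual LightningModule this would be `return [optimizer], [{"scheduler": scheduler, "interval": "step"}]`, after which the Trainer calls scheduler.step() after every optimizer step.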
calling `self.log(..., on_step=True)` in optimizer_step fails
[ "feature", "help wanted" ]
πŸ› Bug calling self.log(..., on_step=True) in optimizer_step fails. Expected behavior It should get treated same way if I were to .log() in training_step Additional context I believe it's connected to #4439. Tried debugging and hotfixing it but the whole .log() behavior (controlled by self._current_fx_name) seems not t...
ModelCheckpoint filenames and metric names containing / (slash)
[ "bug", "help wanted", "checkpointing" ]
πŸ› Bug The current behaviour of ModelCheckpoint is problematic when the checkpoint's name includes a metric whose name contains a slash character. Since the implementation always includes the name of the metric along with its value, a format specifier like {some/name} results in the checkpoint being stored in a subdire...
Help with understanding unknown 'c10::Error' thrown during DDP training
[ "bug", "help wanted" ]
πŸ› Bug This is not going to be a good and concise description, since I have no way of reproducing the error with 100% certainty. However, more and more I get these types of c10::Error instances thrown during training, both during the training phase and during the validation phase. I am running DDP on a single-node wit...
import pytorch_lightning as pl: Segmentation fault (core dump)
[]
Pytorch_lightning immediately segfaults upon loading at the first executable line in the file: import pytorch_lightning as pl apparently it's doing this upon attempting to load /opt/anaconda3/lib/python3.8/site-packages/torchtext/_torchtext.so, which exists and is not protected, after not crashing when it loaded /opt/a...
Allow Trainer to accept dictionary as input?
[ "feature", "question", "discussion" ]
I searched issues and the forum but couldn't find anyone who mentioned this, so here I'm asking: can we allow Trainer to accept dictionaries as an input argument? This came up as I'm writing a script to run experiments under various configurations. For each experiment, I'm passing a configuration file (in JSON) into an...
Logging with "self.log" in training_step does not create any outputs in progress bar or external Logger when loss isn't returned
[ "bug", "help wanted", "priority: 0", "logger" ]
πŸ› Bug I think the newly introduced log function function does not log properly while being used in the training_step. The same code in validation_step creates the desired results. def training_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) self.log(...
Validation step is skipped when on_train_batch_start returns -1
[ "help wanted", "working as intended" ]
πŸ› Bug It seems that when on_train_batch_start returns -1 the validation step is skipped. Expected behavior The trainer should skip the training epoch and start the evaluation. - pytorch-lightning version 1.0.4 - PyTorch Version (e.g., 1.0): 1.7 - OS (e.g., Linux): Linux - How you installed PyTorch (`conda`, `pip`,...
Why metric inherits torch.nn.Module?
[ "question" ]
If I define a complex metric whose computation may convert torch.Tensor to numpy, can this metric be used in DDP? If I want to train the model with train_dataloader in DDP and validate val_dataloader on a single model with a customized metric, how do I do it?
using autograd in neural network calculation breaks the validation step
[ "bug", "help wanted" ]
πŸ› Bug Hi, just a small bug report, maybe this can be a feature in the future. Assuming you want to have a derivative in your neural network as output, the option: torch.set_grad_enabled(False), which is activated during validation, breaks the PyTorch lightning module. So if you want to train and validate a model like ...
Trainer.test() fails when trying to log hyperparameters
[ "bug", "help wanted", "priority: 2" ]
πŸ› Bug This the error code Traceback (most recent call last): File "test.py", line 31, in <module> cli_main() File "test.py", line 28, in cli_main trainer.test(model, test_dataloaders=dm.test_dataloader()) File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\trainer\trainer.py", ...
Expose lr_scheduler to pl.LightningModule as a property (self.scheduler)?
[ "question" ]
What is your question? Dear all, Is lr_scheduler (or an array of them) exposed to pl.LightningModule as self.scheduler or something similar ? I only see self.trainer but self.scheduler would really complement the trainer! What have you tried? I tried looking for self.scheduler (and different variants) in the source / P...
Packages fails to import when system site-packages contain a different version
[ "bug", "help wanted", "3rd party" ]
πŸ› Bug The package mixes imports from the user's site-packages and system's site-packages, which leads to import errors or other unexpected behavior. To Reproduce Install older version of pytorch-lightning as system package, e.g.: sudo pip3 install pytorch-lightning==0.8.0 Install newer version of pytorch-lightning...
How to use lightning without any explicit train_dataloader?
[ "question", "won't fix" ]
❓ Questions and Help What is your question? I'm working on Neural Style Transfer (the original paper by Gatys et. al.). The method is essentially specific to one style and one content image only, which can be loaded in the __init__() method itself. And there's no other data set required (training or validation). Once ...
Test drone on pytorch 1.7, 1.8
[ "help wanted", "ci" ]
extend tested PT versions with multi-GPU
TypeError: __init__() got an unexpected keyword argument 'logger'
[]
I am new to developing with Pytorch-Lightning. I have constructed a VideoClassifier following the examples and templates. I am currently running into the following error when I try to run the code: TypeError: __init__() got an unexpected keyword argument 'logger'. Back tracing the error also does not give any pointers on how...
Profiler is not reset after calling trainer.tune() with auto_lr_find=True
[ "bug", "help wanted", "trainer: tune", "priority: 1" ]
πŸ› Bug When using SimpleProfiler together with the Learning Rate Finder, only the timings from the call to train within the Learning Rate Finder are logged. For the actual training, no timings are recorded. Please reproduce using [the BoringModel and post here] https://colab.research.google.com/drive/1FUOW-A7gJk1fuR7OD...
Videos in the documentation quality change for lower speed connections
[ "docs" ]
πŸš€ Feature adding the option to change the quality of the tutorial videos in the documentation page. Motivation the videos inside the documentation seem very useful however every time I try to watch them I can't because they have a high quality and my internet access can't load it in the proper time. Alternatives Simpl...
Error when disabling an optimizer with native AMP turned on
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug When running my Lightning code with: fp16 native AMP Multiple optimizers One of the optimizers disabled (in this case by returning None for it in training_step) I'm getting the following stacktrace: Traceback (most recent call last): File "./train_stage1.py", line 353, in <module> trainer.fit(model) Fi...
save_hyperparameters() doesn't work with decorated __init__()
[ "bug", "feature", "help wanted", "checkpointing" ]
πŸ› Bug save_hyperparameters() breaks when init() is decorated https://colab.research.google.com/drive/1njXP32G3FSg4nWvwo7kuhVr6iqH3eabQ?usp=sharing Expected behavior hyperparameters should be saved Environment * CUDA: - GPU: - Tesla T4 - available: True - version: 10.1 * Packages: - numpy: ...
Does log_every_n_steps work in validation_step?
[ "question" ]
I find that global_step is the training step; if I want to log something every k validation steps, how do I implement it? log_every_n_steps works in the train loop but does not work in the validation loop. Should we add global_training_step, global_val_step and global_test_step? self.log only supports on_step and on_epoch; can this API ...
Testing metrics on all batches and all ranks
[ "question" ]
What is your question? I don't understand why under the comment "check on all batches on all ranks" in function _class_test only NUM_BATCHES examples are taken whereas all preds and targets contain NUM_BATCHES * worldsize examples. Thank you very much for your help and excuse me if the question is stupid.
Create an Enum for tracking Train, validation, test stage
[ "feature", "help wanted", "good first issue", "refactor", "priority: 2" ]
πŸš€ Feature Motivation The goal is to create single point of truth about stages for the trainer, so we don't get typo errors. Pitch Alternatives Additional context
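A minimal sketch of the idea (the name RunningStage and the string values are assumptions, not the eventual Lightning implementation):

```python
from enum import Enum

class RunningStage(Enum):
    # Illustrative single source of truth for trainer stages: code
    # compares against enum members instead of bare strings.
    TRAINING = "train"
    VALIDATING = "validation"
    TESTING = "test"

# A typo now fails loudly (ValueError) instead of silently matching
# nothing, which is exactly the motivation stated above:
assert RunningStage("test") is RunningStage.TESTING
```

Internal checks like `if stage == "vaildation"` become impossible to write silently once every comparison goes through the enum.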
prefix argument in loggers
[ "feature", "logger" ]
πŸš€ Feature A simple prefix argument in loggers, similar to what we have in ModelCheckpoint that can prepend this value in metric names. Motivation One use-case where I need this is while doing Kfold training. For now, I have to do it manually by updating the metric_name in self.log, but this can be done by the loggers ...
Torch-Summary Integration to replace ModelSummary
[ "feature", "help wanted", "won't fix", "discussion" ]
πŸš€ Feature, Motivation, Pitch Hello, I am the current maintainer of torch-summary, which is a rewrite of yet another torchsummary library. Currently, ModelSummary is a builtin PyTorch-Lightning feature, but it is limited in scope and support, especially in more complex models. The proposal here, seconded by @carmocca ,...
Replace AttributeDict in with dict in checkpoint
[ "bug", "checkpointing", "priority: 1" ]
πŸš€ Feature When we save hyperparameters to the checkpoint, we save them in a Lightning datastructure called AttributeDict. We should replace this with a regular dict when saving the checkpoint. Motivation Allows loading checkpoints with torch.load in environments where Lightning is not installed. Pitch Convert Attribut...
Class version of AUROC metric
[ "feature", "help wanted", "design" ]
πŸš€ Feature Class version of AUROC metric following the v1.x.x standard, so we can do: auroc = AUROC(num_classes=4) auroc(pred, target) auroc.compute() Motivation Class based Metrics can be nicer to use in the PyTorch Lightning workflow, so why not add one for AUROC? Pitch AUROC will automatically use the correct fun...
Clarify ModelCheckpoint behavior when self.monitor == None
[ "won't fix", "docs" ]
πŸ“š Documentation In ModelCheckpoint, setting self.monitor to None (as for the built-in checkpoint callback in the Trainer) causes it to default to either val_loss or checkpoint_on, as per L459. This, in turn, overrides the behavior described in the docs, which is caused by L500. Incriminated docs at L53: monitor: quan...
Gpu memory leak with self.log on_epoch=True
[ "bug", "help wanted", "priority: 0", "logger" ]
pl 1.0.5 Using the new logging API I want to log a metric in LightningModule self.log(";;;;;;;;;;;;;;;;;;;", 1, on_step=False, on_epoch=True) This is a dummy example but it is sufficient to add to LightningModule's training_step to cause a memory leak on GPU. What could go wrong? We want to log a metric which is not even ...
Attribute finder doesn't check datamodule when hparams is set
[ "bug", "help wanted" ]
πŸ› Bug Issue is seen here too: #3233 (comment) See: https://github.com/rnett/pytorch-lightning/blob/6e5f232f5cec2b5e635ae34fa365c6b969d0902e/pytorch_lightning/utilities/parsing.py#L177 In pytorch_lightning.utilities.parsing.lightning_hasattr (and getattr and setattr), because hasattr(model, 'hparams') is used in the el...
TypeError: __init__() got an unexpected keyword argument 'max_nb_epochs'
[ "question" ]
from pytorch_lightning import Trainer from argparse import ArgumentParser # from research_seed.digit_recognition.mnist import MNISTRecognizer # def main(): model = MNISTRecognizer() trainer = Trainer(max_nb_epochs=50,gpus=1) trainer.fit(model) pytorch-lightning==0.9.1rc1 There is an error: TypeError: __init__() got an unex...
IOU Class Metric Module
[ "feature", "help wanted" ]
πŸš€ Feature Is there a reason why IOU doesn't have a class metric module (currently the only classification class metrics implemented are: Accuracy, Precision, Recall, Fbeta)? Is this already on the roadmap? I implemented a version below. Does this look good? If so, should I submit a PR? I was initially worried that the...
progress bar refresh_rate prevents old bars being cleared
[ "bug", "help wanted" ]
If you set refresh_rate=100 then the sanity check and validation bars continue being displayed forever even though leave=False. Also the validation bars only show part complete. I note also that the mininterval parameter for tqdm bars is ignored.
Native AMP effectively broken when rewriting the optimizer_step function
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug If I turn on the native AMP (--precision 16) and modify optimizer_step like it's recommended in the docs (https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#step-optimizers-at-arbitrary-intervals), the training stops converging. The issue is caused by pytorch-lightning/pytorch_ligh...
Mergify never merge PullRequest
[ "working as intended" ]
πŸ› Bug .mergify.yml requires over 54 tests pass, but CI All checks have passed results in 53 successful checks. Automatic merge never triggered. To Reproduce Make pull request which pass all test. Mergify unexpectedly do not merge this PR. Expected behavior Mergify automatically merge the PR. Environment master lates...
Tensorboard logging only part of the logged metrics and images.
[ "question", "won't fix" ]
I am trying to use Tensorboard and log images generated by a GAN at the end of each epoch. To do that, I set up a callback that automatically logs an image into tensorboard. Tensorboard is showing the image that was generated after the first epoch only. In addition, I am also trying to log some losses such as gradient ...
upgrading PL via pip uninstalls pytorch-1.8
[ "bug", "help wanted", "3rd party", "priority: 1" ]
πŸ› Bug To Reproduce $ pip install pytorch-lightning -U [...] Installing collected packages: torch, pytorch-lightning Attempting uninstall: torch Found existing installation: torch 1.8.0.dev20201106+cu110 Uninstalling torch-1.8.0.dev20201106+cu110: Successfully uninstalled torch-1.8.0.dev20201106+cu110 ...
How to load from a checkpoint to the same device when a pretrained encoder was used
[ "question" ]
❓ Questions and Help I implemented a ClassificationNet (see below) that's using a pretrained encoder. After training, I'm trying to load it to CPU using ClassificationNet.load_from_checkpoint(pth, map_location=torch.device("cpu")), but since map_location in get_encoder is None, the encoder tries to load to GPU. How can ...
How to change the Datamodule during training with a callback?
[ "question", "won't fix", "data handling" ]
What is your question? How to change the Datamodule during training with a callback? More details: I am looking for a way to reinitialize my Datamodule with different parameters. I am currently passing the height of my images as an argument to my datamodule and I want to change this height at some point during training, t...
Test step to handle non-scalar outputs
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Handle output from test loop not being a single value. Motivation I often need to use a callback to do some processing on test values (to make plots, etc.), which I like to separate from the core module code. In this case, I would like to use on_test_batch_end to build a list of predicted values, calculated ...
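The pattern the issue describes can be sketched as a tiny callback that accumulates whatever each test batch produces (the hook signature is simplified from Lightning's real on_test_batch_end, which also receives trainer, module, and batch indices):

```python
class PredictionCollector:
    # Callback sketch: build up a list of per-batch test outputs so that
    # plotting/processing code can live outside the core module.
    def __init__(self):
        self.predictions = []

    def on_test_batch_end(self, outputs):
        # `outputs` may be a tensor, list, or dict rather than a scalar
        self.predictions.append(outputs)

collector = PredictionCollector()
for batch_outputs in ([1, 2], [3, 4]):
    collector.on_test_batch_end(batch_outputs)
print(collector.predictions)  # [[1, 2], [3, 4]]
```

The feature request amounts to guaranteeing that non-scalar outputs survive intact all the way to such a hook instead of being reduced along the way.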
self.log does not work in on_train_batch_start/end hooks
[ "bug", "help wanted" ]
πŸ› Bug Logging with self.log doesn't seem to work properly in the on_train_batch_start and on_train_batch_end model hooks. Specifically: when put in on_train_batch_start it crashes because self._current_fx_name is set to validation_epoch_endwhich seems like incorrect behaviour. (It seems like it should be set to trai...
Strange issue with DataParallel
[ "bug", "help wanted", "won't fix", "strategy: dp", "priority: 2" ]
πŸ› Bug I'm having an issue with tensors being on different devices when using distributed_backend=dp. But it only occurs under what seems like some pretty specific circumstances. I can only reproduce the bug when I have done BOTH of the following: replaced the forward method of the internal network (see the code samp...
Memory leak using AMP and transformers
[ "bug", "help wanted", "3rd party" ]
Context can be found here: huggingface/transformers#8403 (10x memory consumption increase with native amp on PL). I'm running pt15+apex side by side with pt16+native amp under a debugger (huggingface/transformers#8403) and found the first issue: objects not being freed. I ruled out GradScaler. I think there is a huge memory ...
Weird logging to console behavior.
[ "bug", "help wanted", "logging" ]
πŸ› Bug Logging to console prints some stuff twice, and does not output my custom logging. Verbose EarlyStopping does also not output to console: |segmentation|base|py-3.8.5 Stanley in ~/Repos/segmentation Β± |master U:1 ?:1 βœ—| β†’ python train.py GPU available: True, used: True INFO:lightning:GPU available: True, used: ...
Keeping DDP override in sync with upstream torch
[ "discussion", "distributed", "refactor" ]
From @ananthsub: how should Lightning keep its DDP override in sync with the upstream torch DistributedDataParallel? These implementations have now diverged. I think this leads to performance degradations with Lightning + gradient accumulation, since the require_backward_grad_sync attribute isn't checked before the ba...
Evaluation over the validation set
[ "feature", "refactor", "design" ]
What is the recommended way of performing just one evaluation over the validation set? Basically, I'm looking for the equivalent of trainer.test(...) but for the validation set. Maybe this is possible using trainer.fit(...) and some combination of arguments to Trainer.__init__?
how to define my own sampler in ddp training
[ "question" ]
When using ddp as the accelerator, I want to define my own sampler in the dataloader. How can I do it? Normally I do it by overriding _collate_fn, but in pytorch-lightning that does not seem to be correct.
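For context, the usual route is `Trainer(replace_sampler_ddp=False)` plus a sampler returned from `train_dataloader()`; a custom sampler in DDP mode then has to do its own per-rank sharding. A plain-Python sketch of that sharding logic (a strided split, a simplification of what `DistributedSampler` does; it ignores shuffling and epoch seeding):

```python
class ShardedSampler:
    """Each DDP rank iterates a disjoint, strided slice of the indices.
    In real code, subclass torch.utils.data.Sampler, pass an instance
    to your DataLoader, and set Trainer(replace_sampler_ddp=False) so
    Lightning does not swap it out."""

    def __init__(self, data_len, num_replicas, rank):
        self.data_len = data_len
        self.num_replicas = num_replicas
        self.rank = rank

    def __iter__(self):
        # Rank r yields indices r, r + world_size, r + 2 * world_size, ...
        return iter(range(self.rank, self.data_len, self.num_replicas))

    def __len__(self):
        return len(range(self.rank, self.data_len, self.num_replicas))
```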
Unneeded elements in state dict
[ "bug", "help wanted", "checkpointing" ]
State_dict keys after training resnet18: dict_keys(['model.conv1.weight', 'model.bn1.weight', 'model.bn1.bias', 'model.bn1.running_mean', 'model.bn1.running_var', 'model.bn1.num_batches_tracked', 'model.layer1.0.conv1.weight', 'model.layer1.0.bn1.weight', 'model.layer1.0.bn1.bias', 'model.layer1.0.bn1.running_mean', 'm...
Keep the setting of user created DataLoader in replacing DistributedSampler
[ "feature", "help wanted" ]
🚀 Feature Motivation As mentioned in #2789, the default behavior of replace_sampler_ddp is to create a new DistributedSampler. The shuffle setting depends on the kind of dataloader (train vs. val/test). However, this behavior overrides the settings of the user-defined dataloader, such as shuffle or drop_last. A mo...
LR scheduler lags behind after resume_from_checkpoint
[ "bug", "help wanted", "checkpointing", "priority: 1" ]
πŸ› Bug MultiStepLR lags behind by one epoch after resuming from checkpoint in Trainer. See this collab to reproduce. In the example below, class BoringModel(LightningModule): ... def training_step(self, batch, batch_idx): print(f'Epoch {self.trainer.current_epoch} / Step {self.trainer.global_step}: lr {...
[Trainer] flush_logs_every_n_steps not working
[ "bug", "help wanted", "won't fix", "priority: 1", "logging" ]
πŸ› Bug HI all ! thanks a lot for this great module :) Trainer has this neat argument flush_logs_every_n_steps, and does indeed take it into account before calling logger.save() in training_loop.py. Yet, the LoggerConnector flushes at each log_metrics call by calling self.trainer.logger.save(). Is that the expected beha...
Gradient Clipping w/ Multiple Optimizers
[ "question", "won't fix" ]
❓ Questions and Help Before asking: No docs helped. What is your question? I have a GAN with two optimizers, one generator one discriminator. I would like to clip only generator parameters. What is the cleanest way to accomplish this? Currently just using standard training_step and configure_optimizers with automatic o...
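One common workaround (an assumption, not an official Lightning API for this case) is manual optimization: inside training_step, call torch.nn.utils.clip_grad_norm_ on only the generator's parameters before stepping the generator's optimizer, and leave the discriminator untouched. The core of clip-by-global-norm can be sketched in plain Python:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Scale a flat list of gradient values so their global L2 norm is
    at most max_norm -- what clip_grad_norm_ does, minus the tensors.
    Applying this only to the generator's grads gives per-optimizer clipping."""
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm or total == 0.0:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]
```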
What is the difference between the two functions, train_epoch_end and on_train_epoch_end?
[ "question" ]
They don't seem to have any difference. I hope someone can explain this to me. Thanks a lot.
load_from_checkpoint not working when using save_hyperparameters(conf)
[ "bug", "help wanted" ]
πŸ› Bug Following the docs (https://pytorch-lightning.readthedocs.io/en/latest/hyperparameters.html#lightningmodule-hyperparameters), precisely point 4.: class Example(LightningModule): def __init__(self, cfg, *args, **kwargs): super().__init__() self.save_hyperparameters(cfg) (...) Example(dict(k...
self.log on validation_step is broken on pre 1.1 [nightly]
[ "bug", "help wanted", "priority: 0" ]
https://colab.research.google.com/drive/1tSphAIaCdy3tC9Lzhe1GEK_fH_0a6oYj?usp=sharing
What are the `outputs` in the `on_train_batch_end` callback?
[ "question" ]
❓ Questions and Help What is your question? For my application, I need to save the raw outputs of the model to disk for every training and validation example. I think a callback is the right thing...
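As a rough answer sketch (hedged: the exact shape of `outputs` varies by Lightning release, and in some versions it arrives wrapped in nested lists), `outputs` in on_train_batch_end is what training_step returned for that batch, so a collector callback can simply stash it:

```python
# Sketch only: in real code this would subclass pytorch_lightning.Callback.
class OutputSaver:
    """Stash each batch's training-step output, e.g. to write to disk later."""

    def __init__(self):
        self.saved = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        # `outputs` is assumed to be the dict training_step returned,
        # e.g. {"loss": ...} plus any extra keys you added.
        self.saved.append(outputs)
```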
batch size behaves strange for ddp
[ "bug", "help wanted" ]
πŸ› Bug I am running SimCLR code. I notice the batch size plays an important role for contrastive learning, so I want to increase batch size. I can run batch_size=64, on a single 2080Ti. When I use 4 GPUs by setting distributed_backend='ddp'. I still have to set batch_size=64, since each GPU is fully occupied by this se...
trainer.test(datamodule=dm) stores reference to wrong checkpoint
[ "bug", "help wanted" ]
πŸ› Bug When finetuning from saved weights in bolts, trainer.test() picks up reference to checkpoints which have already been deleted or not yet created. Checkpoint created using default trainer options, no callbacks added from the user's side. Please reproduce using [the BoringModel and post here] Not sure how to repro...