sieann committed · verified
Commit 57f596c · 1 Parent(s): b313700

Update README.md

Files changed (1):
  1. README.md +121 -8
README.md CHANGED
@@ -65,15 +65,17 @@ categorized in "urban", "nature", and "urban and nature". A traffic sign warning
background, while a sign warning of pedestrians is likely to be placed in an urban context. An uncorrelated background, however, means that the background is randomly
chosen and thus not semantically linked to the depicted traffic sign class.

- For dataset generation, we utilized our parameterizable rendering pipeline from our work on the <em>Synset Signset Germany</em> dataset. The pipeline is based on the
- Fraunhofer simulation platform [OCTAS](https://octas.org/). The dataset consists of six subdatasets: correlated and uncorrelated backgrounds cross the camera variation
- stages frontal, medium and high. Each of these datasets contains 82 classes of traffic signs with 1,100 images per class, resulting in 90,200 images per dataset, summing
- up to a total of 541,200 images. The images were rendered with the rasterization-based engine [OGRE3D](https://www.ogre3d.org/).
 
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

@inproceedings{measuring_effect_of_background_sielemann_2025,
@@ -83,18 +85,38 @@ up to a total of 541,200 images. The images were rendered with the rasterization
year={2025}
}

-
**APA:**

- Sielemann, A., Barner, V., Wolf, S., Roschani, M., Ziehn, J., and Beyerer, J. (2025). <br>
- Measuring the Effect of Background on Classification and Feature Importance in Deep Learning for AV Perception. <br>
- In 2025 IEEE International Automated Vehicle Validation Conference (IAVVC).

  ## Uses

The dataset was designed for the investigation of the effect of background correlations on the classification performance and the spatial distribution of important
classification features within the task of traffic sign recognition.

  ### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

@@ -105,8 +127,99 @@ The dataset should not be used for critical applications, particularly high-risk
to evaluate whether it is "relevant, sufficiently representative, and to the best extent possible free of errors and complete
in view of the intended purpose of the system." No such claim is made with the publication of this dataset.
  ## Bias, Risks, and Limitations

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

background, while a sign warning of pedestrians is likely to be placed in an urban context. An uncorrelated background, however, means that the background is randomly
chosen and thus not semantically linked to the depicted traffic sign class.

+ For dataset generation, we utilized our parameterizable rendering pipeline from our work on the [Synset Signset Germany](https://huggingface.co/datasets/FraunhoferIOSB/Synset-Signset-Germany)
+ dataset. The pipeline is based on the Fraunhofer simulation platform [OCTAS](https://octas.org/). The dataset consists of six subdatasets: correlated and uncorrelated
+ backgrounds crossed with the camera variation stages frontal, medium, and high. Each of these datasets contains 82 classes of traffic signs with 1,100 images per class, resulting
+ in 90,200 images per dataset, summing up to a total of 541,200 images. The images were rendered with the rasterization-based engine [OGRE3D](https://www.ogre3d.org/).
 
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

+ To cite this dataset in your scientific work, please use the following bibliography entry:
+
**BibTeX:**

@inproceedings{measuring_effect_of_background_sielemann_2025,
 
year={2025}
}

**APA:**

+ Sielemann, A., Barner, V., Wolf, S., Roschani, M., Ziehn, J., and Beyerer, J. (2025). <br>
+ Measuring the Effect of Background on Classification and Feature Importance in Deep Learning for AV Perception. <br>
+ In 2025 IEEE International Automated Vehicle Validation Conference (IAVVC).
+
+ If you copy, redistribute, or publish an adapted version of our dataset, please provide the name of our dataset, the creator names, a copyright notice,
+ a link to this website, and a license notice with a link to the license; if changes were made, also add a disclaimer notice and a short description of the applied changes.
+ For example, as follows:
+
+ This work is based on Measuring the Effect of Background on Classification and Feature Importance in Deep Learning for AV Perception
+ by Anne Sielemann, Valentin Barner, Stefan Wolf, Masoud Roschani, Jens Ziehn, and Juergen Beyerer,
+ © 2025 Fraunhofer IOSB, All rights reserved.
+ Link: https://synset.de/datasets/synset-signset-ger/background-effect/
+ License: CC BY 4.0, https://creativecommons.org/licenses/by/4.0/
+ Disclaimer: The original authors are neither affiliated nor responsible for any applied changes.

## Uses

The dataset was designed for the investigation of the effect of background correlations on the classification performance and the spatial distribution of important
classification features within the task of traffic sign recognition.

+ ### Direct Use
+
+ <!-- This section describes suitable use cases for the dataset. -->
+
+ The dataset is intended for the following use cases:
+ - Training ML models for the task of German traffic sign recognition.
+ - Analyzing the difference between the synthetic dataset and real-world traffic sign recognition datasets, especially the closely related
+ [GTSRB](https://ieeexplore.ieee.org/abstract/document/6033395) dataset.
+ - Investigating the effects of background correlations.
+
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

to evaluate whether it is "relevant, sufficiently representative, and to the best extent possible free of errors and complete
in view of the intended purpose of the system." No such claim is made with the publication of this dataset.
 
+ ## Dataset Structure
+
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+ The Synset Background Effect Datasets include six dataset variants:
+
+ 1. Correlated, frontal
+ 2. Correlated, medium
+ 3. Correlated, high
+ 4. Uncorrelated, frontal
+ 5. Uncorrelated, medium
+ 6. Uncorrelated, high
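The six variants above form the cross product of the two background modes and the three camera-variation stages. A minimal sketch (the identifier format is illustrative, not the dataset's official naming):

```python
from itertools import product

# Two background modes crossed with three camera-variation stages
# yield the six dataset variants listed above.
backgrounds = ["correlated", "uncorrelated"]
camera_stages = ["frontal", "medium", "high"]

variants = [f"{bg}-{stage}" for bg, stage in product(backgrounds, camera_stages)]
```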
+
+ Here, correlated means that the environment maps used for image-based lighting are correlated with the depicted traffic sign class, while in the uncorrelated case the
+ environment maps are chosen randomly for each image (independently of the traffic sign class). Frontal, medium, and high refer to the level of camera variation. In the
+ frontal case, all traffic signs are solely pictured in a frontal perspective without any camera variation. For the remaining two levels, the camera roll, pitch, and yaw
+ angles are normally distributed with a mean of zero. For medium, the standard deviations are set to 1.5° (roll), 5.0° (pitch), and 13.333° (yaw). For high,
+ the standard deviations are increased to 3.0° (roll), 10.0° (pitch), and 26.666° (yaw).
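Drawing camera angles under these stated parameters can be sketched as follows (the function and table names are illustrative assumptions, not part of the dataset's tooling):

```python
import random

# Standard deviations in degrees per camera-variation stage, taken from
# the description above; "frontal" applies no camera variation at all.
ANGLE_STD_DEG = {
    "frontal": {"roll": 0.0, "pitch": 0.0, "yaw": 0.0},
    "medium":  {"roll": 1.5, "pitch": 5.0, "yaw": 13.333},
    "high":    {"roll": 3.0, "pitch": 10.0, "yaw": 26.666},
}

def sample_camera_angles(stage, rng=random):
    """Draw zero-mean, normally distributed roll/pitch/yaw angles (degrees)."""
    return {axis: rng.gauss(0.0, std) for axis, std in ANGLE_STD_DEG[stage].items()}
```

With a sigma of zero, the frontal stage always returns exactly 0° for all three angles.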
+
+ Each of these datasets contains 82 classes of traffic signs with 1,100 images per class, resulting in 90,200 images per dataset, summing up to a total of 541,200 images.
+ In addition to each of these raw images (i.e., the simulated camera image), we provide a semantic segmentation image, a mask image, and metadata about the traffic
+ sign status (orientation, upper signs, lower signs, etc.), the environment (daytime, contrast, location, etc.), and the imaging effects (noise level, motion blur strength,
+ AEC error, etc.).
+
+ The datasets provide exemplary training and test splits (500 training and 600 test images per dataset and class).
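The counts above are internally consistent, as a quick check confirms:

```python
# Dataset sizes as stated above; the per-class split sizes
# (500 train + 600 test) exactly cover the 1,100 images per class.
classes = 82
images_per_class = 1_100
variants = 6

images_per_dataset = classes * images_per_class  # 90,200
total_images = variants * images_per_dataset     # 541,200
split_per_class = 500 + 600                      # 1,100
```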
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ <!-- Motivation for the creation of this dataset. -->
+
+ The use case of traffic sign recognition has the advantage of, on the one hand, representing a well-understood and established task with a wide range of
+ publicly available datasets and applicable models. On the other hand, it remains a subject of active research, in particular to address challenges such as corner
+ cases and weather conditions, and it has practical relevance, for example, for driver assistance systems, automated driving, and mapping. Since new traffic signs are
+ still being introduced (in Germany most recently in 2020) and the coverage of less common classes in publicly available datasets remains limited,
+ the demand for both training and testing data persists.
+
+ The dataset was designed to enable the investigation of background effects on classification performance and neural network attention.
+
+ ### Source Data
+
+ <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+ - The dataset was generated in the [OCTAS®](https://octas.org/) simulation framework, using rasterization through the [OGRE](https://ogre3d.org) engine.
+
+ - The traffic sign template images, which are used as input to the GAN-based texture synthesis, stem from the [Wikipedia overview of German traffic signs](https://de.wikipedia.org/wiki/Bildtafel_der_Verkehrszeichen_in_der_Bundesrepublik_Deutschland_seit_2017).
+
+ - Image-based lighting (IBL) uses 140 environment maps from [PolyHaven](https://polyhaven.com/).
+
+ - The 3D geometry of the tree that serves as an occlusion object originates from [PolyHaven](https://polyhaven.com/).
+
+ #### Who are the source data producers?
+
+ <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+ - [PolyHaven](https://polyhaven.com/), as the provider of the environment maps for image-based lighting (IBL) and the 3D tree object, is an online library for open (CC0)
+ 3D assets provided by different authors.
+
+ - [Wikipedia](https://de.wikipedia.org/), one of the largest free multilingual open-content encyclopedias, includes the complete list of existing German traffic signs
+ and their template images.
+
+ ### Annotations
+
+ <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+
+ #### Annotation process
+
+ <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+ All annotations, including masks, segmentation images, camera parameters and artifacts, and environmental conditions, are based on ground truth data created
+ as part of the scene generation / rendering process. Semantic segmentation images were rendered using the [OGRE](https://www.ogre3d.org/) rendering engine plugin for
+ [OCTAS®](https://octas.org/), which provides rasterization / shading-based image generation. The environment labels stem from [PolyHaven](https://polyhaven.com/).
+
+ #### Personal and Sensitive Information
+
+ <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+ The dataset contains no data that might be considered personal, sensitive, or private.
+
  ## Bias, Risks, and Limitations

+ - **Traffic Signs:** The wear and tear generation is limited to artifacts such as color fading, scratches, screw holes, and sticker residues.
+ Complex stickers, graffiti, or dirt are not included. Retroreflector patterns are excluded, and retroreflection is not simulated.
+ The traffic signs are solely mounted on metallic traffic sign poles.
+
+ - **Environment:** Environmental variation includes no adverse weather conditions (snow, raindrops, fog, ...).
+
+ - **Occlusions:** All included occlusions and shadows stem from a single 3D tree geometry.
+
+ - **Camera:** Only one set of intrinsic camera parameters is used, and only a single camera lens type (based on a Tamron M112FM35 35 mm lens) is simulated.
+ It can be assumed that the set of simulated imaging artifacts is not complete.
+
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->