Datasets:
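Each row in the table below carries an `actionability` dict mapping annotator IDs to their labels, alongside a single `actionability_label`. As a minimal sketch of how the per-row label could be derived from the dict — assuming a simple majority vote, which the dataset itself does not confirm — one might write:

```python
from collections import Counter

def aggregate_label(actionability: dict) -> str:
    """Majority vote over annotator labels (an assumed aggregation rule;
    the dataset does not state how actionability_label is derived)."""
    counts = Counter(actionability["labels"])
    return counts.most_common(1)[0][0]

# Example dict copied verbatim from the first row of the table.
row_actionability = {
    "annotators": [
        "6740484e188a64793529ee77",
        "6686ebe474531e4a1975636f",
        "boda",
    ],
    "labels": ["5", "5", "5"],
}
print(aggregate_label(row_actionability))  # prints 5
```

In the rows shown here the three annotators always agree, so any reasonable aggregation (majority, unanimity, or adjudication) would yield the same value as the listed `actionability_label`.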
| review_point (string, 45-642 chars) | paper_id (string, 10-19 chars) | venue (15 classes) | focused_review (string, 200-10.5k chars) | batch (int64, 2-10) | actionability (dict) | actionability_label (5 classes) | actionability_label_type (1 class) | id (int64, 31-1.53k) |
|---|---|---|---|---|---|---|---|---|
- Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. me... | ARR_2022_65_review | ARR_2022 | 1. The paper covers little qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be com... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 31 |
- Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers? I thank the authors for their response. | ACL_2017_726_review | ACL_2017 | - Claims of being comparable to state of the art when the results on GeoQuery and ATIS do not support it. General Discussion: This is a sound work of research and could have future potential in the way semantic parsing for downstream applications is done. I was a little disappointed with the claims of “near-state-of-th... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 33 |
781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details. | ACL_2017_818_review | ACL_2017 | 1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technica... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 37 |
- The abstract is written well and invokes intrigue early - could potentially be made even better if, for "evaluating with gold answers is inconsistent with human evaluation" - an example of the inconsistency, such as models get ranked differently is also given there. | ARR_2022_227_review | ARR_2022 | 1. The case made for adopting the proposed strategy for a new automated evaluation paradigm - auto-rewrite (where the questions that are not valid due to a coreference resolution failure in terms of the previous answer get their entity replaced to be made consistent with the gold conversational history) - seems weak. W... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 44 |
1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand, how the stable points in probabilistic metric space are obtained? Otherwise, it may be tough to repeat the results. | ACL_2017_699_review | ACL_2017 | 1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand, how the stable points in probabilistic metric space are obtained? Otherwise, it may be tough to repeat the results.
2. The evaluation process shows that the current system (w... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 46 |
- In figure 5, the y-axis label may use "Exact Match ratio" directly. | ARR_2022_113_review | ARR_2022 | The methodology part is a little bit unclear. The author could describe clearly how the depth-first path completion really works using Figure 3. Also, I'm not sure if the ZIP algorithm is proposed by the authors and also confused about how the ZIP algorithm handles multiple sequence cases.
- Figure 2, it is not clear a... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 51 |
- In section 2.3 the authors use Lample et al. Bi-LSTM-CRF model, it might be beneficial to add that the input is word embeddings (similarly to Lample et al.) - Figure 3, KNs in source language or in English? ( since the mentions have been translated to English). In the authors' response, the authors stated that they w... | ACL_2017_71_review | ACL_2017 | -The explanation of methods in some paragraphs is too detailed and there is no mention of other work and it is repeated in the corresponding method sections, the authors committed to address this issue in the final version.
-README file for the dataset [Authors committed to add README file] - General Discussion: - Sect... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 56 |
- Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature r... | ACL_2017_483_review | ACL_2017 | - 071: This formulation of argumentation mining is just one of several proposed subtask divisions, and this should be mentioned. For example, in [1], claims are detected and classified before any supporting evidence is detected.
Furthermore, [2] applied neural networks to this task, so it is inaccurate to say (as is cl... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 64 |
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they e... | ARR_2022_215_review | ARR_2022 | 1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they e... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 65 |
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work. | ARR_2022_121_review | ARR_2022 | 1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader that this is where prior research has been done, as well as what differentiates the current work with earlier work. A clear separation between the "Introduction" and "Related Work" sections would ce... | 2 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 66 |
2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method. | gybvlVXT6z | EMNLP_2023 | 1. I feel that paper has insufficiant baseline. For example, CoCoOp (https://arxiv.org/abs/2203.05557) is a widely used baseline for prompt tuning research in CLIP. Moreover, it would be nice to include the natural data shift setting as in most other prompt tuning papers for CLIP.
2. It would be nice to include the har... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 75 |
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also test the results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of proposed InvP with these wider backbones. | NIPS_2020_295 | NIPS_2020 | 1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also test the results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of proposed InvP with these wider backbones. 2. Some methods use epochs and pretrain epochs as 200, while the repo... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 77 |
2: the callout to table 5 should go to table 3, instead. Page 7, section 5, last par.: figure 6 callout is not directing properly | ICLR_2023_977 | ICLR_2023 | the evaluation section has 2 experiments, but only 2 very insightful detailed examples. The paper can use a few more examples to illustrate more differences of the output sequences. This would allow the reader to internalize how the non-monotonicity in a deeper way.
Questions: In details, how does the decoding algorith... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 83 |
2 More analysis and comments are recommended on the performance trending of increasing the number of parameters for ViT (DeiT) in the Figure 3. I disagree with authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In the Figure 3, the DeiT-B models does not outperform Dei... | ICLR_2022_1794 | ICLR_2022 | 1 Medical imaging are often obtained in 3D volumes, not only limited to 2D images. So experiments should include the 3D volume data as well for the general community, rather than all on 2D images. And the lesion detection is another important task for the medical community, which has not been studied in this work.
2 Mo... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 85 |
- The paper is not difficult to follow, but there are several places that are may cause confusion. (listed in point 3). | ICLR_2022_3352 | ICLR_2022 | + The problem studied in this paper is definitely important in many real-world applications, such as robotics decision-making and autonomous driving. Discovering the underlying causation is important for agents to make reasonable decisions, especially in dynamic environments.
+ The method proposed in this paper is inte... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 86 |
3: To further backup the proposed visual reference resolution model works in real dataset, please also conduct ablation study on visDial dataset. One experiment I'm really interested is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retriev... | NIPS_2017_356 | NIPS_2017 | ]
My major concerns about this paper is the experiment on visual dialog dataset. The authors only show the proposed model's performance on discriminative setting without any ablation studies. There is not enough experiment result to show how the proposed model works on the real dataset. If possible, please answer my fo... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 92 |
- As mentioned in the previous question, the distribution of videos of different lengths within the benchmark is crucial for the assessment of reasoning ability and robustness, and the paper does not provide relevant explanations. The authors should include a table showing the distribution of video lengths across the d... | BTr3PSlT0T | ICLR_2025 | - I express skepticism about whether the number of videos in the benchmark can achieve a robust assessment. The CVRR-ES benchmark includes only 214 videos, with the shortest video being just 2 seconds. Upon reviewing several videos from the anonymous link, I noticed a significant proportion of short videos. I question ... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 100 |
8.L290: it would be good to clarify how the implemented billinear layer is different from other approaches which do billinear pooling. Is the major difference the dimensionality of embeddings? How is the billinear layer swapped out with the hadarmard product and MCB approaches? Is the compression of the representations... | NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a billinear layer to combine representations, it should menti... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 101 |
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance se... | ICLR_2023_3203 | ICLR_2023 | 1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance se... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 103 |
1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13? | ICLR_2023_2283 | ICLR_2023 | 1. 1. The symbols in Section 4.3 are not very clearly explained. 2. This paper only experiments on the very small time steps (e.g.1、2) and lack of some experiments on slightly larger time steps (e.g. 4、6) to make better comparisons with other methods. I think it is necessary to analyze the impact of the time step on th... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 104 |
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating mo... | eI6ajU2esa | ICLR_2024 | - This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating mo... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 105 |
4. Section 3.2.1: The first expression for J(θ) is incorrect, which should be Q(s_{t_0}, π_θ(s_{t_0})). | ICLR_2021_863 | ICLR_2021 | Weakness 1. The presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques includes Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the il... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 106 |
8: s/expensive approaches2) allows/expensive approaches,2) allows/ p.8: s/estimates3) is/estimates, and3) is/ In the references: Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in... | ICLR_2021_872 | ICLR_2021 | The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can a ... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 107 |
3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates? | NIPS_2016_339 | NIPS_2016 | weakness of the model. How would the values in table 1 change without this extra assumption? 3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates? 4. An answer ... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 108 |
1. The authors should make clear the distinction of when the proposed method is trained using only weak supervision and when it is semi-supervised trained. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Full... | 4N97bz1sP6 | ICLR_2024 | 1. The authors should make clear the distinction of when the proposed method is trained using only weak supervision and when it is semi-supervised trained. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Full... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 121 |
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. | NIPS_2020_1454 | NIPS_2020 | - Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. - Claims to be SOTA on three datasets, but this does not seem to be the case. Does not evaluate o... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 122 |
- "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper. | NIPS_2018_25 | NIPS_2018 | - My understanding is that R,t and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the c... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 123 |
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exe... | NIPS_2021_1222 | NIPS_2021 | Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustne... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 130 |
1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new. | NIPS_2018_476 | NIPS_2018 | Weakness] 1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new. 2) Theoretical proofs of existing algorithm might be regarded as some incremental contributions. 3) Experiments are somewhat weak: 3-1) I was wondering why Authors conducted experiments with lam... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 134 |
2. Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis is given on it. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \... | NIPS_2019_1131 | NIPS_2019 | 1. There is no discussion on the choice of "proximity" and the nature of the task. On the proposed tasks, proximity on the fingertip Cartesian positions is strongly correlated with proximity in the solution space. However, this relationship doesn't hold for certain tasks. For example, in a complicated maze, two nearby ... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 138 |
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted. | 5UW6Mivj9M | EMNLP_2023 | 1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted.
2) Relatedly, it was hard to discern what was novel in the paper and what had already been tried by others.
3) Since the improvement in number... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 139 |
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function,... | zpayaLaUhL | EMNLP_2023 | - Limited Experiments
- Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function,... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 142 |
5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper. | X4ATu1huMJ | ICLR_2024 | **Overall comment**
The paper discusses evaluating TTA methods across multiple settings, and how to choose the correct method during test-time. I would argue most of the methods/model selection strategies that are discussed in the paper are not novel and/or existed before, and the paper does not have a lot of algorithm... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 144 |
2. The authors need to show a graph showing the plot of T vs number of images, and Expectation(T) over the imagenet test set. It is important to understand whether the performance improvement stems solely from the network design to exploit spatial redundancies, or whether the redudancies stem from the nature of ImageNe... | NIPS_2020_204 | NIPS_2020 | 1.The authors have done a good job with placing their work appropriately. One point of weakness is insufficient comparison to approaches that aim to reduce spatial redudancy, or make the networks more efficient specifically the ones skipping layers/channels. Comparison to OctConv and SkipNet even for a single datapoint... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 157 |
- It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down. | NIPS_2017_114 | NIPS_2017 | - More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios.
- The CIFAR-10 results are a little disappointing with respect to temporal ensembles (although the results are comparable and the proposed approach has other advantages)
- An evaluation on the more challenging... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 163 |
2. To utilize a volumetric representation in the deformation field is not a novel idea. In the real-time dynamic reconstruction task, VolumeDeform [1] has proposed volumetric grids to encode both the geometry and motion, respectively. | NIPS_2022_728 | NIPS_2022 | Weakness 1. The setup of capturing strategy is complicated and is not easy for applications in real life. To initialize the canonical space, the first stage is to capture the static state using a moving camera. Then to model motions, the second stage is to capture dynamic states using a few (4) fixed cameras. Such a 2-... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 169 |
4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous work. Please, cite the source appropriately. | NIPS_2018_707 | NIPS_2018 | weakness of the paper is the lack of experimental comparison with the state of the art. The paper spends whole page explaining reasons why the presented approach might perform better under some circumstances, but there is no hard evidence at all. What is the reason not to perform an empirical comparison to the joint be... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 171 |
3. The innovations of network architecture design and constraint embedding are rather limited. The authors discussed that the performance is limited by the performance of the oracle expert. | NIPS_2022_69 | NIPS_2022 | 1. This work uses an antiquated GNN model and method, it seriously impacts the performance of this framework. The baseline algorithms/methods are also antiquated. 2. The experimental results did not show that this work model obviously outperforms other variant comparison algorithms/models. 3. The innovations of network... | 3 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 172 |
12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct? I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraph... | ICLR_2022_3205 | ICLR_2022 | This method trades one intractible problem for another: it requires the learning of cross-values v_{e'}(x_t; e) for all pairs of possible environments e, e'. It is not clear that this will be an improvement when scaling up.
At a few points the paper introduces approximations, but the gap to the true value and the... | 4 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 177 |
1) there is a drop of correlation after a short period of training, which goes up with more training iterations; | NIPS_2022_1770 | NIPS_2022 | Weakness: There are still several concerns with the finding that the perplexity is highly correlated with the number of decoder parameters.
According to Figure 4, the correlation decreases as top-10% architectures are chosen instead of top-100%, which indicates that the training-free proxy is less accurate for paramete... | 4 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 179 |
2) The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ, τ should be shown in a rigorous way or this paper mention it. | ICLR_2023_650 | ICLR_2023 | 1.One severe problem of this paper is that it misses several important related work/baselines to compare[1,2,3,4], either in discussion [1,2,3,4]or experiments[1,2]. This paper addresses to design a normalization layer that can be plugged in the network for avoiding the dimensional collapse of representation (in interm... | 4 | {
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 181 |
# RevUtil: Measuring the Utility of Peer Reviews for Authors

## 📚 Overview
Providing constructive feedback to authors is a key goal of peer review. To support research on evaluating and generating useful peer review comments, we introduce RevUtil, a dataset for measuring the utility of peer review feedback.
RevUtil focuses on four main aspects of review comments:
- Actionability – Can the author act on the comment?
- Grounding & Specificity – Is the comment concrete and tied to the paper?
- Verifiability – Can the statement be checked against the paper?
- Helpfulness – Does the comment assist the author in improving their work?
## 🧑🔬 RevUtil Human
- 1,430 review comments from real peer reviews.
- Each comment is annotated independently by three human raters.
- Labels are provided as `"gold"` (3/3 agreement), `"silver"` (2/3 agreement), or `"none"` (no agreement).
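The mapping from three annotator labels to a label type can be sketched as follows (an illustrative re-implementation of the aggregation rule, not the released annotation code):

```python
from collections import Counter

def label_type(labels):
    """Return "gold" (3/3 agreement), "silver" (2/3), or "none" (no majority)."""
    top_count = Counter(labels).most_common(1)[0][1]  # size of the largest agreeing group
    return {3: "gold", 2: "silver"}.get(top_count, "none")

print(label_type(["5", "5", "5"]))  # gold
print(label_type(["5", "5", "1"]))  # silver
print(label_type(["1", "3", "5"]))  # none
```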
Key columns:
| Column | Description |
|---|---|
| `paper_id` | ID of the reviewed paper |
| `venue` | Conference or journal name |
| `focused_review` | Full review (weakness + suggestion sections) |
| `review_point` | Individual review comment being evaluated |
| `id` | Unique ID for the review point |
| `batch` | Annotation batch/study identifier |
| `ASPECT` | Dictionary with annotators and their labels |
| `ASPECT_label` | Majority label (if available) |
| `ASPECT_label_type` | `"gold"`, `"silver"`, or `"none"` |
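A single record then looks roughly like the following (a hand-assembled sketch using the actionability aspect; field values are illustrative and truncated where long):

```python
# A RevUtil Human record, sketched as a plain dict (actionability aspect shown).
row = {
    "review_point": "- Table 4 needs a little more clarification, what splits are used ...",
    "paper_id": "ACL_2017_726_review",
    "venue": "ACL_2017",
    "focused_review": "... full weakness/suggestion text ...",
    "batch": 2,
    "id": 33,
    "actionability": {
        "annotators": ["annotator_a", "annotator_b", "annotator_c"],
        "labels": ["5", "5", "5"],
    },
    "actionability_label": "5",
    "actionability_label_type": "gold",
}

# All three annotators agreed, so the aggregated label type is "gold".
labels = row["actionability"]["labels"]
print(len(set(labels)) == 1, row["actionability_label_type"])  # True gold
```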
## 🚀 Usage
You can load the datasets directly via 🤗 Datasets:
```python
from datasets import load_dataset

# Human annotations
human = load_dataset("boda/RevUtil_human")

# Synthetic annotations
synthetic = load_dataset("boda/RevUtil_synthetic")
```
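Each split can then be filtered on the agreement columns. The selection logic is sketched here on plain dicts so it runs standalone; with 🤗 Datasets the equivalent is `human[split].filter(lambda r: r["actionability_label_type"] == "gold")` (split names may vary, check `human.keys()`):

```python
# Rows shaped like RevUtil Human records (values illustrative).
rows = [
    {"id": 31, "actionability_label": "5", "actionability_label_type": "gold"},
    {"id": 240, "actionability_label": "3", "actionability_label_type": "gold"},
    {"id": 999, "actionability_label": None, "actionability_label_type": "none"},
]

# Keep only comments where all three annotators agreed.
gold = [r for r in rows if r["actionability_label_type"] == "gold"]
print([r["id"] for r in gold])  # [31, 240]
```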
## 📎 Citation
```bibtex
@inproceedings{sadallah-etal-2025-good,
    title = "The Good, the Bad and the Constructive: Automatically Measuring Peer Review{'}s Utility for Authors",
    author = {Sadallah, Abdelrahman and
      Baumg{\"a}rtner, Tim and
      Gurevych, Iryna and
      Briscoe, Ted},
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1476/",
    doi = "10.18653/v1/2025.emnlp-main.1476",
    pages = "28979--29009",
    ISBN = "979-8-89176-332-6"
}
```