Peer Review History
Original Submission: September 8, 2023
PCLM-D-23-00196
Matilda v1.0: An R package for probabilistic climate projections using a reduced complexity climate model
PLOS Climate

Dear Dr. Brown,

Thank you for submitting your manuscript to PLOS Climate. After careful consideration, we feel that it has merit but does not fully meet PLOS Climate's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 23, 2024, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at climate@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pclm/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,
Steven L. Forman
Academic Editor
PLOS Climate

Journal Requirements:

1. We ask that a manuscript source file is provided at revision. Please upload your manuscript file as a .doc, .docx, .rtf or .tex. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

This is a well-written manuscript that presents a new R software package to enhance carbon cycle simulations with the Hector-based model. This software has the functionality to weight various simulations against data from historical records. In turn, this package can combine weighted scores of simulations to offer an amalgamation of temperature and carbon projections. This paper does not present new results but shares a novel computational approach that could generate synthetic data and insights on the carbon cycle. The topic of this manuscript and the release of open R-based code are well within the purview of PLOS Climate. I concur with the two reviewers that just minor revisions are needed on this manuscript. I find this paper to be clear and carefully written, with an appropriate test of this R-based software package. I apologize for the lengthy review time, but obtaining credible reviews was a challenge. My reading of the manuscript and of the reviews offers additional comments for the authors' consideration.

1. This combinational computational approach may supplant a Bayesian analysis. Please discuss this software in reference to Bayesian analysis and, as one reviewer indicates, add an explanation of the a posteriori projections. Why is this software package better or worse than standard Bayesian analysis?

2. Model weighting is an appropriate inclusion for this software. However, how one weights temperature and CO2 output can lead to various scenarios. There needs to be some guidance from the authors on the weighting scheme for the less skillful or initial user. The weighting schemes, and their basis for further model use, may be an Achilles heel. Please consider carefully the comments and suggestions from Reviewer 2.

3. A deeper discussion is needed of the biases associated with the error term in the data, particularly the lower-quality data from the 19th century. Please discuss how underestimating errors can lead to less constrained results and biases.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does this manuscript meet PLOS Climate's publication criteria? Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe methodologically and ethically rigorous research with conclusions that are appropriately drawn based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********
3. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS Climate does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you very much for submitting this manuscript to PLOS Climate. The work overall looks interesting; here are a few concerns:

1. How is the developed package novel and robust compared with existing ones?
2. How is the package helpful in uncertainty quantification and reduction?
3. Is this package computationally efficient?

Reviewer #2: The manuscript describes an R software package to facilitate running simulations with the Hector carbon cycle-climate model. The software also enables users to score simulations against historical observations using either prepackaged standard metrics or new user-defined metrics. Users can combine simulations as weighted by their scores to synthesize projections across the perturbed parameter ensembles.

Regarding scope - As the authors clearly state within the manuscript itself, this work does not constitute new scientific results. However, this work describes software and tools that will enable users to generate new scientific results. Based on the PLOS Climate scope, this is well within the scope for this journal and will be of broad interest to its readers (specifically, "... we consider systematic reviews and meta-analyses, qualitative research, replication studies, submissions reporting null and negative results, and submissions describing methods, software, databases and tools.").

The manuscript is well-organized and well-written. The conclusions (primarily around the utility of the software) follow easily from the demonstrations and documentation that have been provided. I don't think my comments will require much beyond additional explanations and caveats, so I recommend minor revisions.

General comments:

Model weighting, L89-92 & Sec 2.6.1 – Can the authors comment on how their model weighting approaches differ from a formal Bayesian calibration? For example, it's clear how a posteriori projections are generated here, and I can see the relationship between this approach and a traditional Bayesian one. But I don't see the parameters' posterior distributions. Is that something that users can generate as well? Relatedly, the Bayesian model weighting scheme strikes me as basically doing Bayesian model averaging (e.g., Eq. 6). Is there some way in which this is distinct from BMA? Some discussion of BMA would be appropriate, to make clear that what users are doing here is well established.
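To make the BMA analogy concrete, here is a minimal sketch in plain R (not Matilda's actual API; the function name and the scores below are hypothetical) of how likelihood-based scores reduce to Bayesian-model-averaging weights when the prior probabilities of the ensemble members are equal:

```r
# Minimal illustrative sketch, not Matilda's API. Under Bayesian model
# averaging with equal prior probabilities for the ensemble members, the
# posterior weight of each member is its likelihood (score) normalized so
# that all weights sum to one.
bma_weights <- function(likelihood,
                        prior = rep(1 / length(likelihood), length(likelihood))) {
  unnorm <- likelihood * prior   # posterior model probability, up to a constant
  unnorm / sum(unnorm)           # normalize to sum to 1
}

# Hypothetical scores for three ensemble members:
w <- bma_weights(c(0.02, 0.10, 0.05))
round(w, 3)
#> 0.118 0.588 0.294
```

A weighted projection would then be the weight-averaged ensemble output, e.g. `sum(w * member_projection)` for a given year, which is the sense in which the reviewer reads the weighting scheme as Bayesian model averaging.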
Sec. 2.7 – I appreciate and support the inclusion of the model weighting here, but note that model averaging can be a fraught endeavor, in particular depending on how we determine the model weights. That is, weights determined using (e.g.) temperature output vs. CO2 output vs. both can end up quite different. Can the authors give their perspective, and some appropriate caveats, on this?

Sec. 2.6.1 – Are model-data residuals assumed to all be independent of one another? There's evidence that accounting for autoregressive residuals is important (e.g., Ruckert et al 2017; https://doi.org/10.1007/s10584-016-1858-z). Potentially over-simplifying the error structure can risk biasing projections and posterior model inference. The Vega-Westhoff et al (2019; https://doi.org/10.1029/2018EF001082) reference that is cited in the present manuscript has a bit of discussion on this and, I think, also uses a first-order autoregressive model for this reason.

Sec. 2.6.1, sigma – I agree that sigma ought to be chosen based on our observational data, and there are situations where some hand-tuning that departs from the data can be warranted. Should sigma be time-varying as well? Again, the Ruckert et al paper above also found that accounting for heteroskedasticity in the residuals is important (as noted in the Vega-Westhoff paper). This can have large effects when, for example, our data for global mean surface temperature from the 1800s have several times the magnitude of uncertainty of more recent temperature data. When I checked the "no" box for Statistical Analysis Rigor, I was thinking about some stronger caveats/model facility for AR errors there. I don't think this is necessarily _wrong_, just that it merits further discussion and/or additional software features.

L447-448 and thereabouts – This is an important caveat related to my cautious optimism above about including the model weighting schemes. My sense is that users are going to need stronger guardrails against misusing the model weighting methods. Can the authors provide insight into when different weighting methods would be appropriate? For example, the ramp scoring resembles simple precalibration, whereas the Bayesian scoring is analogous to BMA. For what situations of models, quantity and quality of data, constraint of future projections, etc. might each be most appropriate?
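A minimal sketch of the error model the reviewer is pointing to, in plain R (not Matilda's API; all names and numbers below are hypothetical): the residual log-likelihood under first-order autoregressive (AR(1)) errors with a time-varying sigma, for example one taken from the published uncertainty of each observation.

```r
# Illustrative sketch only -- plain R, not Matilda's API.
# Log-likelihood of model-data residuals assuming AR(1) errors with a
# time-varying (heteroskedastic) sigma, e.g. the reported uncertainty of
# each observation.
ar1_loglik <- function(obs, model, sigma, rho) {
  r <- obs - model                           # model-data residuals
  n <- length(r)
  # AR(1) innovations: each residual conditional on the previous one; the
  # first residual is treated (approximately) as a draw from the stationary
  # distribution, as if sigma were constant at sigma[1].
  innov    <- c(r[1], r[-1] - rho * r[-n])
  innov_sd <- c(sigma[1] / sqrt(1 - rho^2), sigma[-1])
  sum(dnorm(innov, mean = 0, sd = innov_sd, log = TRUE))
}

# Hypothetical toy example: larger observational errors early in the record
obs   <- c(0.10, 0.05, 0.20, 0.25, 0.30)   # "observed" anomalies
mod   <- c(0.08, 0.10, 0.15, 0.22, 0.33)   # one model run
sigma <- c(0.10, 0.10, 0.05, 0.05, 0.05)   # 19th-century vs. modern errors
ar1_loglik(obs, mod, sigma, rho = 0.5)
```

Setting rho = 0 and a constant sigma recovers the independent, homoskedastic case, so a facility along these lines could in principle be offered without changing the package's default behaviour.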
Specific comments:

The code installs and runs nicely for me in R as I followed along with the code blocks in the manuscript. In block 5, I found the reporting of all the individual parameter sets to stdout to be a bit verbose/cluttered ("setting S to … setting alpha to…."). This could probably be tidied up in some way, maybe with a "verbose" argument for `iterate_model` and `set_params` and/or by putting all of the parameters in a row?

Sec. 2.2 – This is a journal formatting question – Will URLs be provided in a Data or Software Availability section at the end of the paper? And is there a link to the full documentation? I see in the Github repo where all the Rd files live (in /man/), but adding some detail to the top-level README, including a link to the formatted documentation, would round this out nicely. I don't know to what extent such documentation functionality is requisite for publication, as compared to (say) GMD, JAMES, or JOSS.

L172 – "uniform multivariate" – My understanding here is that the marginal prior distributions for the model parameters are assumed to be normal or log-normal, and all independent of one another. I think independence among the parameters is what this "uniform multivariate" is referring to: that any given value of one parameter is equally likely to occur with any other value of another parameter? Emphasizing independence could be useful here, if my understanding is correct. If I'm wrong here, apologies in advance, and there may be some other clarification that would be useful.

L250-251 – While this isn't done, is this a capability in Matilda? I may have missed it, but I don't see where the reference years for the pre-industrial period are defined. Does it match the IPCC, 1850-1900? This is worth stating somewhere in the manuscript. In the software, is there a place where users can/must specify reference periods for outputs? Or is there some other way that the software is making sure that the reference periods for model and data match?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. Do you want your identity to be public for this peer review? If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Matilda v1.0: An R package for probabilistic climate projections using a reduced complexity climate model
PCLM-D-23-00196R1

Dear Dr. Brown,

We are pleased to inform you that your manuscript 'Matilda v1.0: An R package for probabilistic climate projections using a reduced complexity climate model' has been provisionally accepted for publication in PLOS Climate.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email from a member of our team. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact climate@plos.org.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Climate.

Best regards,
Steven L. Forman
Academic Editor
PLOS Climate

***********************************************************

Well-done response to reviews and further contextual information on this new data analysis and modeling approach.

Reviewer Comments (if any, and for reference):
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.