Peer Review History
Original Submission: January 17, 2023
Dear Dr Zhang,

Thank you very much for submitting your Methods entitled 'SparsePro: an efficient fine-mapping method integrating summary statistics and functional annotations' to PLOS Genetics. The manuscript was fully evaluated at the editorial level and by independent peer reviewers. The reviewers appreciated the attention to an important problem, but raised some substantial concerns about the current manuscript. The main concern raised by all reviewers is the strong similarity between the proposed variational inference framework and the SuSiE model (Wang et al., 2020). Reviewers are unsure whether the subtle changes made to SuSiE in the proposed framework are a misinterpretation of the original model or have well-justified reasons. To address this concern, the editors suggest submitting a new manuscript that uses the current work as a starting point. This new manuscript should explicitly connect the proposed framework to the SuSiE model and algorithm, explain the motivation for modifications made to SuSiE before incorporating annotations, and provide comments on why these changes are necessary.

Based on the reviews, we will not be able to accept this version of the manuscript, but we would be willing to review a much-revised version. We therefore suggest major revision with these important details clarified. We cannot, of course, promise publication at that time.

Should you decide to revise the manuscript for further consideration here, your revisions should address the specific points made by each reviewer. We will also require a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript.

If you decide to revise the manuscript for further consideration at PLOS Genetics, please aim to resubmit within the next 60 days, unless it will take extra time to address the concerns of the reviewers, in which case we would appreciate an expected resubmission date by email to plosgenetics@plos.org.

If present, accompanying reviewer attachments are included with this email; please notify the journal office if any appear to be missing. They will also be available for download from the link below. You can use this link to log into the system when you are ready to submit a revised version, having first consulted our Submission Checklist.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please be aware that our data availability policy requires that all numerical data underlying graphs or summary statistics are included with the submission, and you will need to provide this upon resubmission if not already present. In addition, we do not permit the inclusion of phrases such as "data not shown" or "unpublished results" in manuscripts. All points should be backed up by data provided with the submission.

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user.
Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

PLOS has incorporated Similarity Check, powered by iThenticate, into its journal-wide submission system in order to screen submitted content for originality before publication. Each PLOS journal undertakes screening on a proportion of submitted articles. You will be contacted if needed following the screening process.

To resubmit, use the link below and 'Revise Submission' in the 'Submissions Needing Revision' folder.

We are sorry that we cannot be more positive about your manuscript at this stage. Please do not hesitate to contact us if you have any concerns or questions.

Yours sincerely,

Gao Wang
Guest Editor
PLOS Genetics

Xiaofeng Zhu
Section Editor
PLOS Genetics

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: Zhang et al. present their extension of the SuSiE model to incorporate functional annotations. SparsePro prevents irrelevant annotations from contaminating the model by testing the functional annotations before integrating them into the model. It estimates the enrichment coefficients within the IBSS algorithm. Moreover, it estimates some hyper-parameters outside the IBSS algorithm to avoid convergence issues. The manuscript presents simulation results to support the method. The authors also show application results on UK Biobank traits. It is well written. As SuSiE has become a popular approach for fine-mapping, it is desirable to have a version supporting functional annotations.

Major comments:

1. The SparsePro model is essentially the SuSiE model with functional annotations, with some changes to the prior and residual variance estimation. As most readers are familiar with SuSiE, I suggest using the terminology and concepts from SuSiE in the manuscript, for example, replacing 'effect group' with 'credible set' and using a credible set definition similar to that in SuSiE. The derivations in supplementary section 1 before the annotation estimation are the same as in SuSiE, but the SuSiE paper is not cited.

2. The statement on line 214 says 'in SuSiE, Bayes factors were normalized by sum of Bayes factors … which increased power for identifying causal variants.' This statement is not accurate. The softmax is the same as normalizing weighted Bayes factors (weighted by the prior probability \pi_g), so the log probability in the 3rd equation on page 37 is the same as log(\pi_g * Bayes factor). This is not the cause of SparsePro's higher power in the simulation. The higher power of SparsePro- may instead be caused by estimating hyper-parameters outside the iterative algorithm and by the different PIP definition.

3. The PIP for each variant is computed as the maximum among effect groups. Why not compute it as 1 - \prod_k(1 - \gamma_{kg}), which is the theoretical definition of the PIP? (See the sketch contrasting the two definitions after this review.)

4. From lines 310-313, it seems that the 95% causal set (one set containing all causal variants) is used in the set-level comparisons, not 95% credible sets as defined in SuSiE. Is there any reason not to use credible sets for the comparisons? SparsePro, SuSiE and FINEMAP all output 95% credible sets. It is unnecessary to combine them into one causal set. I suggest conducting set-level comparisons using 95% credible sets (coverage / power / size of each single 95% credible set).

5. The method estimates the enrichment coefficients using 'one-at-a-time' coordinate ascent. Is this the same as jointly estimating the coefficients? Is it possible to estimate them jointly? For your reference, TORUS (Wen, 2016) estimates the coefficients jointly using EM.

Minor comments:

1. The credible sets from SuSiE have a purity > 0.5 filter by default. Is the same purity filter applied to the other methods? If not, the comparison is unfair.

2. The simulation has a per-variant heritability of 10^(-4). Does this mean '--simu-hsq' is K * 10^(-4) in the GCTA simulation? Or does it mean the simulated causal variants have similar effect sizes?

3. The relationship between the simulation parameter W and the relative enrichment vector w in section 4.1 (page 10) is unclear.

4. Line 242 states 'causal effect sizes may vary across different subpopulations'; where do the subpopulations come from? This is just single-study fine-mapping.

5. Do the computational times in the figures include estimating \tau_\beta and \tau_y and testing the functional annotations?

6. Is there any reason to use log(20) as the entropy difference cutoff? Is there any reason to use a 10^(-5) p-value threshold for the G-test?

7. In the genome-wide simulation, the results in the central 1-Mb region of each 3-Mb window are considered, but a signal could also be at or close to the boundary of the central 1-Mb region. How are these signals analyzed?

8. What is the scale of the coefficient w in Fig S7?
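The sketch referenced in major comment 3: a minimal numpy illustration of the two PIP definitions being contrasted. The array gamma and its values are illustrative placeholders, not SparsePro's actual variables or output.

```python
import numpy as np

# gamma[k, g]: posterior probability that variant g is the causal variant in
# effect group k (each row sums to 1). These values are illustrative only.
gamma = np.array([
    [0.70, 0.20, 0.10],   # effect group 1
    [0.15, 0.60, 0.25],   # effect group 2
])

# Definition used in the manuscript: PIP as the maximum over effect groups.
pip_max = gamma.max(axis=0)                      # -> 0.70, 0.60, 0.25

# Definition suggested by the reviewer: probability of being causal in at
# least one effect group.
pip_union = 1.0 - np.prod(1.0 - gamma, axis=0)   # -> 0.745, 0.68, 0.325

print(pip_max, pip_union)
```

The max-over-groups value is never larger than the union-based value, and the two coincide when a variant carries non-zero weight in only one effect group.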
Reviewer #2: Zhang et al. proposed an interesting enhancement of the SuSiE model of Wang et al. (JRSS-B, 2021) to perform informed fine-mapping using side information/annotations. The authors show some convincing evidence that their approach (SparsePro) has greater power than some competing methods, such as SuSiE+PolyFun. Overall the approach is sound, and I am mostly positive regarding the manuscript. The level of detail provided by the authors is satisfactory. Still, some typos disrupt the overall quality of the manuscript, as do some statements that need to be clarified.

Major comments:

1) While most of the manuscript reads quite well, there are a couple of sentences that do not flow well or are grammatically incorrect. For example, in the last paragraph of the introduction, the authors wrote "In line with the idea of grouping correlated variants together into effect groups, we proposed Sparse Projections to Causal Effects (SparsePro) to further improve fine-mapping efficiency and accuracy. First, within each effect group, we additionally incorporate"; in this case, 'First' needs to be followed by a sentence with an active verb before 'additionally'. I am not a native English speaker, and I am aware that it can be hard to draft a manuscript. However, before considering the manuscript for acceptance, the authors need to proofread the manuscript to correct some of these problematic sentences.

2) The authors wrote: "Second, we use an efficient variational inference algorithm to further simplify the intuitive algorithm proposed in SuSiE and improve computation efficiency." I have had a close look at the algorithm proposed by Zhang et al. While the authors do not explicitly compute the marginal Bayes factor for each variant, in the case where no annotation is used, the coordinate ascent seems very similar to the one proposed by Wang et al. As Wang and colleagues provided the complexity of each coordinate ascent update, I think it would be interesting for the authors to provide the complexity of each coordinate ascent update of SparsePro. This would make it more explicit that the gain in computational speed is not only due to the implementation. For the moment, it is not clear to me how this approach differs from the IBSS algorithm. Perhaps the authors could elaborate on the computational complexity of their variational approximation of the Single Effect Model proposed by Wang et al.

3) I would be interested in seeing a set of simulations in which the annotations are misspecified/measured with noise and potentially biased toward non-causal SNPs. While substantial efforts are made to obtain high-quality annotations, it is not unlikely that many of them are poorly measured or biased. I would be interested in seeing some simulations in which the authors consider poorly measured annotations and see whether that could generate low-coverage credible sets. In general, my overall question is: given that you have some annotations, is it worth including them in a fine-mapping procedure, or could that potentially "harm" your results? Could you generate a new set of simulations in which annotations are measured with noise? Furthermore, could you try to come up with a set of simulations that could lead to problematic coverage due to annotations? For example, suppose that you use the following annotations (constructed to be problematic) in a case where you consider a model with K causal SNPs. Consider K annotations: for each non-causal SNP, set annotation k to its correlation with causal SNP k, and for each causal SNP, set each of its K annotations by sampling a random number between 0 and 1.

Minor comments:

1) There is a typo after equation 2 in the supplement for the conditional variational approximation. It is written $s_{kg=1}$ whereas it should be $s_{kg}=1$; please go through the equations to correct the other typos.

2) The equation below "Therefore, the posterior probability of the gth variant being causal in the kth effect group can be estimated as:" seems incorrect. The input of the softmax function is a scalar, whereas it should be a vector. The posterior probability of the gth variant being causal in the kth effect group should be the gth component of this softmax.

3) In the SuSiE-RSS manuscript, Zou and colleagues spend a substantial amount of work dealing with problematic LD. I would be interested in an explicit statement of what is implemented in SparsePro to circumvent problems related to LD matrices that are not full rank or to allele flipping.

4) Could you show whether the G-test used for testing the annotations correctly controls the type I error? (A sketch of such a check appears after this review.)
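The sketch referenced in minor comment 4: a generic G-test on a 2x2 causal-by-annotation table, computed here with scipy. The table construction and variable names are illustrative assumptions, not SparsePro's implementation. Repeating the computation over many independently simulated (or permuted) null annotations and recording the fraction of p-values below the nominal threshold gives an empirical check of the type I error rate.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_variants = 5000

# Null scenario: the annotation is independent of causal status.
causal = rng.random(n_variants) < 0.01
annotation = rng.random(n_variants) < 0.20

# 2x2 table: rows = causal / non-causal, columns = annotated / not annotated.
table = np.array([
    [np.sum(causal & annotation), np.sum(causal & ~annotation)],
    [np.sum(~causal & annotation), np.sum(~causal & ~annotation)],
])

# lambda_="log-likelihood" turns the chi-square test into a G-test.
g_stat, p_value, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(g_stat, p_value)
```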
Reviewer #3: The authors propose a method for estimating the hyperparameters of the sum of single effects regression. In particular, they leverage functional annotation enrichments to specify informative prior inclusion probabilities for different variants, and leverage heritability estimates to set the effect size and residual variance hyper-parameters. By incorporating this enrichment information, they are able to demonstrate improvement over fine-mapping methods that either do not leverage functional annotations or leverage functional annotations through different means (e.g. PolyFun, which computes prior inclusion probabilities given partitioned heritability estimates across a set of annotations). These are important contributions, as the selection of these hyperparameters can greatly influence the calibration and power of fine-mapping.

While I am generally positive about the work put forth here, my main recommendations are to focus the discussion and commentary on the benefits of including functional annotations, and to provide a clearer rationale for the use of heritability information to set the effect and error variance hyperparameters. The authors should modify their discussion of the algorithmic differences between SuSiE and SparsePro because it does not seem accurate: as far as I can tell, for a fixed set of hyperparameters (prior inclusion probability, effect variance, residual variance), the coordinate ascent variational inference (CAVI) employed for SparsePro and SuSiE's IBSS algorithm (which is also CAVI) are the same.

**Estimating prior inclusion probabilities** (SparsePro vs PolyFun)

To my understanding, PolyFun provides a heuristic for forming the prior inclusion probabilities based on a heritability partition. SparsePro takes a less heuristic approach by directly estimating the enrichment of selected variants, and using those enrichments to refine the posterior approximations made by SuSiE. I very much support this approach, and the authors successfully show through extensive simulations how their method improves on the heuristic approach to estimating the prior inclusion probabilities developed in PolyFun.

**Estimating variance hyperparameters** (SparsePro vs SuSiE)

SuSiE uses a variational empirical Bayes approach to estimate the effect variance and residual variance; this just means optimizing the objective with respect to these hyperparameters. In contrast, SparsePro fixes these hyperparameters to values informed by heritability estimates. For example, the residual variance is fixed to 1 - h2, where h2 is a locus-level heritability estimate. In contrast, a conservative approach would be to set the residual variance to 1. I am concerned that setting the residual variance to 1 - h2 may disrupt calibration of the posterior. Basically, while h2 is an estimate of the heritability in the locus, fine-mapped association signals will only explain a portion of this heritability. 1 - h2 may be too small and encourage the model to select variants in a way that is anti-conservative.

**The variational approximations are identical**

The discussion and supplemental materials emphasize the differences in computation between SuSiE's IBSS algorithm and the variational updates derived in this paper. However, it is important to note that the variational approximations for SparsePro and SuSiE are identical: $q(\beta, S) = \prod_k q(\beta_k, s_k)$. Consequently, all differences in performance between SuSiE and SparsePro- (without annotations) should be explained by (1) differences in the hyperparameters/hyperparameter estimation procedure and (2) implementation details (e.g. convergence criteria, order of coordinate updates, etc., which may influence which local optimum of the variational objective is found). In particular, the following (lines 214-215) does not seem correct: "In SuSiE, Bayes factors were normalized by sum of Bayes factors across all variants while SparsePro uses the softmax function to normalize posterior probabilities which increased power for identifying causal variants". I believe the marginal log Bayes factors are equal (up to a constant) to the posterior (log) probabilities referenced here (the 4th expression in the supplementary material, page 27). Thus normalizing Bayes factors is equivalent to applying the softmax to the log probabilities.
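For reference, the equivalence described in the preceding paragraph can be written out directly. Using the reviewers' notation, with $\pi_g$ the prior inclusion probability of variant $g$ and $\mathrm{BF}_g$ its Bayes factor:

\[
\mathrm{softmax}_g\left(\log \pi_g + \log \mathrm{BF}_g\right)
= \frac{\exp\left(\log \pi_g + \log \mathrm{BF}_g\right)}{\sum_{g'} \exp\left(\log \pi_{g'} + \log \mathrm{BF}_{g'}\right)}
= \frac{\pi_g \, \mathrm{BF}_g}{\sum_{g'} \pi_{g'} \, \mathrm{BF}_{g'}},
\]

so applying the softmax to the per-variant log probabilities is the same operation as normalizing prior-weighted Bayes factors across variants; with a uniform prior ($\pi_g = 1/G$), it reduces to normalizing the Bayes factors themselves.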
**Suggested Revisions**

- Clarify the similarities and differences between SparsePro and SuSiE. I believe the variational approximations are the same, but the real contributions here are the annotation- and heritability-informed hyperparameter settings, which are important contributions that can stand on their own.

- SparsePro- and SuSiE should be identical up to the setting of the effect variance and residual variance hyperparameters. Commentary attempting to explain the difference in performance between SuSiE and SparsePro- should be revised, because at times it implies a difference in the algorithm/optimization procedure, which does not seem to be correct.

- Please discuss/justify the heritability-based estimates for the effect variance and residual variance. In particular, I am concerned that using 1 - h2 for the residual variance will make the algorithm anti-conservative (see above) by underestimating the residual variance in the regression problem.

- Assess the calibration of PIPs for SparsePro+ (e.g. Figure S1 in the SuSiE manuscript): the AUC plots tell us that ranking variants by PIP works well, but they do not tell us that thresholding at some nominal PIP value controls the false positive rate. Good PIP calibration would go a long way toward addressing my concerns about the choice of the residual variance parameter. (A sketch of such a calibration check appears after these comments.)

**Minor points**

- Maybe a simpler enrichment analysis for the UKBB biomarkers would be (1) causal variants in this phenotype vs. (2) causal variants discovered in other phenotypes. It would more clearly highlight that the enrichment of the tissue-specific annotation in the relevant biomarker is above and beyond the background level of enrichment across causal variants discovered in all phenotypes.

- Were causal variants defined as the top variant per credible set or as all variants in the credible set?

- It is not clear to me which annotations are used in the UKBB biomarker analysis. This should be clearly stated in section 4.4 or in the Methods.

- To clarify the comparison between PolyFun and SparsePro, it may be good to (1) run SparsePro with the prior inclusion probabilities derived from PolyFun and (2) fit SparsePro with the exact same annotations used in PolyFun (without first screening annotations based on significance). (1) vs. SparsePro+ would demonstrate that SparsePro+ makes better use of the annotation information. (2) vs. SparsePro+ would emphasize the benefit of selecting annotations based on the G-test.
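The calibration sketch referenced in the suggested revisions above, assuming simulation output in which the causal status of every variant is known. The arrays pip and is_causal are hypothetical placeholders, not output of any particular method.

```python
import numpy as np

def pip_calibration(pip, is_causal, n_bins=10):
    """Per PIP bin, compare the mean PIP with the observed causal fraction."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (pip >= lo) & (pip < hi)
        if mask.any():
            rows.append((pip[mask].mean(), is_causal[mask].mean(), int(mask.sum())))
    return rows

# Placeholder data that is perfectly calibrated by construction: the observed
# causal fraction in each bin should track the mean PIP closely.
rng = np.random.default_rng(1)
pip = rng.random(100_000)
is_causal = rng.random(100_000) < pip
for mean_pip, causal_frac, n in pip_calibration(pip, is_causal):
    print(f"{mean_pip:.2f}  {causal_frac:.2f}  {n}")
```

Bins where the observed causal fraction falls below the mean PIP would indicate anti-conservative (inflated) PIPs; the reverse would indicate conservative PIPs.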
**********

Have all data underlying the figures and results presented in the manuscript been provided? Large-scale datasets should be made available via a public repository as described in the PLOS Genetics data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Revision 1
Dear Dr Zhang,

Thank you very much for submitting your Research Article entitled 'SparsePro: an efficient fine-mapping method integrating summary statistics and functional annotations' to PLOS Genetics. The manuscript was fully evaluated at the editorial level and by independent peer reviewers. The reviewers appreciated the attention to an important topic but identified some concerns that we ask you to address in a revised manuscript.

We therefore ask you to modify the manuscript according to the review recommendations. Your revisions should address the specific points made by each reviewer. In addition, we ask that you:

1) Provide a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript.

2) Upload a Striking Image with a corresponding caption to accompany your manuscript if one is available (either a new image or an existing one from within your manuscript). If this image is judged to be suitable, it may be featured on our website. Images should ideally be high resolution, eye-catching, single panel square images. For examples, please browse our archive. If your image is from someone other than yourself, please ensure that the artist has read and agreed to the terms and conditions of the Creative Commons Attribution License. Note: we cannot publish copyrighted images.

We hope to receive your revised manuscript within the next 30 days. If you anticipate any delay in its return, we would ask you to let us know the expected resubmission date by email to plosgenetics@plos.org.

If present, accompanying reviewer attachments should be included with this email; please notify the journal office if any appear to be missing. They will also be available for download from the link below. You can use this link to log into the system when you are ready to submit a revised version, having first consulted our Submission Checklist.

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Please be aware that our data availability policy requires that all numerical data underlying graphs or summary statistics are included with the submission, and you will need to provide this upon resubmission if not already present. In addition, we do not permit the inclusion of phrases such as "data not shown" or "unpublished results" in manuscripts. All points should be backed up by data provided with the submission.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references.
Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

PLOS has incorporated Similarity Check, powered by iThenticate, into its journal-wide submission system in order to screen submitted content for originality before publication. Each PLOS journal undertakes screening on a proportion of submitted articles. You will be contacted if needed following the screening process.

To resubmit, you will need to go to the link below and 'Revise Submission' in the 'Submissions Needing Revision' folder.

Please let us know if you have any questions while making these revisions.

Yours sincerely,

Xiaofeng Zhu
Section Editor
PLOS Genetics

Xiaofeng Zhu
Section Editor
PLOS Genetics

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: Thanks for addressing the issues. I have the following follow-up questions:

1. SparsePro uses a different formulation, but the underlying model is the same as SuSiE's. The K effect groups are the same as the K single effects. I therefore suggest removing the discussion on 'Equivalence between the SuSiE IBSS algorithm and a paired mean field variational inference algorithm'. The main contributions of the manuscript lie in the functional annotations, hyperparameter estimation, and posterior summary.

2. FINEMAP provides credible sets in its output .cred file.

3. In the supplementary note (line 82), the authors said 'it might be challenging to find the appropriate threshold' for purity. However, it is important to note that the threshold for entropy (log(20)) is also arbitrary. There could be more than 50 highly correlated variants in complex regions. Does this threshold of 20 correspond to a purity level in your simulations? Could you summarize the purity of the output CSs? What do the results look like if SuSiE uses the corresponding purity filter? (See the sketch of the entropy and purity summaries after these comments.)

4. I am still unclear about the CSs around the boundary of the central 1-Mb region. A 3-Mb window has three parts: the left 1 Mb, the central 1 Mb, and the right 1 Mb. The results for the central 1-Mb part are used. What about a CS with SNPs at the right end of the central 1 Mb and the left end of the right 1 Mb? How do you handle these results?

5. Is \tau_\beta the same for all effect groups? SuSiE allows different effect priors for each single effect.

6. Extracting information from the large supplementary Table S1 according to lines 100-107 is challenging. Consider presenting these results in a figure format or incorporating them into Figure 3 for better clarity.

7. What is the largest K used when fitting the SparsePro model in the simulations and applications? What are the parameter settings for SuSiE, FINEMAP and PAINTOR?
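The sketch referenced in follow-up comment 3. For context, a uniform posterior over N equally weighted variants has entropy log(N), so a log(20) cutoff tolerates effect groups roughly as diffuse as 20 equally weighted variants. The function and variable names below are illustrative and not taken from the SparsePro or SuSiE code.

```python
import numpy as np

def entropy(gamma_k):
    """Shannon entropy of one effect group's posterior weights."""
    p = gamma_k[gamma_k > 0]
    return float(-np.sum(p * np.log(p)))

def purity(cs_indices, ld):
    """Minimum absolute pairwise correlation among credible-set variants."""
    r = np.abs(ld[np.ix_(cs_indices, cs_indices)])
    return float(r.min())

# A uniform posterior over 20 variants sits exactly at the log(20) cutoff.
uniform_20 = np.full(20, 1 / 20)
print(entropy(uniform_20), np.log(20))   # both are about 2.996
```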
Reviewer #2: I am positive about publishing this manuscript in PLOS Genetics. I would like to apologize to the authors for having taken a long time to read through their revision; I have tried to do it seriously when I had the time to do so. I think the authors answered my concerns, as well as the other reviewers' concerns, in a satisfactory way and put a substantial amount of work into improving the manuscript.

Reviewer #3: Uploaded as attachment.

**********

Have all data underlying the figures and results presented in the manuscript been provided? Large-scale datasets should be made available via a public repository as described in the PLOS Genetics data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Revision 2
Dear Dr Zhang,

We are pleased to inform you that your manuscript entitled "SparsePro: an efficient fine-mapping method integrating summary statistics and functional annotations" has been editorially accepted for publication in PLOS Genetics. Congratulations!

Before your submission can be formally accepted and sent to production, you will need to complete our formatting changes, which you will receive in a follow up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Please note: the accept date on your published article will reflect the date of this provisional acceptance, but your manuscript will not be scheduled for publication until the required changes have been made.

Once your paper is formally accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you’ve already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online or are unsure if you have already indicated as such, please let the journal staff know immediately at plosgenetics@plos.org.

In the meantime, please log into Editorial Manager at https://www.editorialmanager.com/pgenetics/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production and billing process. Note that PLOS requires an ORCID iD for all corresponding authors. Therefore, please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

If you have a press-related query, or would like to know about making your underlying data available (as you will be aware, this is required for publication), please see the end of this email. If your institution or institutions have a press office, please notify them about your upcoming article at this point, to enable them to help maximise its impact. Inform journal staff as soon as possible if you are preparing a press release for your article and need a publication date.

Thank you again for supporting open-access publishing; we are looking forward to publishing your work in PLOS Genetics!

Yours sincerely,

Gao Wang
Guest Editor
PLOS Genetics

Xiaofeng Zhu
Section Editor
PLOS Genetics

Twitter: @PLOSGenetics

----------------------------------------------------

Comments from the reviewers (if applicable):

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: Thanks for the response. I don't have any additional concerns.

Reviewer #2: While I think that the points raised by the other reviewers are interesting, from my perspective I still think that the manuscript is good enough for publication in PLOS Genetics.

Reviewer #3: I thank the authors for addressing the issues that were raised. The authors have answered my concerns and those of the other reviewers. The paper makes an important contribution by providing a way to incorporate annotations into SuSiE-style fine-mapping. I support acceptance of the paper.

**********

Have all data underlying the figures and results presented in the manuscript been provided?
Large-scale datasets should be made available via a public repository as described in the PLOS Genetics data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

----------------------------------------------------

Data Deposition

If you have submitted a Research Article or Front Matter that has associated data that are not suitable for deposition in a subject-specific public repository (such as GenBank or ArrayExpress), one way to make that data available is to deposit it in the Dryad Digital Repository. As you may recall, we ask all authors to agree to make data available; this is one way to achieve that. A full list of recommended repositories can be found on our website.

The following link will take you to the Dryad record for your article, so you won't have to re‐enter its bibliographic information, and can upload your files directly: http://datadryad.org/submit?journalID=pgenetics&manu=PGENETICS-D-23-00072R2

More information about depositing data in Dryad is available at http://www.datadryad.org/depositing. If you experience any difficulties in submitting your data, please contact help@datadryad.org for support.

Additionally, please be aware that our data availability policy requires that all numerical data underlying display items are included with the submission, and you will need to provide this before we can formally accept your manuscript, if not already present.

----------------------------------------------------

Press Queries

If you or your institution will be preparing press materials for this manuscript, or if you need to know your paper's publication date for media purposes, please inform the journal staff as soon as possible so that your submission can be scheduled accordingly. Your manuscript will remain under a strict press embargo until the publication date and time. This means an early version of your manuscript will not be published ahead of your final version. PLOS Genetics may also choose to issue a press release for your article. If there's anything the journal should know or you'd like more information, please get in touch via plosgenetics@plos.org.
Formally Accepted
PGENETICS-D-23-00072R2
SparsePro: an efficient fine-mapping method integrating summary statistics and functional annotations

Dear Dr Zhang,

We are pleased to inform you that your manuscript entitled "SparsePro: an efficient fine-mapping method integrating summary statistics and functional annotations" has been formally accepted for publication in PLOS Genetics! Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out or your manuscript is a front-matter piece, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Genetics and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Zsofi Zombor
PLOS Genetics

On behalf of:
The PLOS Genetics Team
Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
plosgenetics@plos.org | +44 (0) 1223-442823
plosgenetics.org | Twitter: @PLOSGenetics
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.