Comments on GrokInFullness: Publication Bias In Climate Science

Max (2020-11-22):
You missed something important: the two papers deal with completely different aspects of climate change, so there should be no overlap. The first is about the climate sensitivity of CO2 (as you explained); the second is about climate change in ocean ecosystems and does not include anything about the climate sensitivity of CO2. The results from the first paper therefore stand unchallenged.

Quote from the second paper's data-collection section: "...identified articles for experimental results pertaining to climate change in ocean ecosystems. The search was performed with no restrictions on publication year, using different combinations of the terms: (acidification* AND ocean*) OR (acidification* AND marine*) OR (global warming* AND marine*) OR (global warming* AND ocean*) OR (climate change* AND marine* AND experiment*) OR (climate change* AND ocean* AND experiment*)."

I think you were confused by the 1.6 estimate in the second paper (roughly in line with the first one). That is not the climate sensitivity; it is a purely statistical measure, Hedges' d.
(https://www.blogger.com/profile/16866593838760602133)

Kevin Dick (2019-09-22):
Thanks for this! It really helped when I was doing my own analysis. I did a simple statistical evaluation of climate model accuracy and found that models systematically over-predict warming, even on a 15-year time horizon where the model builders actually experienced some of those 15 years before doing the runs:

https://possibleinsight.com/climate-models-as-prediction-algorithms/

Then I went looking and found several other lines of evidence against climate model accuracy:

https://possibleinsight.com/evidence-against-climate-model-accuracy/

I used your analysis of publication bias as the capstone. It brought everything together. I share your annoyance at the second article's title. For a field that makes a big deal about "consensus", it seems like the title should have been "Consensus of 1042 estimates from 120 studies puts climate sensitivity at 1.6 deg C."
(https://www.blogger.com/profile/10212718353789153299)

Anonymous (2018-01-14):
@Piyo, here's one explanation: you're working with multiple ways of measuring publication bias. In the paragraph you cite, Harlos and colleagues write about one specific measure of bias: under-reporting of small, low-powered results. Michaels and Reckova/Irsova find this, but Harlos and colleagues say they do not. Harlos and colleagues do find other patterns: small effects show up less often in prestigious journals and in abstracts. This all sounds like the same thing at first, but I suspect Harlos and colleagues regard publication venue and emphasis in the abstract as less serious than outright failure to publish small or negative results, and I'm inclined to agree. I don't think it's realistic to ask people to write an abstract around a randomly chosen effect from the paper; by design, the abstract will usually highlight the findings with the larger implications. What do you think?
(https://www.blogger.com/profile/18282559994474885302)

Piyo (2017-10-19):
I don't even understand this, and to the extent I do, it seems like a total lie. How am I to parse this?

"Our meta-analysis did not find evidence of small, statistically non-significant results being under-reported in our sample of climate change articles. This result opposes findings by Michaels (2008) and Reckova and Irsova (2015), which both found publication bias in the global climate change literature, albeit with a smaller sample size for their meta-analysis and in other sub-disciplines of climate change science."
(https://www.blogger.com/profile/05775346125247800627)
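[Editor's note] Max's point above is that Hedges' d is a unitless standardized mean difference, not a temperature, so a value of 1.6 is only coincidentally near a climate-sensitivity estimate in degrees. A minimal sketch of how the statistic is computed (all numbers below are made up for illustration; this is not data from either paper):

```python
import math

def hedges_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Hedges' d: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    # Cohen's d: difference in means, expressed in pooled-SD units
    d = (mean_treat - mean_ctrl) / s_pooled
    # Hedges' small-sample bias correction factor J
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)
    return d * j

# Illustrative (invented) group summaries: the result is a unitless
# effect size, not degrees C.
print(round(hedges_d(12.0, 10.0, 1.5, 1.5, 20, 20), 3))  # → 1.307
```

Because d is scaled by the pooled standard deviation, a d of 1.6 means "the treatment mean sits 1.6 standard deviations above the control mean," whatever the measured quantity or its units.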