On The Danger Posed By Non-Expert Critiques Published To Large Audiences

Geoffrey North, the editor of Current Biology, has written a critical editorial that questions the role of social media in science (which I strongly suggest you read before continuing). In it, he refers to blogs as “vanity publications,” written by those “prone to self-indulgence”. He warns that blogs can be dangerous, that their speed and virality pose a serious risk to the foundations of peer review and the scientific process. While many were taken aback by his bold claims, I think he makes a lot of very astute arguments.

First, of course, he’s correct in saying not all blogs are bad. The case of arsenic life and Rosie Redfield may go down in history as the first great example of blogging truly blending with and supporting research, changing the way we view peer review and the overall system of science publication and communication. It validated the beliefs of many that social media was not the enemy of science but instead its under-utilized ally. Shortly after, even major journals began to see the merits of these new media platforms for research and outreach.

But Dr. Redfield is an expert in her field, and had legitimate concerns to blog about. As Geoffrey notes, this is not the case for most blogs. Any criticisms made by non-expert blogs “can of course be harmful — at the least there tends to be a ‘no smoke without fire’ effect.” Worse, though, once critiques are heard and publicized, there is no going back to rebuild what was lost. “Once a scientific reputation has been tainted,” Geoffrey states, “it can be hard to restore confidence.”

We all know scientists whose careers have been cut short due to unsubstantiated online critiques. So, I was disappointed that Geoffrey failed to provide a single example of a non-expert’s blog ruining a scientific career, since there surely must be many he could have drawn from. He unnecessarily weakened his overall argument by providing a clear, concise example of a blog benefiting research, peer review, and the scientific process without providing the plethora of specific counterexamples that support his claims.

And he is right, after all — we should be worried about the viewpoints of non-experts, particularly those that are distributed widely. Unlike journal editorials, which of course are thoroughly researched, edited, and written by known experts in the field, “anyone can write a blog and criticize anything”. Geoffrey’s article is a clear example of the distinction between journal publications and blogs. His rich background in science blogging, the science of science communication, and social media in general makes him an ideal expert to weigh in on this issue, and his command of the literature, clear from his extensive use of citations, sets him apart from ‘self-indulgent’ bloggers who simply feel the need to opine on topics they have not researched and do not truly understand.

Should we not be wary of critiques that are published and circulated on large platforms without any oversight or review? Peer review is the heart of science publication, a system that always separates the good from the bad and the ugly. The review system is in place because it is the only way to ensure quality. There is no doubt that Geoffrey passed his own article to a proper peer-review panel — people who are experts in the science of science communication and social media — to receive their valuable input on his opinion, because to neglect to do so would be, at its core, unscientific. It would be simply hypocritical for him to publish an editorial that attacks the work of others without first passing it in front of the critical eye of an outside editor, if only to fix even inane details like Geoffrey’s constant desire to put phrases “in quotes”. These review and editing steps are essential to ensure that a critique isn’t biased, that criticisms are fair and well supported, and that they are not based on misunderstandings.

But, as Geoffrey points out, real danger arises because bloggers are truly unaccountable. Nothing they write is attached to their names, and thus their own reputations are never on the line. While scientists are held accountable for their errors in thought or judgment, science writers are never tainted by shoddy work (e.g. Jonah Lehrer, whose reputation as a writer and scientific thinker has not been tarnished in any way since his rampant plagiarism was revealed). This imbalance in accountability leaves scientists open to unjust attacks by bloggers — attacks which can go viral and truly impact careers. And, unless they wish to take to the internet themselves, these scientists have no way to reply to scathing critiques.

We simply cannot trust non-experts to grasp the nuances necessary to discuss scientific research and engage in science communication. Their lack of accountability for their opinions, as compared to “traditional” outlets, is downright dangerous, and thus we must do our best to ensure that journals and magazines with wide readership do not give credence to unsupported remarks without proper review. While we cannot stifle “free speech”, we have to do what we can to prevent unscientific attacks from damaging the careers of hardworking scientists and writers. This means that major journals should be wary of criticisms, even internal ones, if they have not been properly vetted.

After all, perhaps the biggest crime in this age of rapid assessment and dissemination would be for a journal to publish a critique that lacks the very tenets of scientific quality that the peer review system was created to maintain.

Interested in social media and science communication? You may also be interested in my Social Media for Scientists series, including the Social Networking for Scientists Wiki.

UPDATE: Just to be clear, this post is very much meant to be taken tongue-in-cheek.

Author: Christie Wilcox

Dr. Christie Wilcox is a science writer based in the greater Seattle area. Her bylines include National Geographic, Popular Science, and Quanta. Her debut book, Venomous, was released in August 2016 (Scientific American/FSG Books). To learn more about her life and work, check out her webpage or follow her on Twitter, Google+, or Facebook.

17 thoughts on “On The Danger Posed By Non-Expert Critiques Published To Large Audiences”

  1. Don’t just tar bloggers. Those who understand the science on an issue should go on newspapers’ sites and call out those who get it wrong. This happens with some frequency in the LA Times on their “health” articles, a great many of which are merely “science-y.” Don’t just complain — do something about it when you see people getting it wrong. And don’t be too quick to assume that those with Ph.D.s necessarily get it right. I find plenty of studies with gross limitations, unfounded conclusions, etc. — as does everyone who rubs two brain cells together before reading and has some knowledge of science.

  2. I think we’re maybe being a bit hard on Geoffrey – perhaps because he does have a privileged pulpit and used some ill-chosen words – but I don’t think he was tarring all blogs with the same brush. Half of the editorial promotes Rosie Redfield’s excellent efforts in scientifically questioning the arsenic paper – for which she was, herself, unfairly criticized. And it’s not hard to find egregious examples of cowardly critiques. I’m not sure explicitly pointing out such critiques is helpful either (insofar as that might add fuel to the fire). That said, we all have filters and the internet is not exactly known for its editorial prowess. There is a strong cadre of excellent amateur blogs and these complement conventional “curated” professional blogs and other conduits. There’s also dross. Free speech is paramount and we tend to be capable of resolving the good from the bad. The alternative is not worth considering.

    1. “Free speech is paramount and we tend to be capable of resolving the good from the bad. The alternative is not worth considering.”

      Damned straight!

  3. Rush Limbaugh, Alex Jones, Glenn Beck, the people at “Natural News,” and all manner of “New Age” sites (Jacque Fresco, to name one) are examples of “bloggers” in social media who run people’s names through the mud.

    And even though the audiences of these people might simply be partisans (the choir being sung to by their preachers), they are connected to vast social networks that look primarily for “likes,” “shares,” and “votes” to spread their gospel.

  4. My response to this is that I wish peer-reviewed studies and scientifically sound information were more easily accessible to the mainstream. Most peer-reviewed papers, scientifically constructed experiments, and studies are hidden behind cryptic publications with non-consumer-friendly paywalls, subscriptions, or fees. I don’t know that that is the best way to get the right and real information out anymore. Today, people do their, and I use this term loosely, *research*, via Google and Wikipedia, and then they regurgitate their findings in non-peer-reviewed blogs and chats and posts. As a responsible science community we might want to look at making the scientifically backed data, resources, and practices more available to help curb this. In not doing so we risk fostering our own minitrue by omission.
    I know one of my pet peeves in today’s infographic and blog world is insignificant sample sizes used in surveys or ‘studies’ and their results being touted as impactful data. We have too many infographics and not enough scientific methodology surrounding us every day; how is a general reader to know the difference between fact and pseudo-fact if we don’t refresh their memories of science class and what constitutes a hypothesis versus a conclusion?
    Plus, and completely self-servingly, I would like to be able to read the initial studies myself (including the methodology) in these fancy journals and not just what a science journalist has gleaned and spun from them. 😉

    1. And yet we have Adam Frank and Kevin Trenberth and Brian Greene here in Seattle with their Science is Fun! circus, hoping to draw people into science by avoiding all the nasty little details that actually make science work. Hey Kids, don’t worry about that Math! Just have Fun with Science!

      More science cartoons, more cartoon science.

    2. Re: methodology and availability

      To a large extent, scientists have shot themselves in the foot over the last several decades by giving less and less space in pubs to both data and methodology. Journals like Nature and Geology hardly publish any methods at all.

      And, having tried to compare data from one paper to another by the same authors, I can assure you NO ONE checks the data, calculations or diagrams. If the Reinhart-Rogoff controversy isn’t enough to tell us that, what else do we need?

      Another problem is that many abstracts, quite frankly, stink. They meander about yapping about all sorts of irrelevant stuff and never put the meat on the table. If you’re looking for the bottom line #s from a published paper – the thing all the conclusions are based on – they’re commonly not in the abstract! In geology, at any rate, lots of abstracts contain promotional material at the expense of the actual results: “This is the first study to ascertain the diameter of a microscopic pinhead while measuring in a vacuum chamber submerged in a blue ceramic bathtub”. (so, then, what was the diameter and why do we care?)

      And for sure, in addition to insignificant sample sizes, we can pull out a few more targets for surveys: conclusions drawn from results that even the author acknowledges are insignificant (hey, they gotta get some miles outta this paper, right?); and any number of flawed assumptions used to generate the surveys in the first place as well as to interpret their results.

      Yeah, there’s a LOT of work to be done.

    1. If people aren’t allowed to make their own analyses and form and express their own opinions, what the hell good is critical thinking?

      🙂

  5. I believe that post-publication peer-review by hundreds of persons knowledgeable about a subject is far more meaningful than pre-publication peer-review by three reviewers who share the biases of a journal’s editor. I’ve seen plenty of garbage published in peer-reviewed journals. The most recent example of true junk science that passed peer-review that I can think of is the recent NEJM article hypothesizing that national rates of dark chocolate consumption are related to the likelihood that a particular nation will produce Nobel prize winners, with no analysis of chocolate consumption by individuals who have won Nobel prizes. No analysis of confounding factors such as national wealth or Nobel Prize Committee biases. Not to mention the common practice by prestigious, peer-reviewed journals of publishing only “successful” clinical trials conducted by contracted firms given explicit instructions to manipulate data to maximize apparent efficacy and minimize risks of adverse side-effects — all conducted under the imprimatur of peer review, as if it were a guarantee of anything meaningful.

    –Anthony

  6. The peer-review process, and the control of that process by for-profit publishers and funders, are what pose a serious risk to the foundations of peer review and the scientific process. A recent study showed that when published papers by prestigious authors were resubmitted to the same journals’ peer-review process under the name of an unknown author from an obscure school, most were rejected. Only 2 in 10 even recognized that they had already published the exact same paper! Peer reviews were often split between “it’s impossible” and “we already knew that.” Give the bloggers a break.
