Corrupted Research - Exposing the Peer Review Process

Offline Optimus

  • Globalist Destroyer
  • Global Moderator
  • Member
  • *****
  • Posts: 12,802
    • GlobalGulag.com
Corrupted Research - Exposing the Peer Review Process
« on: September 16, 2016, 05:17:06 PM »
Corrupted Research - Exposing the Peer Review Process
http://ezinearticles.com/?Corrupted-Research---Exposing-the-Peer-Review-Process&id=808798

By Sydney Ross Singer

When you hear about new medical breakthroughs in the news, you will only hear about peer reviewed research. Peer reviewed means that the work has passed some basic standard of quality. It is treated as the gold standard of research.

But is it real gold, or fool's gold?

Medical research seems especially mystical and awe-inspiring to the average person. The basic concepts of medicine, which aren't really difficult to understand, are deliberately cloaked in Latin terminology and other confusing jargon, making medical knowledge and theory seem out of reach to the common person.

After all, every profession needs to make you think you need their services. Lawyers make the legal system so complex and confusing that the average person is completely helpless without legal assistance. Accountants help the IRS tweak the tax code to make it virtually impossible for the average person to know it all, understand it all, or follow all the changes constantly being made. Doctors have made it so you cannot request medical tests or take drugs without their prescription. You name a profession, and you can see ways it perpetuates itself by disempowering the public.

What about the medical research profession?

One of the most important things to know about medical research is that, above all else, it is a profession. Researchers make their money usually from both salaries and grants. The job of the researcher is to find a sponsor for their special type of research. The more research projects and publications they get, the more sponsors they have, and the higher their income. And if a researcher comes up with a patentable device or drug, there are intellectual property rights to throw into the compensation package.

This means that researchers do not work for free. They are mercenary. There may be very interesting and, by social standards, very important research that needs to be done that they could do. But unless, and until, they are paid to do it, the work does not get done.

This means that the funding sources of research, be it the government or private sources, determine what research is actually done. Most of the money for medical research comes from the private sector, usually drug companies, which is why drugs dominate modern medicine. Government funding is little different, since it comes from agencies that are highly lobbied by drug companies, and are run by doctors trained and paid by drug companies. Medicine is a public-private partnership, giving the pharmaceutical industry government-like power over the culture and its healthcare research.

Research into non-drug alternatives is rarely done for this reason. It is also why medicine claims it knows very little about the causes of most diseases of our time. Researchers care much more about the treatment than the cause, since treatment is profitable for the research sponsors, while knowing the cause can lead to prevention, which translates in medical terminology into "unbillable".

Of course, this is a pretty big scam to pull off. Consider its scope. The public is taxed and begged for donations to pay for medical research that goes into discovering drug treatments that the public will later have to pay incredibly high prices to obtain, and only after paying the doctor for an office visit to get a prescription. And if the drug causes nasty side effects, that only leads to more calls for more money to find newer drugs with different side effects.

Is the public getting a good deal here? How do you know the research is scientifically valid? Where is the quality control?

Since most people have been conditioned into believing that they cannot judge medical research unless they have a Ph.D., M.D., N.D., or other license, the research is evaluated for you by other scientists in the field. This is called peer review.

Scientists doing research, as with all professions, belong to a club of like-minded researchers in the same business, promoting their services and products. They belong to the same kinds of industries, such as universities or large multinational drug corporations. They have the same education, which means they all think alike. The purpose of their organization is to provide standards of practice that are supposed to assure quality. Any research must first be somehow reviewed by the peers of this club to make sure the quality guidelines are met, before the research can be published.

Yet, despite this assurance of quality, the fact is that most of what is considered true today will be discarded as false in the future. "Ninety percent of what you learn in medical school will be out of date and considered obsolete in ten years," we were told by the dean of students when I began medical school. This means that most of what doctors learn is wrong. It also means that the new information which will come in 10 years to replace and update current misconceptions and errors will also be considered obsolete in another ten years' time. This is a powerful indictment of medical research, which seems to produce little more than temporary information.

It also means that the peer review process does not assure truth. It only means that current standards of practice are followed. Currently, this allows conflicts of interest, since most drug research is paid for by the companies that produce and profit from those same drugs. Even research testing drug side effect hazards is paid for by the companies standing to lose, big time, if their drugs are proven unsafe. Since drug companies have their bottom line, and not unselfish service to mankind, as their reason for existing, it is extremely unwise to trust them with research into their own products. Researchers take no oaths of honesty or integrity. They work for whoever pays them, and they are not above fudging the results to get the desired outcome.

This is not good science, of course. But it is science as practiced in a culture that has professionalized research into a profit-making enterprise. It is not, as people fantasize, the sacred trust needed for helping the sick and injured with unselfish devotion. Medical research is about making money by coming up with newly patented drugs to replace the ones that have just gone off-patent and are being sold too cheaply by generic drug competitors.

Peer review does not stop the conflict of interest. Medical journals accept conflict of interest, knowing that it is the way medical research is done. Knowing what research is coming down the pike allows these insiders to get a whiff of new drug developments before the public knows, so they can change their investment portfolio mix for anticipated stock price adjustments.

Peer review also keeps out alternative theories and ways of doing research. All innovation threatens the status quo, and those who control the peer review process, like Supreme Court Justices, can decide which cases to hear and which to ignore. They are gatekeepers of the status quo, which keeps the current powers that be in power. Since the medical peer review boards are the culture's final authority on quality, there is no way to challenge their decisions. The quality of the research may in fact be poor, which is evident when you see how many research articles criticize other, peer reviewed research as being flawed in some way. Any researcher will tell you that lots of bad research gets done and published.

However, it's a publish-or-perish world. Since researchers and their peers are all caught in this same publish-or-perish demand, and review one another's work, they subtly collude to get as much research as they can funded and published. You scratch my back and I'll scratch yours. They argue among themselves in the journals about the quality of their work, and for sure there is some competition among scientists as they solicit grants from the same sources to do pretty much the same thing. But overall there is an understanding that, as peers, united they stand and divided they fall.

Of course, this means that peer review is nothing more than a political arrangement for research workers, like a guild or union. Its goal is to keep control over their field, suppress the competition, and assure continued cash flow. It has nothing to do with science, the systematic search for truth, which must not be tainted by financial motives or tempted by personal gain.

So the next time you hear a news story about some new wonder drug, look for the union label. If it is peer reviewed, there's a ninety percent chance it's wrong.

Sydney Ross Singer is a medical anthropologist and director of the Institute for the Study of Culturogenic Disease, located in Hawaii. His unique form of applied medical anthropology searches for the cultural/lifestyle causes of disease. His working assumption is that our bodies were made to be healthy, but our culture and the attitudes and behaviors it instills in us can get in the way of health. By eliminating these causes, the body is allowed to heal. Since most diseases of our time are caused by our culture/lifestyle, this approach has resulted in many original discoveries into the cause, and cure, of many common diseases. It also makes prevention possible by eliminating adverse lifestyle practices. Sydney works with his co-researcher and wife, Soma Grismaijer, and is the author of several groundbreaking health books.

Sydney's background includes a B.S. in biology from the University of Utah; an M.A. degree from Duke University in biochemistry and anthropology; 2 years of medical school training at UTMB at Galveston, along with Ph.D. training in medical humanities.

Article Source: http://EzineArticles.com/808798
“The Constitution is not an instrument for the government to restrain the people,
it's an instrument for the people to restrain the government.” – Patrick Henry

>>> Global Gulag Media & Forum <<<

Offline Optimus

Re: Corrupted Research - Exposing the Peer Review Process
« Reply #1 on: September 16, 2016, 05:44:52 PM »
Peer review: a flawed process at the heart of science and journals
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new 'disease', female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). 'But,' the news editor wanted to know, 'was this paper peer reviewed?' The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)

WHAT IS PEER REVIEW?


My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying 'The paper looks all right to me', which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you'd expect by chance.
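
As a sanity check on that "tossing a coin" remark, here is a minimal simulation of the classic two-reviewer rule just described. The agreement probability is an invented, illustrative figure, not data from any journal:

```python
import random

def classic_system(paper_is_good, agreement=0.55):
    """Simulate the 'classic' editorial rule described above.

    Each reviewer votes publish/reject. `agreement` is the assumed
    probability that a single vote tracks the paper's true quality;
    0.5 would be a pure coin toss. Both numbers are illustrative.
    """
    def vote():
        return paper_is_good if random.random() < agreement else not paper_is_good

    first, second = vote(), vote()
    if first == second:
        return first          # both reviewers agree: publish or reject
    return vote()             # they disagree: a third reviewer decides

trials = 100_000
correct = sum(classic_system(paper_is_good=True) for _ in range(trials))
print(f"correct decisions: {correct / trials:.2%}")  # ~57% when agreement is 0.55
```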

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked 'publish' and 'reject'. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back, 'How do you know I haven't already done it?'

DOES PEER REVIEW 'WORK' AND WHAT IS IT FOR?


But does peer review 'work' at all? A systematic review of all the available evidence on peer review concluded that 'the practice of peer review is based on faith in its effects, rather than on facts' [2].

But the answer to the question of whether peer review works depends on the question 'What is peer review for?'.

One answer is that it is a method to select the best grant applications for funding and the best papers to publish in a journal. It is hard to test this aim because there is no agreed definition of what constitutes a good paper or a good research proposal. Plus what is peer review to be tested against? Chance? Or a much simpler process? Stephen Lock when editor of the BMJ conducted a study in which he alone decided which of a consecutive series of papers submitted to the journal he would publish. He then let the papers go through the usual process. There was little difference between the papers he chose and those selected after the full process of peer review [1].

This small study suggests that perhaps you do not need an elaborate process. Maybe a lone editor, thoroughly familiar with what the journal wants and knowledgeable about research methods, would be enough. But it would be a bold journal that stepped aside from the sacred path of peer review.

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers. Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust.
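
A rough back-of-the-envelope reading of those error-insertion results: if each reviewer independently catches a given major error about a quarter of the time (a simplifying assumption based on the rates quoted above), adding reviewers helps only slowly. A minimal sketch:

```python
# Assumed, simplified figure: each reviewer independently catches a given
# major error with probability 0.25 (roughly the rate reported above).
catch_rate = 0.25

for reviewers in (1, 2, 3, 5):
    p_missed = (1 - catch_rate) ** reviewers   # the error escapes everyone
    print(f"{reviewers} reviewer(s): error missed {p_missed:.0%} of the time")
# Even with 5 independent reviewers a major error survives about 24% of the time.
```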

THE DEFECTS OF PEER REVIEW


So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.

Slow and expensive

Many journals, even in the age of the internet, take more than a year to review and publish a paper. It is hard to get good data on the cost of peer review, particularly because reviewers are often not paid (the same, come to that, is true of many editors). Yet there is a substantial 'opportunity cost', as economists call it, in that the time spent reviewing could be spent doing something more productive—like original research. I estimate that the average cost of peer review per paper for the BMJ (remembering that the journal rejected 60% without external review) was of the order of £100, whereas the cost of a paper that made it right through the system was closer to £1000.
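
For what it is worth, the two cost figures are consistent under a simple funnel model. The sketch below takes the £100 average and £1000 per published paper from the text; the 10% acceptance rate is an invented assumption chosen to make the arithmetic work:

```python
# Funnel arithmetic behind the two figures quoted above. The £100 average
# and £1000 per published paper are from the text; the 10% acceptance rate
# is an invented assumption.
submissions = 1000
avg_cost_per_submission = 100                  # £, averaged over all submissions
desk_rejected = int(0.60 * submissions)        # rejected without external review
published = int(0.10 * submissions)            # assumed acceptance rate

total_spend = submissions * avg_cost_per_submission
print(f"desk rejections: {desk_rejected}")
print(f"total review spend: £{total_spend:,}")
print(f"cost per published paper: £{total_spend / published:,.0f}")   # £1,000
```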

The cost of peer review has become important because of the open access movement, which hopes to make research freely available to everybody. With the current publishing model peer review is usually 'free' to authors, and publishers make their money by charging institutions to access the material. One open access model is that authors will pay for peer review and the cost of posting their article on a website. So those offering or proposing this system have had to come up with a figure—which is currently between $500 and $2500 per article. Those promoting the open access system calculate that at the moment the academic community pays about $5000 for access to a peer reviewed paper. (The $5000 is obviously paying for much more than peer review: it includes other editorial costs, distribution costs—expensive with paper—and a big chunk of profit for the publisher.) So there may be substantial financial gains to be had by academics if the model for publishing science changes.
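
And the open access arithmetic made explicit; the three dollar figures are quoted in the text, while the annual volume of papers is invented purely for illustration:

```python
# The open access figures quoted above, made explicit. The three dollar
# amounts are from the text; the annual volume is invented for illustration.
cost_per_paper_now = 5000                  # $ the academic community pays today
author_fee_low, author_fee_high = 500, 2500

papers_per_year = 10_000                   # assumed volume
saving_low = (cost_per_paper_now - author_fee_high) * papers_per_year
saving_high = (cost_per_paper_now - author_fee_low) * papers_per_year
print(f"potential annual saving: ${saving_low:,} to ${saving_high:,}")
```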

There is an obvious irony in people charging for a process that is not proved to be effective, but that is how much the scientific community values its faith in peer review.

Inconsistent

People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. I regularly received letters from authors who were upset that the BMJ rejected their paper and then published what they thought to be a much inferior paper on the same subject. Always they saw something underhand. They found it hard to accept that peer review is a subjective and, therefore, inconsistent process. But it is probably unreasonable to expect it to be objective and consistent. If I ask people to rank painters like Titian, Tintoretto, Bellini, Carpaccio, and Veronese, I would never expect them to come up with the same order. A scientific study submitted to a medical journal may not be as complex a work as a Tintoretto altarpiece, but it is complex. Inevitably people will take different views on its strengths, weaknesses, and importance.

So, the evidence is that if reviewers are asked to give an opinion on whether or not a paper should be published they agree only slightly more than they would be expected to agree by chance. (I am conscious that this evidence conflicts with the study of Stephen Lock showing that he alone and the whole BMJ peer review process tended to reach the same decision on which papers should be published. The explanation may be that, as the editor who had designed the BMJ process and appointed the editors and reviewers, he had fashioned them in his image, so it is not surprising that they made similar decisions.)
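
'Only slightly more than chance' is usually quantified with Cohen's kappa, which corrects raw agreement for the agreement two reviewers would reach by luck alone. A worked example with an invented two-reviewer table:

```python
# Cohen's kappa corrects raw agreement for the agreement expected by chance.
# The 2x2 table below (100 papers, two reviewers voting publish/reject) is
# invented purely for illustration.
both_publish, both_reject = 25, 35    # papers where the reviewers agree
a_only, b_only = 20, 20               # papers where they disagree
n = both_publish + both_reject + a_only + b_only

observed = (both_publish + both_reject) / n      # raw agreement: 0.60
p_a = (both_publish + a_only) / n                # reviewer A's publish rate
p_b = (both_publish + b_only) / n                # reviewer B's publish rate
expected = p_a * p_b + (1 - p_a) * (1 - p_b)     # agreement by chance alone

kappa = (observed - expected) / (1 - expected)
print(f"observed {observed:.2f}, chance {expected:.2f}, kappa {kappa:.2f}")
# kappa ~0.19: observed agreement is only slightly better than chance.
```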

Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same paper.

    Reviewer A: 'I found this paper an extremely muddled paper with a large number of deficits.'

    Reviewer B: 'It is written in a clear style and would be understood by any reader.'

This—perhaps inevitable—inconsistency can make peer review something of a lottery. You submit a study to a journal. It enters a system that is effectively a black box, and then a more or less sensible answer comes out at the other end. The black box is like the roulette wheel, and the prizes and the losses can be big. For an academic, publication in a major journal like Nature or Cell is to win the jackpot.

Bias

The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants. The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci. They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.

This is known as the Matthew effect: 'To those who have, shall be given; to those who have not shall be taken away even the little that they have'. I remember feeling the effect strongly when as a young editor I had to consider a paper submitted to the BMJ by Karl Popper. I was unimpressed and thought we should reject the paper. But we could not. The power of the name was too strong. So we published, and time has shown we were right to do so. The paper argued that we should pay much more attention to error in medicine, about 20 years before many papers appeared arguing the same.

The editorial peer review process has been strongly biased against 'negative studies', i.e. studies that find an intervention does not work. It is also clear that authors often do not even bother to write up such studies. This matters because it biases the information base of medicine. It is easy to see why journals would be biased against negative studies. Journalistic values come into play. Who wants to read that a new treatment does not work? That's boring.

We became very conscious of this bias at the BMJ; we always tried to concentrate not on the results of a study we were considering but on the question it was asking. If the question is important and the answer valid, then it should not matter whether the answer is positive or negative. I fear, however, that this bias is not so easily abolished and persists.

The Lancet has tried to get round the problem by agreeing to consider the protocols (plans) for studies yet to be done. If it thinks the protocol sound and if the protocol is followed, the Lancet will publish the final results regardless of whether they are positive or negative. Such a system also has the advantage of stopping resources being spent on poor studies. The main disadvantage is that it increases the sum of peer reviewing—because most protocols will need to be reviewed in order to get funding to perform the study.

Abuse of peer review

There are several ways to abuse the process of peer review. You can steal ideas and present them as your own, or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened. Drummond Rennie tells the story of a paper he sent, when deputy editor of the New England Journal of Medicine, for review to Vijay Soman. Having produced a critical review of the paper, Soman copied some of the paragraphs and submitted it to another journal, the American Journal of Medicine. This journal, by coincidence, sent it for review to the boss of the author of the plagiarized paper. She realized that she had been plagiarized and objected strongly. She threatened to denounce Soman but was advised against it. Eventually, however, Soman was discovered to have invented data and patients, and left the country. Rennie learnt a lesson that he never subsequently forgot but which medical authorities seem reluctant to accept: those who behave dishonestly in one way are likely to do so in other ways as well.

HOW TO IMPROVE PEER REVIEW?


The most important question with peer review is not whether to abandon it, but how to improve it. Many ideas have been advanced to do so, and an increasing number have been tested experimentally. The options include: standardizing procedures; opening up the process; blinding reviewers to the identity of authors; reviewing protocols; training reviewers; being more rigorous in selecting and deselecting reviewers; using electronic review; rewarding reviewers; providing detailed feedback to reviewers; using more checklists; or creating professional review agencies. It might be, however, that the best response would be to adopt a very quick and light form of peer review—and then let the broader world critique the paper or even perhaps rank it in the way that Amazon asks users to rank books and CDs.

I hope that it will not seem too indulgent if I describe the far from finished journey of the BMJ to try and improve peer review. We tried as we went to conduct experiments rather than simply introduce changes.

The most important step on the journey was realizing that peer review could be studied just like anything else. This was the idea of Stephen Lock, my predecessor as editor, together with Drummond Rennie and John Bailar. At the time it was a radical idea, and still seems radical to some—rather like conducting experiments with God or love.

Blinding reviewers to the identity of authors

The next important step was hearing the results of a randomized trial that showed that blinding reviewers to the identity of authors improved the quality of reviews (as measured by a validated instrument). This trial, which was conducted by Bob McNutt, A T Evans, and Bob and Suzanne Fletcher, was important not only for its results but because it provided an experimental design for investigating peer review. Studies where you intervene and experiment allow more confident conclusions than studies where you observe without intervening.

This trial was repeated on a larger scale by the BMJ and by a group in the USA who conducted the study in many different journals. Neither study found that blinding reviewers improved the quality of reviews. These studies also showed that such blinding is difficult to achieve (because many studies include internal clues on authorship), and that reviewers could identify the authors in about a quarter to a third of cases. But even when the results were analysed by looking at only those cases where blinding was successful there was no evidence of improved quality of the review.

Opening up peer review

At this point we at the BMJ thought that we would change direction dramatically and begin to open up the process. We hoped that increasing the accountability would improve the quality of review. We began by conducting a randomized trial of open review (meaning that the authors but not readers knew the identity of the reviewers) against traditional review. It had no effect on the quality of reviewers' opinions. They were neither better nor worse. We went ahead and introduced the system routinely on ethical grounds: such important judgements should be open and accountable unless there were compelling reasons why they could not be—and there were not.

Our next step was to conduct a trial of our current open system against a system whereby every document associated with peer review, together with the names of everybody involved, was posted on the BMJ's website when the paper was published. Once again this intervention had no effect on the quality of the opinion. We thus planned to make posting peer review documents the next stage in opening up our peer review process, but that has not yet happened—partly because the results of the trial have not yet been published and partly because this step required various technical developments.

The final step was, in my mind, to open up the whole process and conduct it in real time on the web in front of the eyes of anybody interested. Peer review would then be transformed from a black box into an open scientific discourse. Often I found the discourse around a study was a lot more interesting than the study itself. Now that I have left I am not sure if this system will be introduced.

Training reviewers

The BMJ also experimented with another possible way to improve peer review—by training reviewers. It is perhaps extraordinary that there has been no formal training for such an important job. Reviewers learnt either by trial and error (without, it has to be said, very good feedback), or by working with an experienced reviewer (who might unfortunately be experienced but not very good).

Our randomized trial of training reviewers had three arms: one group got nothing; one group had a day's face-to-face training plus a CD-ROM of the training; and the third group got just the CD-ROM. The overall result was that training made little difference. The groups that had training did show some evidence of improvement relative to those who had no training, but we did not think that the difference was big enough to be meaningful. We cannot conclude from this that longer or better training would not be helpful. A problem with our study was that most of the reviewers had been reviewing for a long time. 'Old dogs cannot be taught new tricks', but the possibility remains that younger ones could.

TRUST IN SCIENCE AND PEER REVIEW


One difficult question is whether peer review should continue to operate on trust. Some have made small steps beyond into the world of audit. The Food and Drug Administration in the USA reserves the right to go and look at the records and raw data of those who produce studies that are used in applications for new drugs to receive licences. Sometimes it does so. Some journals, including the BMJ, make it a condition of submission that the editors can ask for the raw data behind a study. We did so once or twice, only to discover that reviewing raw data is difficult, expensive, and time consuming. I cannot see journals moving beyond trust in any major way unless the whole scientific enterprise moves in that direction.

CONCLUSION


So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.

Notes


Richard Smith was editor of the BMJ and chief executive of the BMJ Publishing Group for 13 years. In his last year at the journal he retreated to a 15th century palazzo in Venice to write a book. The book will be published by RSM Press [www.rsmpress.co.uk], and this is the second in a series of extracts that will be published in the JRSM.

References


1. Lock S. A Difficult Balance: Editorial Peer Review in Medicine. London: Nuffield Provincial Hospitals Trust, 1985.
2. Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review: a systematic review. JAMA 2002;287: 2784-6.
3. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 1998;280: 237-40.
4. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ 2004;328: 673.
5. Wennerås C, Wold A. Sexism and nepotism in peer-review. Nature 1997;387: 341-3.
6. Peters D, Ceci S. Peer-review practices of psychological journals: the fate of submitted articles, submitted again. Behav Brain Sci 1982;5: 187-255.
7. McIntyre N, Popper K. The critical attitude in medicine: the need for a new ethics. BMJ 1983;287: 1919-23.
8. Horton R. Pardonable revisions and protocol reviews. Lancet 1997;349: 6.
9. Rennie D. Misconduct and journal peer review. In: Godlee F, Jefferson T, eds. Peer Review in Health Sciences, 2nd edn. London: BMJ Books, 2003: 118-29.
10. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. A randomized trial. JAMA 1990;263: 1371-6.
11. Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D, the PEER investigators. Does masking author identity improve peer review quality: a randomised controlled trial. JAMA 1998;280: 240-2.
12. van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomised trial. JAMA 1998;280: 234-7.
13. van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ 1999;318: 23-7.

Offline Optimus

Re: Corrupted Research - Exposing the Peer Review Process
« Reply #2 on: September 16, 2016, 06:16:35 PM »

Thank you for your post. That brings up a very interesting point that I had been meaning to address.


Example 39

Science as a Public Good - Science as preached and practiced by the established higher educational institutions, corporate-funded think tanks, and commercial businesses produces scientific data that is biased at best and deliberately false at worst.


The questionable credibility of the Peer Review System

Experimental error and lack of reproducibility have dogged scientific research for decades. Of even greater concern are proliferating cases of outright fraud.

Medicine and the social sciences are particularly prone to bias, because the observer (presumably a white-coated scientist) cannot so easily be completely removed from his or her subject.

Double-blind tests (where neither the tester nor the subject knows for sure whether a given subject is receiving the real treatment or just a control) are now required for many experiments and trials in both fields.
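
As an illustration of the idea (the coding scheme and names below are hypothetical, not taken from any of the sources in this thread), a double-blind allocation can be sketched in a few lines:

```python
import random

def blind_allocation(subject_ids, seed=0):
    """Randomly assign subjects to 'treatment' or 'control' behind kit codes.

    Returns (assignments, sealed_key): tester and subject see only the kit
    codes in `assignments`; the code-to-arm `sealed_key` is held by an
    independent third party until the trial is unblinded.
    """
    rng = random.Random(seed)
    arms = (["treatment", "control"] * len(subject_ids))[: len(subject_ids)]
    rng.shuffle(arms)
    assignments, sealed_key = {}, {}
    for sid, arm in zip(subject_ids, arms):
        code = f"KIT-{rng.randrange(10**6):06d}"  # opaque label on the kit
        assignments[sid] = code
        sealed_key[code] = arm
    return assignments, sealed_key

codes, key = blind_allocation(["S01", "S02", "S03", "S04"])
print(codes)  # all anyone in the room sees during the trial
```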


The Myth of Science as a Public Good (by Terence Kealey)
https://www.youtube.com/watch?v=C_PVI6V6o-4

Vice Chancellor of the University of Buckingham (Britain's only independent university), Terence Kealey is a vocal critic of government funding of science. His first book, 'The Economic Laws of Scientific Research,' argues that state funding of science is neither necessary nor beneficial, a thesis he developed further in his recently published analysis of the causes of scientific progress, 'Sex, Science and Profits.' In it, he makes the stronger claim that government funding not only fails to benefit science but measurably obstructs scientific progress, whilst presenting an alternative, methodologically individualist understanding of 'invisible colleges' within which science resembles a private, not a public, good.

Recorded at Christ Church, University of Oxford, on 22nd May 2009.


Ref.

For Science's Gatekeepers, a Credibility Gap (2006)
By Lawrence K. Altman, M.D.
http://www.nytimes.com/2006/05/02/health/02docs.html?pagewanted=all&_r=0
Recent disclosures of fraudulent or flawed studies in medical and scientific journals have called into question as never before the merits of their peer-review system.

Impartial judgment by the "gatekeepers" of science: fallibility and accountability in the peer review process.
Hojat M, Gonnella JS, Caelleigh AS.
http://www.ncbi.nlm.nih.gov/pubmed/12652170

Misconduct in science communication and the role of editors as science gatekeepers
https://vimeo.com/86692444

Science Journal Pulls 60 Papers in Peer-Review Fraud
By Henry Fountain, July 10, 2014
http://www.nytimes.com/2014/07/11/science/science-journal-pulls-60-papers-in-peer-review-fraud.html

Report finds massive fraud at Dutch universities
http://www.nature.com/news/2011/111101/full/479015a.html

Scientific fraud, sloppy science – yes, they happen
http://theconversation.com/scientific-fraud-sloppy-science-yes-they-happen-13948
