Tuesday, November 10, 2009

Don't believe everything you read about genes and disease in prestigious journals like Science and Nature.


A recent article describes and highlights some of the pressures scientists and researchers face in a professional setting, pressures that can lead to the research and publication of falsehoods. Some of the reasons why many research results are outright wrong are given in the excerpts below (a small simulation after the excerpts illustrates the first point):

* Scientists behaving badly:
"... Outright scientific fraud is rare, but less deviant behavior may be much more common. For example, researchers may run multiple statistical tests on their data: they keep analysing the results in slightly different ways (known as "data mining") until they get a P-value less than 0.05. This is tempting because it is much easier to get one's research published if the findings are "statistically significant" (i.e. the P-value is less than 0.05) – a phenomenon known as "publication bias".

* Pressure to perform:
"...

The social environment in which research occurs places scientists under pressure to perform. These institutional pressures have the well-intentioned aim of encouraging high productivity and performance, measured by the amount and quality of publications, and success in attracting research funding from government and charitable agencies.

However, there is an inherent tension between the scientific process, where success is often unpredictable, and the means by which research productivity is frequently assessed. The criteria currently used to assess a scientist's career and make decisions about future funding, salary and tenure may be an important factor encouraging departure from the ideals of scientific integrity.

But institutional pressures of this sort are unlikely to be solely responsible...

..."

More about this article here: [The Link]
A related article from PLoS Medicine: [Why Most Published Research Findings Are False]


Friday, November 6, 2009

Promises, Predictions, Visions ... is it leading to a Promissory Culture?

As researchers (or otherwise) we all make predictions about what happens next. Sometimes we keep those predictions to ourselves, sometimes we share them with our friends and colleagues, and sometimes we share them with the world through the media.

Of course, sharing with your friends and colleagues is one thing, but sharing with a much bigger audience is another matter altogether: then you have to think about credibility at all levels (personal, scientific, group, institutional, etc.). Couple that with decision/policy making and politics, and it's a whole other ball game.

I recently came across an article in The Scientist that talks about these issues. A couple of excerpts from the article:

...
"Of course, scientists have a strong incentive to make bold predictions—namely, to obtain funding, influence, and high-profile publications. But while few will be disappointed when worst-case forecasts fail to materialize, unfulfilled predictions—of which we’re seeing more and more—can be a blow for patients, policy makers, and for the reputation of science itself."
...
"..“soundbite” media culture that demands uncomplicated, definitive, and sensational statements plays a significant role. “It’s [the media] who put the most pressure on scientists to make predictions,” he says. And in a radio or TV interview that allows perhaps only 10 or 20 seconds for an answer, “it’s very easy then to inadvertently mislead.”

But it might also pay scientists—financially and politically—to go along with such demands, and to indulge in what Joan Haran, Cesagen Research Fellow at Cardiff University, UK, diplomatically calls “discursive overbidding,” whereby they talk up the potential value of work for which they seek the support of funds, changes in legislation or public approval."

...

The article also includes useful tips on how to predict responsibly:

1. Avoid simple timelines
2. Learn from history
3. State the caveats
4. Remember what you don't know


More of the article here: [The Link]


Tuesday, September 29, 2009

The last days of the polymath

Monomaths vs. polymaths, specialists vs. generalists, depth vs. breadth in scientific knowledge. As many of you might have noticed, the degree of specialization required in many technical fields is very high. There is a high barrier to entry in most fields, requiring years of dedicated and focused work to make any meaningful contribution (no matter how small). An interesting article discusses these issues.

"The question is whether their [polymaths] loss has affected the course of human thought. Polymaths possess something that monomaths do not. Time and again, innovations come from a fresh eye or from another discipline. Most scientists devote their careers to solving the everyday problems in their specialism. Everyone knows what they are and it takes ingenuity and perseverance to crack them. But breakthroughs—the sort of idea that opens up whole sets of new problems—often come from other fields. The work in the early 20th century that showed how nerves work and, later, how DNA is structured originally came from a marriage of physics and biology. Today, Einstein’s old employer, the Institute for Advanced Study at Princeton, is laid out especially so that different disciplines rub shoulders. I suspect that it is a poor substitute.

Isaiah Berlin once divided thinkers into two types. Foxes, he wrote, know many things; whereas hedgehogs know one big thing. The foxes used to roam free across the hills. Today the hedgehogs rule."

More about this here: [The Link]

Friday, August 28, 2009

Revision Control Systems

Whether it is the source code of a software development project or the drafts of an article for a journal or conference proceedings, we inevitably create different versions of our documents and code. Instead of simply naming them v1, v2, or some similar scheme, there are many tools that help us maintain different versions more efficiently. Some of them are mentioned here:

Both Git and Mercurial are based on a distributed peer-to-peer model, while SVN and CVS are based on a centralized client-server model. A review of these revision control systems is presented in an article in Communications of the ACM, September 2009 issue. The main conclusions of this survey are reproduced here:
"Choosing a revision-control system is a question with a surprisingly small number of absolute answers. The fundamental issues to consider are what kind of data your team works with, and how you want your team members to interact. If you have masses of frequently edited binary data, a distributed revision- control system may simply not suit your needs. If agility, innovation, and remote work are important to you, the distributed systems are far more likely to suit your needs; a centralized system may slow your team down in comparison.

There are also many second-order considerations. For example, firewall management may be an issue: Mercurial and Subversion work well over HTTP and with SSL (Secure Sockets Layer), but Git is unusably slow over HTTP. For security, Subversion offers access controls down to the level of individual files, but Mercurial and Git do not. For ease of learning and use, Mercurial and Subversion have simple command sets that resemble each other (easing the transition from one to the other), whereas Git exposes a potentially overwhelming amount of complexity. When it comes to integration with build tools, bug databases, and the like, all three are easily scriptable. Many software development tools already support or have plug-ins for one or more of these tools.

Given the demands of portability, simplicity, and performance, I usually choose Mercurial for new projects, but a developer or team with different needs or preferences could legitimately choose any of them and be happy in the long term. We are fortunate that it is easy to interoperate among these three systems, so experimentation with the unknown is simple and risk-free."

More of this from the article here: Making Sense of Revision Control Systems, by Bryan O'Sullivan
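The excerpt's point that all three systems are "easily scriptable" is worth a quick illustration. Here is a minimal sketch, assuming Mercurial (hg) is installed and on the PATH; the repository, file name, and commit message are made up for the demo.

    import os
    import subprocess
    import tempfile

    def hg(args, repo):
        # Run a Mercurial command inside the given repository; return its output.
        return subprocess.run(["hg"] + args, cwd=repo, check=True,
                              capture_output=True, text=True).stdout

    repo = tempfile.mkdtemp(prefix="hg-demo-")   # throwaway repository
    hg(["init"], repo)

    with open(os.path.join(repo, "notes.txt"), "w") as f:
        f.write("first draft\n")                 # a file to put under version control

    hg(["add", "notes.txt"], repo)
    hg(["commit", "-m", "first draft", "-u", "demo"], repo)
    print(hg(["log"], repo))                     # show the newly created changeset

The same few lines work nearly unchanged with Git or Subversion by swapping the command names, which is part of why build tools and bug databases integrate with all three so easily.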

Managing bibliographies

A well-maintained reference list is one of the most indispensable tools a researcher can possess, whether the references are journal articles, conference papers, books, web clippings, etc. Many tools are available for researchers to manage their reference lists. Here are a few:

  • BibTeX formatted text files: www.bibtex.org Very useful when writing articles using LaTeX. You can also use various BibTeX tools for searching and for converting to other formats (see the sample entry after this list). Bibtex Tools
  • Mendeley: www.mendeley.com An integrated tool where you can view PDFs, add metadata, and extract references from various publication databases based on DOI, PubMed, arXiv, etc. It also has a web interface (linked to a web account) through which you can view and share publications.
  • Zotero: www.zotero.org Similar to Mendeley and offers pretty much the same features, but this one is offered as a Firefox browser extension, while Mendeley is a standalone program. Not sure which one is better; both are under active development (frequent updates with new features). The jury is still out, so decide for yourself which one suits you better.
  • Besides the above, there are a few commercial packages for managing bibliographies. A comparison of many bibliography management tools is mentioned here.
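For those new to the BibTeX format mentioned in the first bullet, an entry is just a typed record in a plain text file; the example below is entirely made up for illustration. In a LaTeX document you would cite it with \cite{doe2009example} and the formatted bibliography is generated automatically.

    @article{doe2009example,
      author  = {Jane Doe and Richard Roe},
      title   = {An Illustrative Article Title},
      journal = {Journal of Examples},
      year    = {2009},
      volume  = {12},
      pages   = {34--56}
    }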

Tuesday, July 14, 2009

When science meets politics and policy, the outcome may depend more on values than on objectivity

An interesting article was published recently about the role of scientific enquiry when mixed into policy making and politics. The experience it recounts shows that the mix is not straightforward (as most of us suspected!). It is (yet another) motivation for more science education and scientific familiarity among the general public. But it also suggests that scientists should be a bit smarter about engaging in public debates. One insight the article clearly points to: frame the context and motivate the objectivity of science with respect to what people care about, rather than imposing two demands on the public simultaneously, namely to learn more about science and its methods and, at the same time, to engage in a debate. The two issues are separate and best handled separately.

Here is an excerpt from the article:
"The Battle of Bull Run had finally ended. The scientific debate over the effects of logging became a moot point. The long and arduous road taken 20 years earlier by scientists in search of the truth ended abruptly with a political decision. What the public valued most was clean, safe drinking water secured for themselves and their children’s children. Deeply troubled by the sudden and unexpected failure of their drinking-water source, Portlanders simply decided that waiting for scientific answers was not worth further risks."

More of this article here: [The Link]

Sunday, October 7, 2007

Role of scientists in society

Pielke spells out the choices scientists must make if they wish “to play a positive role in policy and politics and contribute to the sustainability of the scientific process.” He lists four “idealized roles” scientists can adopt, each of which reflects assumptions about the nature of science and democratic policymaking.

1. The pure scientist is concerned with science for its own sake and seeks only to uncover scientific truths, regardless of their policy implications. Such a scientist has no direct connection with the policymaking process; he is content to remain cloistered in his lab while others hash out policy.

2. The second idealized role for scientists in policymaking is less detached: the science arbiter is a bit more engaged with the practical world, providing answers to policymakers’ scientific questions. He wants to ensure that science is relevant to policymaking, but in a disinterested way. He does not wish to influence the direction of policy; it is enough to know that policymakers will make decisions informed by accurate scientific assessments.

3. The third role in Pielke’s typology is the issue advocate, who pays more direct attention to policy, using science as a tool to move it in the direction he prefers. He may work for an overt advocacy organization, such as a think tank, trade association, or environmental activist group, or his advocacy may be more covert. In either case, he seeks to marshal scientific evidence and arguments in support of a specific cause.

4. Finally, the honest broker is attentive to policy alternatives but seeks to inform policy, not direct it. “The defining characteristic of the honest broker of policy alternatives,” Pielke explains, “is an effort to expand (or at least clarify) the scope of choice for decision-making in a way that allows for the decision-maker to reduce choice based on his or her own preferences and values.” The honest broker’s aim is not to dictate policy outcomes but to ensure that policy choices are made with an understanding of the likely consequences and relevant tradeoffs. Like the issue advocate, the honest broker explicitly engages in the decision-making process, but unlike the issue advocate, the honest broker has no stake or stated interest in the outcome.

This is based on the book:
The Honest Broker: Making Sense of Science in Policy and Politics

Roger A. Pielke, Jr.
Cambridge University Press, 2007


More review and comments on this book here: [The Link]