I said science again

I realise that when a Christian starts a post about flaws in any part of science by saying “I love science”, some people see that as analogous to someone prefacing a racist joke with the line “I have a black friend so it’s ok for me to think this is funny.”

I like science – but I think buying into it as a holus-bolus solution to everything is unhelpful. The scientific method involves flawed human agents who sometimes reach dud conclusions. It involves agendas that sometimes make these conclusions commercially biased. I’m not one of those people who think that calling something a “theory” means it’s just a concept or an idea. I’m happy to accept “theories” as “our best understanding of the facts”… and I know the word is used because science has an innate humility that admits its fallibility. These dud conclusions are often ironed out – but it can take longer than it should.

That’s my disclaimer – here are some bits and pieces from two stories I’ve read today…

Science and statistics

It seems one of our fundamental assumptions about science is based on a false premise. The idea that a result can be declared a rule because it passes a test of “statistical significance” seems to have been based on an arbitrary decision in the field of agriculture, eons ago. Picking a null hypothesis and finding an exception is a really fast way to establish theories. It’s just a bit flawed.

ScienceNews reports:

“The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.”

Did you know that our scientific approach, which now works on the premise of rejecting a “null hypothesis” based on “statistical significance”, came from a guy testing fertiliser? And we now use it everywhere.

The basic idea (if you’re like me and have forgotten everything you learned in chemistry at high school) is that you start by assuming that something has no effect (your null hypothesis), and if your results would occur by chance less than five percent of the time under that assumption, you reject it and conclude that the thing actually does have an effect… because you apply statistics to scientific observation… here’s the story.

While its [“statistical significance”] origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control.

Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works.

Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts — whether testing the health effects of pollutants, the curative powers of new drugs or the effect of genes on behavior. In various forms, testing for statistical significance pervades most of scientific and medical research to this day.
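If, like me, you find this easier to follow with numbers in front of you, here’s a rough sketch of the recipe Fisher was describing – a minimal Python example with made-up crop yields and the usual 0.05 cut-off. Nothing in it comes from the article; it’s just an illustration of how the null hypothesis, the P value and the “significant” verdict fit together.

```python
# A minimal sketch of Fisher-style significance testing.
# The yields below are invented for illustration - not data from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical crop yields: 20 unfertilised plots and 20 fertilised plots.
control = rng.normal(loc=50, scale=5, size=20)     # plots left alone
fertilised = rng.normal(loc=53, scale=5, size=20)  # plots given fertiliser

# Null hypothesis: fertiliser makes no difference to the mean yield.
# The P value estimates how often a difference at least this large
# would show up by chance alone if that null hypothesis were true.
t_stat, p_value = stats.ttest_ind(fertilised, control)

if p_value < 0.05:  # Fisher's arbitrary cut-off
    print(f"p = {p_value:.3f} - 'statistically significant', reject the null")
else:
    print(f"p = {p_value:.3f} - not significant, fail to reject the null")
```

The whole verdict hangs on that last comparison, and 0.05 isn’t derived from anything – it’s just the line Fisher drew.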

A better starting point

Thomas Bayes, a clergyman in the 18th century, came up with a better model of hypothesising. It basically involves starting with an educated guess (a prior), conducting experiments, and then updating that guess in light of the results. This introduces the murky realm of “subjectivity” into science – so some purists don’t like it.

Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics.

“Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity.”
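To show what that “degree of belief” business looks like in practice, here’s a small sketch of Bayesian updating – again my own made-up illustration (a simple beta-binomial setup with invented plot counts), not something from either article. You start with a prior guess, feed in the data, and get back an updated belief.

```python
# A minimal sketch of Bayesian updating - invented numbers, for illustration only.
# Question: how strongly should we believe the fertiliser improves a plot's yield?
from scipy import stats

# Prior "degree of belief": Beta(2, 2) says we loosely expect the improvement
# rate to sit somewhere around 50%, but we aren't at all sure.
prior_a, prior_b = 2, 2

# Hypothetical experiment: 14 of 20 fertilised plots beat last year's yield.
improved, trials = 14, 20

# Conjugate update: with a beta prior and binomial data,
# the posterior is also a beta distribution.
posterior = stats.beta(prior_a + improved, prior_b + (trials - improved))

print(f"Prior mean belief:     {prior_a / (prior_a + prior_b):.2f}")
print(f"Posterior mean belief: {posterior.mean():.2f}")
print(f"P(improvement rate > 50% given the data) = {1 - posterior.cdf(0.5):.2f}")
```

The “subjective” bit the purists object to is that Beta(2, 2) starting point: a different educated guess gives a different answer, at least until the data piles up and swamps the prior.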

Luckily for those advocating this Bayesian method, it seems – based on separate research – that objectivity is impossible.

Doing science on science

Objectivity is particularly difficult to attain because scientists are apparently prone to rejecting findings that don’t fit with their hypothetical expectations.

Kevin Dunbar is a scientist-researcher (a researcher who studies scientists). He has spent a significant amount of time studying the practices of scientists, having been given full access to teams from four laboratories. He read grant submissions, reports, and notebooks; he spoke to scientists, sat in on meetings, and eavesdropped… his research was exhaustive.

These were some of his findings (as reported in a Wired story on the “neuroscience of screwing up”):

“Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.””

It seems the Bayesian model has been taken slightly too far…

The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”

You’d think that objective scientists would accept these anomalies and change their theories to match the facts… but the arrogance of humanity creeps in a little at this point… if an anomaly arose consistently, the scientists would blame the equipment, look for an excuse, or dump the findings.

Wired explains:

Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

Dunbar’s research suggested that the solution to this problem comes through a committee approach, rather than through the individual (which I guess is why peer review is where it’s at)…

Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored.

What turned out to be so important, of course, was the unexpected result, the experimental error that felt like a failure. The answer had been there all along — it was just obscured by the imperfect theory, rendered invisible by our small-minded brain. It’s not until we talk to a colleague or translate our idea into an analogy that we glimpse the meaning in our mistake.

Fascinating stuff. Make sure you read both stories if you’re into that sort of thing.

Does skepticism necessitate atheism?

I am a skeptic. Proudly. I treat all truth claims with an element of distrust – and many with disdain. But I am also a Christian. And by definition a theist, and a believer in the supernatural. My skepticism extends to all other religious claims – and many claims made by subsets of Christianity. What the relationship is between Christian belief and belief in the realm of ghosts, spiritual warfare and other supernatural issues is a matter for another post. Maybe.

I think I might have previously linked to this Clive James piece on the value of skepticism – if not, I apologise. It’s mostly about climate change skepticism, though a little bit is about golf ball chips (a phenomenon that occurs if you have a golf course next to a potato farm).

Skepticism is great – but if you hold onto it counter to the evidence you’re not a skeptic – you’re an idiot.

The golf-ball crisp might look like a crisp, and in a moment of delusion it might taste like a crisp, and you might even swallow the whole thing, rather proud of the strength it took to chew. But if there is a weird aftertaste, it might be time to ask yourself if you have not put too much value on your own opinion. The other way of saying "What do I know?" is "What do I know?".

Which rather tangentially brings me to the purpose of this rant. I read this article on the Friendly Atheist about the relationship between skepticism and atheism – obviously they’re linked… it’s not rocket science to suggest that most atheists are skeptics. It comes with the territory. But do all skeptics have to be atheists?

A series of posts around the atheist blogosphere suggested that the two are inextricably linked – that atheism is a logical by-product of skepticism.

It started with a speech at a camp for skeptics… the speaker then had to defend his claim against some criticism…

But because I have yet to see good evidence — philosophical, scientific, or otherwise — to support religious claims, I live under the assumption there is not a god or gods above,  making me an atheist. I am still open to evidence, just with rigorous philosophical and scientific standards. A perfect example to sum up the co-existence of these labels comes from what Jacob posted in the comments to his piece. “You ask if I am agnostic about Zeus. Yes. Fairies. Yes.” But, then, Jacob is also an aZeusist, and an afairist. That is, he lives without belief in Zeus or fairies.

The problem with all these assertions – and in fact every skeptical assertion – is that each is based on one’s personal standard of evidence. Which is a personal decision. And it should be. If you want to cover your eyes, block your ears and bury your head in the sand to avoid any “evidence” that may change your opinion on any matter – then that is your choice. And I will laugh at you… I mean, love you… even if you are wrong.

So, in the post on the Friendly Atheist the writer made this rather bold claim…

I’ll use ‘skepticism’ to mean the attitude that one should scale confidence in a belief to match the evidence, and ‘atheism’ to mean the lack of belief in a god. With these definitions, the two are clearly related.

Here’s another quote from that piece.

If a person is skeptical, we expect them to embrace atheism because that’s where the evidence leads.

Only when you set fairly narrow parameters for “evidence”… I think by “evidence” you mean that’s where an understanding of the world based on scientific naturalism leads.

For some of us scientific naturalism is a good starting point, but not an end point.

Since the principle of skepticism requires religion to be treated with scrutiny, how should the movement deal with the fact that scrutiny leads to atheism?

What this post is actually saying – and the root of the problem – is that these atheists, who are skeptics, have found the evidence wanting when it comes to the question of God – but not all skeptics put the same faith in that particular evidential methodology.

Here’s how I think those quotes could have been more honestly framed – from a skeptical standpoint – I’ll bold my changes.

If a person is skeptical, I expect them to embrace atheism because that’s where I think the evidence leads them.

Since the principle of skepticism requires religion to be treated with scrutiny, how should I deal with my opinion that scrutiny leads to atheism?

Note the similarity to the quote from Clive James’ piece – what do I know? That’s the question that should be asked in this case. Skepticism is a subjective philosophical position that requires convincing evidence – not some sort of objective standard.

For me, I am a skeptic when it comes to scientific naturalism’s ability to answer all of life’s questions, and I am convinced by the evidence of God’s word, my observations of human nature, and my experience as a believer.

The game they play in heaven

I’ve been enjoying the thread of discussion started at Al Bain’s blog, Paradoxically Speaking – and the follow-up threads on Simone’s… here, here, here, and here.

They’re about a favourite topic of mine – objectivity and absolutes – particularly in relation to aesthetics and, if I’m understanding correctly, how we can objectively define beauty based on the promise of the new creation.

Simone’s gambit in her first comment essentially nailed her definition to the proverbial mast…

“Something is beautiful if we sense (see/hear etc) in it something that reminds us of something we’ll know in eternity.”

I’m not sure I completely buy into this argument. I think there’s beauty in things that don’t last, but it’s a temporal beauty (obviously), and there’s something about the fleeting moment that can be appreciated. Singularity is beautiful in a way that eternity cannot be. I used the example of sport in particular. I don’t know/think that sport will be a huge part of the new creation, and while it should reflect honour and the best parts of human nature that will carry over into heaven, it’s actually fun for reasons that are less eternal. The thrill of competition. The adrenalin rush that comes with a tight finish. A well-executed play. These things are a meaningless chasing after the wind in the eternal scheme of things.

Will we all have equal athletic prowess in the new creation? I guess I’ve always just assumed so – but I haven’t done much thought on the matter.

If we’re all super athletes then sport is going to be a frustrating blend of perfect attack against perfect defence. An irresistible force against an immovable object. How boring. There’ll be no winning. So what’s the point? This is why I’m not worried if they play Rugby in heaven – it seems fitting. Rugby is full of boring stalemates.

Objective reporting

The discussion is continuing on my take on “Disaster Reporting” – which is no longer on the front page. It’s reminded me of an assignment I wrote in my final Journalism subject at uni. It was about objectivity and the state of modern journalism.

“In a sense the intellectual argument for objectivity has been effectively killed by postmodernity. Any coverage of an event is “objective,” so long as the writer presents their view of the facts. A wider, purer form of objectivity is important at an organisational level. The media should represent the public at large; this means representing the diverse range of views and opinions on any issue.”

“The pursuit of objectivity has damaged journalism’s claim to professional status. If journalistic practice is simply a paint-by-numbers process, trained journalists are surplus to requirements. For journalism to be considered anything more than formulaic, and for the press to uphold its essential role in the democratic process, stories must move past the superficial and engage the intellect.”