Art funny v Science funny

My sister and my brother-in-law are locked in a continuous debate about which of the two of them is funnier. My sister maintains that her humour is “art humour” – creative, spontaneous, quick and witty. My brother-in-law is more a science man. He understands how humour works and sets up jokes five lines in advance in normal conversation. They have created an “art funny” and “science funny” dichotomy.

Which made this Wired story about a group of academics studying the nature of humour a pretty interesting read for me – and one that anybody who gets up and does public speaking where they attempt to be funny should take note of.

This Venn Diagram could be the secret to understanding what makes funny funny.

There may be many types of humor, maybe as many kinds as there are variations in laughter, guffaws, hoots, and chortles. But [researcher, Peter] McGraw doesn’t think so. He has devised a simple, Grand Unified Theory of humor—in his words, “a parsimonious account of what makes things funny.” McGraw calls it the benign violation theory, and he insists that it can explain the function of every imaginable type of humor. And not just what makes things funny, but why certain things aren’t funny. “My theory also explains nervous laughter, racist or sexist jokes, and toilet humor,” he told his fellow humor researchers.

Coming up with an essential description of comedy isn’t just an intellectual exercise. If the BVT actually is an unerring predictor of what’s funny, it could be invaluable. It could have warned Groupon that its Super Bowl ad making light of Tibetan injustices would bomb. The Love Guru could’ve been axed before production began. Podium banter at the Oscars could be less excruciating. If someone could crack the humor code, they could get very rich. Or at least tenure.

And dare I say there may be fewer awkward pauses for laughter in sermons (even if I use humour in a sermon I never pause – just because there’s nothing worse than a pause and no laugh (it just beats out a laugh with no pause)).

McGraw and Caleb Warren, a doctoral student, presented their elegantly simple formulation in the August 2010 issue of the journal Psychological Science. Their paper, “Benign Violations: Making Immoral Behavior Funny,” cited scores of philosophers, psychologists, and neuroscientists (as well as Mel Brooks and Carol Burnett).

Their theory is that the hallmarks of humour – laughter and amusement – result from violations that are simultaneously seen as benign. Examples of “violations” include breaches of personal dignity, linguistic norms, social norms, and even moral norms. These violations must not pose a threat to the audience or their worldview.

I like this little sketch that went with the article too:

What do you think – is there any humour that falls outside of the “benign” category? I guess the outer limits of black humour might. Which may explain why some people don’t find it funny – benign is relative.

Tastes like bacon…

You know how they say that pigs are the animal most closely genetically related to humans based on DNA? No? Well, I may have just made that up. You’ll have to google it…

But it turns out that a taste-recognising robot thinks that human flesh tastes like bacon (don’t worry, they didn’t actually feed it a human).

That cute little fella is a robot that is designed to recognise flavours. He’s meant to be used for tasting wine, but Wired tells of a scary moment when somebody put their finger in his mouth:

“The idea is that wineries can tell if a wine is authentic without even opening the bottle, amongst other more obscure uses…like “tell me what this strange grayish lump at the back of my freezer is/was.”

But when some smart aleck reporter placed his hand in the robot’s omnivorous clanking jaw, he was identified as bacon. A cameraman then tried and was identified as prosciutto.”

The inner workings of a bank robber

This is a fascinating account from Wired of the life and times of a successful bank robber. It’ll doubtless become a movie one day. Unless Ocean’s 11 is this guy’s story played by 11 characters (which it’s not, because it’s a remake of a Rat Pack movie).

Blanchard also learned how to turn himself into someone else. Sometimes it was just a matter of donning a yellow hard hat from Home Depot. But it could also be more involved. Eventually, Blanchard used legitimate baptism and marriage certificates — filled out with his assumed names — to obtain real driver’s licenses. He would even take driving tests, apply for passports, or enroll in college classes under one of his many aliases: James Gehman, Daniel Wall, or Ron Aikins. With the help of makeup, glasses, or dyed hair, Blanchard gave James, Daniel, Ron, and the others each a different look.

Over the years, Blanchard procured and stockpiled IDs and uniforms from various security companies and even law enforcement agencies. Sometimes, just for fun and to see whether it would work, he pretended to be a reporter so he could hang out with celebrities. He created VIP passes and applied for press cards so he could go to NHL playoff games or take a spin around the Indianapolis Motor Speedway with racing legend Mario Andretti. He met the prince of Monaco at a yacht race in Monte Carlo and interviewed Christina Aguilera at one of her concerts.

Read the whole thing, it’s worth it.

Bacon Odyssey seeks the best of the pig

GeekDad has launched a great bacon odyssey, aiming to try as many bacon-flavoured products and bacon recipes as they can lay their hands on. It’s been a heady ride filled with porky goodness. This burger looks sensational.

The series is worth watching.

I said science again

I realise that when a Christian starts out a post about flaws in any part of science by saying “I love science” some people see that as analogous to someone prefacing a racist joke with the line “I have a black friend so it’s ok for me to think this is funny.”

I like science – but I think buying into it as a holus-bolus solution to everything is unhelpful. The scientific method involves flawed human agents who sometimes reach dud conclusions. It involves agendas that sometimes make these conclusions commercially biased. I’m not one of those people who think that the word “theory” means that something is a concept or an idea. I’m happy to accept “theories” as “our best understanding of fact”… and I know that the word is used because science has an innate humility that admits its fallibility. These dud conclusions are often ironed out – but it can take longer than it should.

That’s my disclaimer – here are some bits and pieces from two stories I’ve read today…

Science and statistics

It seems one of our fundamental assumptions about science is based on a false premise. The idea that a particular result can be declared a rule because it occurs a “statistically significant” number of times seems to have been based on an arbitrary decision in the field of agriculture in eons past. Picking a null hypothesis and finding an exception is a really fast way to establish theories. It’s just a bit flawed.

ScienceNews reports:

“The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.”

Did you know that our scientific approach, which now works on the premise of rejecting a “null hypothesis” based on “statistical significance”, came from a guy testing fertiliser? And we now use it everywhere.

The basic idea (if you’re like me and have forgotten everything you learned in chemistry at high school) is that you start by assuming that something has no effect (your null hypothesis), and if the result you observe would only happen by chance less than five percent of the time, you conclude that the thing actually does have an effect… because you apply statistics to scientific observation… here’s the story.

While its [“statistical significance”] origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control.

Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works.

Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts — whether testing the health effects of pollutants, the curative powers of new drugs or the effect of genes on behavior. In various forms, testing for statistical significance pervades most of scientific and medical research to this day.
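
If you want to see the ritual in action, here’s a rough sketch of Fisher’s procedure in Python. The crop-yield numbers are completely invented by me for illustration – they’re not from the article:

```python
# A rough sketch of Fisher-style significance testing, with invented
# crop-yield numbers (not from the article).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated yields (tonnes/hectare) for 20 unfertilised and 20 fertilised plots.
control = rng.normal(loc=5.0, scale=0.8, size=20)
fertilised = rng.normal(loc=5.6, scale=0.8, size=20)

# Null hypothesis: fertiliser has no effect, so both samples share a mean.
# The P value is the probability of seeing a difference this big by fluke.
t_stat, p_value = stats.ttest_ind(fertilised, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("p < .05 - declare 'statistically significant', reject the null")
else:
    print("p >= .05 - fail to reject the null hypothesis")
```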

A better starting point

Thomas Bayes, an 18th-century clergyman, came up with a better model of hypothesising. It basically involves starting with an educated guess, conducting experiments, and using your premise as a filter for the results. This introduces the murky realm of “subjectivity” into science – so some purists don’t like it.

Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics.

“Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity.”

Luckily for those advocating this Bayesian method it seems, based on separate research, that objectivity is impossible.
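
And for contrast, here’s a toy Bayesian version – again with numbers I’ve made up. Instead of a null hypothesis you start with an educated guess (a “prior”), then let the data drag that belief towards the truth:

```python
# A toy sketch of Bayesian updating, with invented numbers.
from scipy import stats

# Prior: an educated guess that the treatment works about half the time,
# encoded as a Beta distribution (a "degree of belief", not a frequency).
prior = stats.beta(a=2, b=2)

# Experimental data: 14 successes in 20 trials.
successes, failures = 14, 6

# Beta prior + binomial data = Beta posterior (conjugacy does the work).
posterior = stats.beta(a=2 + successes, b=2 + failures)

print(f"prior mean belief:      {prior.mean():.2f}")
print(f"posterior mean belief:  {posterior.mean():.2f}")
print(f"P(success rate > 50%):  {1 - posterior.cdf(0.5):.2f}")
```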

Doing science on science

Objectivity is particularly difficult to attain because scientists are apparently prone to rejecting findings that don’t fit with their hypothetical expectations.

Kevin Dunbar is a scientist researcher (a researcher who studies scientists). He has spent a significant amount of time studying their practices, having been given full access to teams from four laboratories. He read grant submissions, reports, and notebooks; he spoke to scientists, sat in on meetings, eavesdropped… his research was exhaustive.

These were some of his findings (as reported in a Wired story on the “neuroscience of screwing up”):

“Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.””

It seems the Bayesian model has been taken slightly too far…

The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”

You’d think that the objective scientists would accept these anomalies and change their theories to match the facts… but the arrogance of humanity creeps in a little at this point… if an anomaly arose consistently the scientists would blame the equipment, they’d look for an excuse, or they’d dump the findings.

Wired explains:

Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

Dunbar’s research suggested that the solution to this problem comes through a committee approach, rather than through the individual (which I guess is why peer review is where it’s at)…

Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored.

What turned out to be so important, of course, was the unexpected result, the experimental error that felt like a failure. The answer had been there all along — it was just obscured by the imperfect theory, rendered invisible by our small-minded brain. It’s not until we talk to a colleague or translate our idea into an analogy that we glimpse the meaning in our mistake.

Fascinating stuff. Make sure you read both stories if you’re into that sort of thing.

Coffee chemistry

Wired has a fascinating look at the chemicals at play in your daily espresso.

Here’s my favourite part of the chemical equation (if you thought caffeine you were wrong):

Trigonelline
Chemically, it’s a molecule of niacin with a methyl group attached. It breaks down into pyridines, which give coffee its sweet, earthy taste and also prevent the tooth-eating bacterium Streptococcus mutans from attaching to your teeth. Coffee fights the Cavity Creeps.

New Rules

Wired has a great little feature called New Rules for the Highly Evolved – it includes contributions from Brad Pitt.

It’s a feature providing all sorts of tips for how to use social technology in a socially acceptable way. I’m sure there are some rules that I’m breaking. But here are my favourites.

There’s this graph on when it’s appropriate to reveal TV spoilers…

And these great little articles (there are more, but I wasn’t really enamoured by them)…

  1. Don’t blog or tweet anything with more than half a million hits – I’m probably guilty as charged, though I see my blog as a repository of things I’ve found on the internet, and while I care deeply about you, dear reader, I’m not worried if you’ve seen stuff before.

    “The things we forward, tweet, or post send a message about who we are,” Berger says. “And you don’t want the message to be that you’re behind the curve.”

  2. Delete stuff you don’t want on your wall from your online profiles – While I’m all for freedom of speech the thing that annoys me most (almost) is being misrepresented. I do enough damage to my personal branding on my own, without people sabotaging it.
    An example: people using my phone to send stupid SMS’s to girls I was interested in.
    You’re judged as much by your associations as by your actions so take heed of this advice:

    The only way out is to police your wall, even if that’s awkward. Don’t be shy about deleting untoward graffiti, eliminating your name from tagged photos, or even asking friends to remove incriminating pics that weren’t meant for public consumption. “You might damage a friendship,” Donath says, “but that’s one of the costs of the collapse of social circles.” Then again, you could migrate to MySpace. Nobody pays attention to anything written there.

  3. And lastly, the great social conundrum of our time – knowing which ringtone to choose – that won’t ever be a problem again thanks to this handy flow chart.

Free thinking

Andrew and I have continued to discuss the implications of my “open source” Christian music idea.

Clearly both sides of the argument contain truths – particularly when applied to Christian music. Songwriters want their ideas spread as widely as possible, while they also need to be paid to write if they do it full time. There’s another paradigm to consider when it comes to whether or not God “owns” work produced through spiritual gifts. If he does, then he’d own the intellectual property, and the copyright.

It’s part of a much bigger and broader argument about open source that’s going on in the upper echelons of thoughtful journalism – and a lot of the discussion is about the future of journalism and paid media in the context of the free media offered by the web.

Malcolm Gladwell – one of my favourite authors – is engaged in a debate with Wired magazine editor Chris Anderson, author of a book called “Free”.

Anderson wrote his book on the premise that “ideas and information” want to be “free”… that’s a nutshell summary.

Here’s Anderson’s take on music and the Internet as quoted in Gladwell’s review of the book (which was negative)…

“In the digital realm you can try to keep Free at bay with laws and locks, but eventually the force of economic gravity will win.” To musicians who believe that their music is being pirated, Anderson is blunt. They should stop complaining, and capitalize on the added exposure that piracy provides by making money through touring, merchandise sales, and “yes, the sale of some of [their] music to people who still want CDs or prefer to buy their music online.”

It’s a great article. Here’s another interesting passage from Anderson’s book, again quoted by Gladwell…

“Anderson describes an experiment conducted by the M.I.T. behavioral economist Dan Ariely, the author of “Predictably Irrational.” Ariely offered a group of subjects a choice between two kinds of chocolate—Hershey’s Kisses, for one cent, and Lindt truffles, for fifteen cents. Three-quarters of the subjects chose the truffles. Then he redid the experiment, reducing the price of both chocolates by one cent. The Kisses were now free. What happened? The order of preference was reversed. Sixty-nine per cent of the subjects chose the Kisses. The price difference between the two chocolates was exactly the same, but that magic word “free” has the power to create a consumer stampede. Amazon has had the same experience with its offer of free shipping for orders over twenty-five dollars. The idea is to induce you to buy a second book, if your first book comes in at less than the twenty-five-dollar threshold. And that’s exactly what it does. In France, however, the offer was mistakenly set at the equivalent of twenty cents—and consumers didn’t buy the second book. “From the consumer’s perspective, there is a huge difference between cheap and free,” Anderson writes. “Give a product away, and it can go viral. Charge a single cent for it and you’re in an entirely different business. . . . The truth is that zero is one market and any other price is another.”

Gladwell’s critique cites YouTube as an example.

“Why is that? Because of the very principles of Free that Anderson so energetically celebrates. When you let people upload and download as many videos as they want, lots of them will take you up on the offer. That’s the magic of Free psychology: an estimated seventy-five billion videos will be served up by YouTube this year. Although the magic of Free technology means that the cost of serving up each video is “close enough to free to round down,” “close enough to free” multiplied by seventy-five billion is still a very large number. A recent report by Credit Suisse estimates that YouTube’s bandwidth costs in 2009 will be three hundred and sixty million dollars. In the case of YouTube, the effects of technological Free and psychological Free work against each other.”

Chris Anderson has since responded to Gladwell’s criticism on his blog. He uses blogging and bloggers getting book deals as a case study. Interesting stuff and worth a read. Seth Godin – the “guru” – has chimed in on the subject declaring Anderson right and Gladwell wrong. The Times Online’s tech blog predictably took the side of established journalism and declared Gladwell the winner.

Fun with science

Squirt your (ex-)friends in the face with this “now that you mention it, it’s obvious” practical joke from Wired. It harnesses the awesome powers of science…

Geek checklist

Continuing the vein of discussion about whether I’m a geek or a nerd (and in fact whether the distinction is necessary) – here’s a list of ten habits of a geek spouse from Wired. And here’s how I fare…
1. Punning.
Guilty as charged. Really, really guilty. I had no idea that this was a geek thing. 1 point.
2. Swearing in Klingon.
Nope. Not interested. Not really interested in sci-fi – but that doesn’t stop me wearing my Star Wars inspired “Milk I am your Father” shirt. 0 points.
3. Weird or over the top ways of celebrating mainstream holidays.
Not that I can think of off the top of my head. I do however celebrate federal budget night with an annual beer and budget celebration featuring only myself (and my wife who is there in presence not spirit). I’ll give myself half a point.
4. Dissecting movies.
I’m not really a movie geek/film buff. In fact I like really stupid movies that would no doubt annoy those who are film geeks. I certainly don’t point out continuity errors or any time a movie breaks natural law. So no points.
5. Wearing obscure geeky t-shirts to “normal places”.
Well yes, I do that. Lots. It shows just how clever you are. If you understand them. It’s like an idiot filter. 1 point.

6. Requiring extra space in the house for geeky things.
Yes. I have a coffee machine that’s more than a metre wide. I have a breadmaker set up on the back patio for roasting coffee, and I have four archaic consoles sitting in our TV unit. 1 point.
7. Geeky toys/decorations can be hard to explain to kids.
Well I don’t have kids. But I can’t imagine that explaining why I own a plastic Bob Hawke drink dispenser will be easy. 1 point.
8. Looking up information while a discussion/argument is still in progress.
In the internet age who doesn’t do this? Really? Maybe it is just me. Very, very guilty. Especially when I know I’m right and I’m just doing it to back up my argument. 1 point.
9. Needing to watch certain TV shows ASAP to avoid spoilers.
Well, I actively seek out spoilers at times – just to stay ahead of the curve. But there are times when I guess this could be true. 1/2 a point.
10. Geeky projects that take over the house and whole weekends.
I guess ripping apart a breadmaker to install a switch bypassing the circuit board is pretty geeky. I like little DIY challenges – like the restoration of my coffee machine. 1 point.

Things aren’t looking so good. Let’s count up those points. Drum roll.

7 out of 10 by my count (1 + 0 + ½ + 0 + 1 + 1 + 1 + 1 + ½ + 1). I guess that makes me an annoying geek spouse.

Nerd theology

Kevin Kelly is the founding executive editor of Wired magazine. As far as nerds go, he’s pretty cool. He wrote an interesting little piece on “nerd theology”.

It’s worth a read if you’re a nerd, and at all interested in God.

His basic premise is this:

“We investigate the nature of intelligence, not by probing human heads, but by creating artificial intelligences. We seek truth not in what we find, but in what we can create. We have become mini-gods. And thus we seek God by creating gods.”

If you’re not a nerd – or not at all interested – I’d stop reading there, because this gets kind of confusing. But it is also kind of interesting…

In nerd terms, god is a being function. We could write it like this:

Let g (god) = s (initial nothing state) -> s1 (something state)
Or g = s -> s1

Now the universe we humans occupy is s1. We are inhabitants of the something state produced by some god function. Christians like myself see a recursive nature in God. God (g), the creator, created humans in his image, and so we too are creators. We can be designated as g1.

By means of our technology, we are becoming derivative gods ourselves. We are making our own tiny somethings out of nothing. True, our nothings are not as nothing as the nothing we came from, but we are getting better at starting from scratch, and producing more elaborative creations once we start creating. Our godhood could be described like this:

g1 = s1 -> s2

That is, we derivative gods began in a made world and created a second-order something. Those somethings might have once been astoundingly realistic paintings, or perhaps a marble statue of a hero, or more recently a VR world crowded with fantastical creatures.

Someday, not too far away, we will create a creature (a robot) with its own mind (yes, a different mind) and its own free will that is capable of taking the next step and creating its own creation. In other words our little man will, like us, make its own little man or its own made-up world.
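
Just for fun, here’s how that recursion might look in Python – my own sketch, not Kelly’s:

```python
# A playful sketch (mine, not Kelly's) of the recursive being function:
# each creator maps the state it inhabits to a new, derivative something.
def g(s: str) -> str:
    """g: s -> s1 - the original being function."""
    return f"s1, a something made from {s}"

def g1(s1: str) -> str:
    """g1: s1 -> s2 - derivative gods, creating within a made world."""
    return f"s2, a second-order something made inside {s1}"

s1 = g("s, the initial nothing state")
s2 = g1(s1)
print(s2)
# And so on: a mind we create (g2) could map s2 to s3...
```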