I made a fun promo video for Ars Technica’s awesome science video contest, which you should enter. I visualized entropy using the oldest trick in the film-school student’s book: performing backwards and then reversing the footage in editing. No fancy equipment, just a plain ol’ camcorder and Final Cut Pro (which is expensive, but I think you could do all of this with iMovie or Final Cut Express too).

Continue reading ‘Behind the scenes of “A Short Film about Entropy”’

Fast Company’s design blog, Co.Design, let me go all Thinky McThinkersons in response to an essay on The Atlantic that was making the rounds earlier this week. That piece somewhat incoherently argued that because web design is mostly annoying, users are better off without it. Or something. In any case, I disagreed.

Obviously Dylan Tweney is a smart, tech-savvy dude and not an idiot. (I think I pitched him once upon a time, and he was really nice about the fact that he had no budget for freelancers.) He just seems fed up with the state of web design, and sees content-friendly tools like Readability as a revolt against it. And I’m right there with him! But Readability is design, too. It’s better design — for a specific kind of use that more and more of us consider important. That’s what the web needs, not “more” or “less” or “un” design.

Anyway, the best part of writing this article was that I had an excuse to interview some really cool people. [Hey Jason and Scott, let’s keep in touch, eh? Just kidding. …No, I’m not.]

I wrote a fun item for the current issue of Wired about the company that creates “invisible effects” for HBO’s awesome-looking, Martin-Scorsese-produced new series about Atlantic City bootleggers, Boardwalk Empire.

The show has feature-film-caliber production design, which literally wouldn’t have been possible on a TV budget without CG-ing a whole mess of it — otherwise, as series creator Terence Winter says, “we wouldn’t have had a boardwalk or an empire.”

Some interesting tidbits that didn’t make it into the article:

  • All the muzzle flashes from gunfire are CG’ed.
  • All of the blood-spatters (and there are a lot) are also CG’ed, because it’s easier to reset the period costumes for additional takes when you don’t have to sew up squib-holes and mop fake blood off them. (And in many cases, again for budget reasons, the wardrobe department didn’t have extra copies of the costumes to begin with!)
  • Scorsese is old-school, but really comfortable with this “fake” process — while setting up a shot in a boardwalk shop, he casually asked the CG guys (who were on set during the making of the pilot) to digitally move some signage in a window a little to the left “later” because he liked the framing better that way. (Again: much cheaper than having the art dept. spend an hour or two scouring off the paint and repainting it two feet to the left.)

Blockbuster-Quality Effects on a Small-Screen Budget | Wired

[As you can tell, I’m really digging this Dr. Strangelove-style formatting for all of my post titles lately. Anyway.]

So I just splashed some coffee all over my hardcover of The Thousand Autumns of Jacob de Zoet, a book on my nightstand that I’m alternating between devouring and slowly savoring.

  • First reaction: this
  • Second reaction: “This, while somewhat sucky, illustrates why I can’t see myself ever seriously getting on board with ebooks.”

Continue reading ‘Embodied content: or, Thoughts on ebooks after spilling coffee on a real book’

I’ve known about the “P versus NP problem” in computer science for a few years. It sits right at the intersection (which I love) of weird math, technology, and philosophy. So when a new “proof” of the conjecture bubbled up into the news, I leapt at the opportunity to write a story about it for Technology Review. The question I was curious to answer was: What does “P vs NP” mean for the rest of us? If it were proven one way or another — a very unlikely prospect — how would it affect the daily business of computing, if at all?

I assumed that this was an “easy” angle to take. Boy was I wrong. They don’t call it “complexity theory” for nothing, after all. But while the technical details could choke a Vulcan, the distinctions between the various “species” of computational problems (e.g., “P,” “NP,” “NP-complete,” etc.) can seem quite intuitive at first. That’s what’s so intriguing about P vs NP: gobbledygook like this can actually map to very layman-friendly concepts, like playing Sudoku or arranging the seating assignments for a wedding reception.
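That check-versus-find asymmetry is the whole intuition in a nutshell, and it’s easy to see in code. Here’s a minimal Python sketch — my own illustration, not anything from the article — using subset sum, a classic NP-complete problem (like the wedding-seating example, it’s trivial to verify a proposed answer but potentially exponential to find one):

```python
from itertools import combinations

def verify(numbers, subset, target):
    """Checking a proposed answer is fast -- this is roughly what
    puts a problem in NP: one membership check and one sum."""
    return all(x in numbers for x in subset) and sum(subset) == target

def solve(numbers, target):
    """Finding an answer by brute force can take exponential time:
    in the worst case we try every one of the 2^n subsets."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to the target

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)           # e.g. [4, 5]
print(verify(nums, answer, 9))    # True: easy to check, hard to find
```

If P really did equal NP, the `solve` side of that asymmetry would collapse to something as fast as `verify` — which is why the question matters to “the rest of us” at all.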

And there’s the danger, of course. I’m used to trusting my own nose for an intuitive analogy. But when I finished my draft — and even after one of my sources vetted it for accuracy — my science-writer “Spidey sense” was still subtly tingling. Thank god Scott Aaronson was willing to double-check it, because he uncovered half a dozen subtly-wrong-but-still-just-plain-wrong characterizations of basic concepts. I corrected them, and then asked him to triple-check. He found a couple more errors. Everything worked out and my editor was pleased in the end, but it was a very valuable reminder: when feeling extra-curious about something, be extra-careful. (Captain Obvious? Yup. But sometimes that’s the kind of stuff I most easily forget.)

The bright side is that this reporting experience only makes me more eager to write about “P versus NP.” Maybe I’m a science-writing version of an adrenaline junkie: I feel like I barely made it out of this topic alive, but it’s just so damn interesting, and there’s so much more “dangerous territory” to explore and bring back the goods from, that I can’t wait to go back.

[Postscript: I forgot to mention the huge importance of having a good editor on this story, or any one like it. Instead of fantasizing about murdering me, Will Knight at Tech Review whipped my sorry excuse for a news lede into shape, worked late on several drafts, and actually thanked me in the end. A real mensch.]

What happens when a cultural critic applies her Derridean apparatus to the volatile goings-on at a network of empiricist bloggers? Heffernangate!

Such was my takeaway from last night’s “beer summit” between Virginia Heffernan and a handful of science bloggers, organized by John Timmer from Ars Technica.

John rightly figured that getting a few folks together face-to-face would make for a more civil and interesting discussion of the de/merits of Heffernan’s article, compared to the swarm-of-flesh-eating-locusts manner in which many science bloggers initially reacted online. I tagged along because

  • I wanted to meet, in real life, some of the science folks I follow on Twitter
  • I had met Heffernan a few years before under similar circumstances (she wrote something I disagreed with; I flamed her online; she was open-minded enough to suggest a meet to discuss) and was curious to see if this meetup would have similarly collegial results
  • I was interested in the meta-topic at hand — is it possible for these two “tribes,” with their vastly different sets of working assumptions and critical worldviews, to teach each other anything?

Continue reading ‘Meta-encounters and non-overlapping magisteria (or, why Virginia Heffernan wrote what she wrote about ScienceBlogs)’

[This is a modified version of a comment I wrote on Ed Yong’s “On the origins of science writers” blog thread, which is really fun and you should read all of it.]

My “about” text says I am curious for a living, but what does that actually mean? It’s not just a hand-wavey philosophical quip. It’s quite literally a summary of what I physically and mentally do each day at work. (Or try to, on less productive days.)

First a smidge of background: I fell into journalism by accident, and then into science journalism almost by default. I started writing movie reviews for my college paper because it was the only thing I knew anything about (I went to film school). Then that got boring so I decided to reorient towards reported nonfiction. But I still didn’t know anything about anything. After spending a couple years after graduation in Chicago writing about low-hanging fruit (myself, movies, and local events), I realized something: If I was ever going to really make a living by “being curious for a living,” I’d better get a grip on what the hell I tend to be the most curious about. Once I asked that question, “science” was the simple, obvious answer.

I don’t have a formal degree, or even a beat. Just plain old honest curiosity — built up from a childhood of watching NOVA and flipping through my Dad’s popular science books on quantum mechanics, cosmology, and chaos theory. It’s a powerful engine for a career and I feel lucky that I can rely on it. But it also has practical advantages that I call upon every day:

Continue reading ‘3 practical ways that “being curious” helps me be a more productive freelancer’

I wrote a fun new article for DVICE.com that’ll be sure to get the nerds into a tizzy. You’ve heard of The Singularity, right? It’s this idea that at some time in the near future (i.e., within decades), computers will become more powerful than our brains. The “bad” version of this is that they start iterating and re-iterating themselves to become smarter and smarter, leave us in the dust, and enslave/kill/use us for batteries. The “good” version is that once computers have more information-processing power than human brains, we can all upload our minds into silicon and live forever, freed of our pesky bodies.

This is a bunch of bull honkey. Here’s why.

Also: they made a really snazzy header image for the article. If you want to see the header image I created and submitted (that got rejected), click through…

Continue reading ‘6 reasons why you’ll never upload your mind into a computer’

As one of my Twitter friends quipped, I have a love/hate relationship with Apple. I love their powerful, visionary computer products and will proclaim to the rafters that their intuitions about hardware/software experience are second to none. But I also flew into a weeklong rage about AntennaGate (even though I don’t own an iPhone), unable to prevent myself from writing some really nasty things about Steve Jobs.

Hm, cognitive dissonance much?

I needed a way to understand this. I needed a model to explain it to myself so I wouldn’t feel like a hypocrite the next time I buy or recommend an Apple product, yet could still feel justified when I fly off the handle about something they do that feels evil. I may have found it:

Continue reading ‘Making Apple make sense to myself: Steve Jobs isn’t Jesus, he’s a run-of-the-mill artist’

Every so often (but more and more often), you’ll see a film or short flying around the intertubes with everyone saying a) how cool it is and b) how it only cost [insert relatively small amount of money]. The Raven is the latest one I’ve heard about (via io9), and its supposedly dirt-cheap budget is blared right in the headline and first sentence of the article. Here’s the film (and it is very cool), made for a purported $5000:

Let’s get one thing straight: they may have spent “only” $5000 out-of-pocket, but to imply that that’s all the filmmakers had to scrape together to make something like this is just plain wrong, or, at the very least, misleading.

Continue reading ‘The truth about that awesome short film that “only cost [X] bucks!”’
