Wednesday, October 14, 2015

Damage, by David D. Levine, January 21, 2015; 7,465 words
Rating: 2, Not Recommended. Recommended By: SFEP

An intelligent, patchwork fighter ship struggles to help the last of the Free Belt fight off the Earth Force.

Mini-Review (possible spoilers)

Pro: There is a decent rationalization of having an AI that feels pain and fear.

Con: There are a lot of problems that make it hard to sustain suspension of disbelief in this story.

Exactly what good is this AI? It seems to spend all its time worrying while the human pilot does all the work. Knowing that the survivors don't even believe in their own cause makes things like the attack on the Tanganyika painful to read.

General Geary is so evil he's impossible to believe.

It's very naive to imagine that you can merge two computers into a single new one--even a "fantasy AI" that feels pain and fear and loves its pilot. A low point comes when the AI tries to multiply grief by a thousand and finds its co-processor can't do it. Ultimately, a story that depends on a computer overcoming its programming is like a story that depends on a locomotive learning to leave the tracks and drive down the freeway.

It's impossible for an asteroid to reach Earth from the asteroid belt in just 82 hours. Even 82 days would be pushing it.

4 comments (may contain spoilers):

  1. I agree. I didn't understand why they needed an AI. The ship didn't seem to need any computing abilities beyond what a 'dumb' computer like my iPhone could do, and the AI felt as if it was 'along for the ride' rather than doing anything. It seemed particularly strategically stupid to use an AI, given that the one in the story rebels due to its free will.

    The 'love for the pilot' felt overdone. I ended up feeling I was reading an account of an abusive fictional relationship, but with no wider narrative purpose. Yuck.

    1. The whole idea of designing an AI with feelings is so kooky I'm amazed that it keeps coming up. Imagine that your self-driving car said "Oh let's not go THERE again!" Or (just as bad) if it gushed with excitement no matter where you wanted to go.

      It might be funny for a few hours. After that, everyone would turn it off.

      I think the problem is that nearly all SF writers have no idea how AI works today, much less what it might do in the future, and lacking any better ideas, they just make the AI a human character. That, or else they make it completely mechanical. I should call this Hullender's Law:

      "In SF, an AI is either not artificial or not intelligent."

  2. Now that it's a Nebula Award finalist, there's an interesting discussion from a reviewer who liked the story.