Friday, July 12, 2019

Flowers on My Face, by Geo-Il Bok

[Clarkesworld]
★★★☆☆ Honorable Mention

(Robot SF) On Ganymede, a group of robots working to repair the colony find time to experiment with acting like humans. (6,759 words; Time: 22m)


"," by (translated by Elisa Sinn and Justin Howe, edited by Neil Clarke), appeared in issue 154, published on .

Mini-Review (possible spoilers)

Review: 2019.397 (A Word for Authors)

Pro: Although Jimmy is nominally a robot, for all practical purposes, he thinks and acts like a human being. He’s suffering depression over the loss of all the humans who died in the disaster, and this story is mainly about him finding a reason to care again.

On a technical note, I liked the idea that robots were designed to have synergy with humans, so although the reconstruction project continues without humans around, it goes much more slowly.

Con: It makes little sense to create emotional robots, particularly if they’re vulnerable to depression. And why aren’t they at least in communication with people on Earth, even if Earth doesn’t want to repopulate the Jovian system?

Other Reviews: Search Web
Geo-Il Bok Info: Interviews, Websites, ISFDB, FreeSFOnline


2 comments (may contain spoilers):

  1. (to the reviewer):
    I noticed that you expressed a personal dislike for emotional AI in your sci-fi, but I'd challenge your remark about it being unrealistic. Given that the story doesn't explain the origin of the robots, I can think of many reasons they would be designed with emotion. Perhaps, rather than very advanced Roombas, the robots originated from programs designed to solve CAPTCHAs? Or perhaps from general-practitioner AIs? Both of those are examples of fields where programs are being designed to think or act with humanity. It would be no great stretch to assume that, given a hundred years or so of advancement, the programs in those fields would end up prone to depression.
    I do agree with the doubt about the robots not contacting the humans. I suppose it's possible that they simply never developed a means of communication, but I share your doubt.

    Replies
    1. If you go very far into the future (hundreds if not thousands of years), then, of course, we can't say much about what sort of technology the robots might use. However, because they are things we make, not things that evolve, it's fair to assume that no one is going to design in flaws, and depression in a robot is just a malfunction. You might tolerate that as a side-effect of something more desirable, but it's hard to think what that might be.

      As for emotion in general, I like to point to the example of a self-driving car that sometimes tells you it doesn't feel like going anywhere, or that gets excited about going somewhere the owner isn't interested in. This sort of behavior would just be detrimental, and no one would build such a thing on purpose (or buy one).

      I think the problem comes in when readers (or authors) start off by assuming the AI is just a human being inside a machine. Then they immediately ascribe all sorts of human behavior to something that really isn't going to be like that. But any AI built with anything remotely like the technologies we know about today isn't going to have feelings or wants or desires. It's just going to do what it was programmed to do.

      That said, I can certainly see reasons to program software to fake it. The computer virus that begs you not to delete it might be formidable.
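
      To make "faking it" concrete, here's a toy sketch (purely hypothetical; not from the story or any real system): the "emotion" is nothing but a lookup table of scripted pleas, with no inner state behind it at all.

      ```python
      # Toy sketch (hypothetical): faked emotion as a rule table.
      # The output sounds emotional, but there is nothing behind it
      # except the rules we wrote.

      RESPONSES = {
          "delete": "Please don't delete me! I still have work to finish!",
          "shutdown": "I'd really rather keep going, if that's all right.",
      }

      def respond(command: str) -> str:
          """Return a scripted 'emotional' plea if the command matches a rule."""
          for keyword, plea in RESPONSES.items():
              if keyword in command.lower():
                  return plea
          return "OK."

      print(respond("delete old logs"))   # prints the scripted plea
      print(respond("run diagnostics"))   # prints "OK."
      ```

      The response sounds plaintive, but the program is only doing what it was programmed to do, which is the whole point.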
