1/29/10

The "Gospel" and False-Belief


Here's a standard form of "sharing the gospel" used by Christians: "If you were to die and stand before God, and he were to ask, 'Why should I let you in?', what would you say?" I put a variation to a group of 34 students primed by reading Clifford's "The Ethics of Belief" and James' "The Will to Believe": what if you answered "because I believe X," and X was the correct answer, but God still refused you entrance? What would that say about your belief? Here are their results:

14 students thought that they had poor reasons for their belief (a Clifford-like response).
10 students thought that their belief was not *really* held, perhaps invalidated by their actions or self-deception (a proto-Jamesian response).
4 students thought that God was either lying or testing them about their answer.
4 students thought that their belief was based on selfish desires, like getting into heaven.
2 students thought that their belief was insufficient and that they should have believed more than just X.
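
Out of curiosity, here's a throwaway tally of the split in Python (the shorthand labels are mine; the only point is that all 34 responses are accounted for, with rough percentages):

```python
# Tally of the 34 survey responses above, with rough percentages.
responses = {
    "poor reasons (Clifford-like)": 14,
    "not really believed (proto-Jamesian)": 10,
    "God lying or testing": 4,
    "selfish desires": 4,
    "insufficient belief": 2,
}
total = sum(responses.values())
assert total == 34  # every student accounted for
for label, count in responses.items():
    print(f"{label}: {count}/{total} ({count / total:.0%})")
```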


1/25/10

"Advanced Chess" and Cognitive Augmentation


Garry Kasparov, famous for his 1996 and 1997 matches against Deep Blue, IBM's chess machine, has designed a *new* way to play chess in which each player simultaneously uses a computer chess program: "Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine." Is this taking chess to "the highest level"? Or is it destroying the game? Read more in his New York Review of Books article.
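
To make the format concrete, here's a minimal sketch of one player's half of that consultation loop, assuming the python-chess library and a Stockfish binary on the PATH (both illustrative choices of mine, not anything the format prescribes): the engine proposes candidate lines, and the human stays in charge of the final move.

```python
# One player's consultation loop in "advanced chess": the engine suggests
# candidate lines, the human chooses the move actually played.
# Assumes the python-chess library and a UCI engine such as Stockfish
# on the PATH -- illustrative choices, not part of Kasparov's format.
import chess
import chess.engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    while not board.is_game_over():
        # Ask the engine for its top three candidate lines (MultiPV).
        infos = engine.analyse(board, chess.engine.Limit(time=1.0), multipv=3)
        for rank, info in enumerate(infos, start=1):
            print(f"candidate {rank}: {info['pv'][0]} (score {info['score']})")
        # The human makes the final call, informed but not dictated to.
        try:
            board.push_uci(input("your move (UCI, e.g. e2e4): "))
        except ValueError:
            print("illegal or malformed move, try again")
    print("result:", board.result())
```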

1/23/10

Human Nature and Cognitive Augmentation


I've received some wonderful comments on William Cornwell's talk at the last CIPHER workshop, in which he argued that technologically advanced augmentation of our senses, bodies, and minds makes us "more human".

"I do believe that chips implanted into someone’s neurophysiology wouldn’t make them more human, but rather less human. To have a machine do decision-making for us, affects all parts of the brain, and mechanizes us to the degree that our humanity is compromised. To some degree electronics augment us cognitively, but to a greater extent it hinders us and provides us with more trouble. (...) The stakes are high. Are you willing to put your life in the hands of a robot, even if it is controlled by “your” own brain? Augmentation cognition would alienate us from humanity. (...) We are not made to last pharmaceutically beyond 106 years old, and not only would a robot be essentially not human but it would outlive generations."

"But this is all reminding me of the Tower of Babel. Humans have this tendency to try to build themselves higher and higher, and to make themselves gods, improving upon everything they can. (...) Although it might seem mysterious and unreasonable to us, God doesn’t want us to all band together and become more and more powerful as a race. We can see this clearly from how he reacted to the building of Babel: “The Lord Said, ‘if as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. Come, let us go down and confuse their language so they will not understand each other” (Genesis 11:6). There is a line in my mind between using tools to make our lives easier and trying to improve upon ourselves as a species because we think what we are is inadequate."


1/8/10

Ethics of Simulation

Recent talk of the film Avatar (2009) has brought up an interesting moral question I've been tossing around for a while now. What are the moral implications of digital simulation games involving death? While video games have, in the past, been linked to youth violence--and vice versa--I am wondering about the morality of the games themselves.

As computer-generated imagery becomes more and more lifelike, I wonder at what point it becomes immoral to kill a simulated being. How does destroying my opponent's civilization in Age of Empires (an excellent, if dated, real-time strategy game) compare to the sack of Rome in 410 A.D.?

As a History major I am sensitive to the fact that on one level even asking this question is an affront to the people who lived through that event, but still, the question nags. I also realize that artificial intelligence may be far from a reality, but intelligence is not necessary for basic rights. Animals, and even plants, are granted rights--or at least their lives are held to have value, if only utilitarian in nature--so why not the unreal?

I am far from believing we have to worry about simulated beings revolting against their masters dwelling in the land of reality, but I think the morality of simulation, in concept, bears further inquiry.

1/4/10

Graduate School in the Humanities: A Bust



The Chronicle has a good argument for working at the intersection of the humanities and the sciences, reporting that faculty jobs in the humanities have been drying up for some time. For those interested in the practical benefits of working at that intersection, check out HASTAC.

“Knowledge goes from my ears to my brain, not from my finger to my brain.”


Here's a great article from the NYT on the current state of braille use.

Many blind users seem to be forsaking tactile braille for audio applications. Surprisingly, only 10% of blind people make use of tactile braille. A stronger point is made by Laura Sloate, blind from age six: “It’s an arcane means of communication, which for the most part should be abolished.” Apparently, tactile braille reading is slow, cumbersome (a braille edition of a Harry Potter book runs over 1,000 pages), and isolating.

The only arguments I've heard in support of widespread tactile braille use come from developmental advocates, who say, as the article reports, that braille reading recruits areas of the visual cortex needed for normal brain functioning. But similar reports have been made about auditory stimulation activating the visual cortex.

In sum, the thought that people without sensory deficits are prescribing clunky devices for people with sensory deficits, when more user-friendly devices are in the offing, seems a bit ridiculous.


1/3/10

More Virtual Reality Ethics



Once again the virtual reality world Second Life ignites a flurry of ethical quandaries. Patrick Davison's talk at Ignite (http://ignite.oreilly.com/2009/12/patrick-davison-and-the-plight-of-the-digital-chickens.html) about virtual chicken farming raises some interesting issues relating to virtual property ownership and anonymous social interaction. The main thrust of his talk, however, is that different people perceive the function of virtual spaces differently, and this difference in perception leads to conflict.

When dealing with digital ethics, it is important to keep in mind the different ways of perceiving digital reality. To some it is a new way of being, to others it is merely a tool, to others still it is just a game. When conflicts of digital ethics arise, it is often the case, but rarely noticed, that the conflict stems from fundamentally different views of digital reality.