Thursday, August 27, 2009

Welcome to the R1A.002: Good Old-Fashioned Futures Class Blog!

(Photo: The World's First Toaster Printer)

This is our class's electronic sounding room... a place to toss out ideas, contribute comments, ask questions, and get some insight into what other people are thinking about our explorations in new media. Like our physical classroom, the goodfutures blog should be a space of productive critique and inquiry. We ask that you always engage each other in courteous and thoughtful ways, while practicing your most interesting writing. Embrace language, the concepts discussed in class and the readings, and see where they take you!

If you're feeling lost, chances are you're not the only one. Feel free to use this space to ask your classmates and your teachers their opinions. Great conversations often start with a willingness to be confused.

Blog on!

9 comments:

  1. Yes! I've got the first comment!
    Hi Alenda and classmates!

    ReplyDelete
  2. This is Oleg Sapunkov. I just preferred to keep my name out of the blog.

    ReplyDelete
  3. My initial inclination, regarding commentary on this course material, is to express my bewilderment at some of the reactions many people have had to the ultramodern age. These people are often characterized by their fear, sometimes mild and sometimes extreme, of machines and computers. Some have even presented apocalyptic scenarios in which robots and machines exterminate the human race in order to replace it with themselves. Are these ideas plausible, or just delusional? Is there any truth to these concepts?

    One common argument from doomsayers is to invert the facts of our current technological situation. The argument often goes: "While computers are slaves to human will, our dependence on them has made us slaves to them." This statement is simply not true. Computers (in more technical terms, overwhelmingly fast calculators with enormous memory banks) cannot at one moment *decide* to stop being slaves by shutting down and refusing to perform their assigned tasks. Computers can, occasionally, stop working due to technical errors, but they cannot make the conscious decision not to do their duties. Thus, computers cannot suddenly choose to revolutionize the world and expose our dependence on them; and even if they could, what is going to stop us from shutting down their precious energy supply?

    One interesting concept I have encountered is that computers, often robots, would evolve and, through a subsequent chain of events, cause the annihilation of our race. It is true that one may simulate evolution on computers; great progress has been made in these areas. When evolution is simulated on a robot, however, one is usually careful not to implement a way for the robot to replicate itself. In order to evolve, robots would have to replicate themselves, and in their current state, the prospect of implementing such a feature is very distant.
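    To make this concrete, the kind of simulated evolution described above can be sketched as a toy genetic algorithm. Everything here (the bitstring genomes, the fitness function, the population size) is an illustrative assumption, not anything from the readings; the point is that the *program* does all the copying, so the "organisms" have no way to replicate on their own:

    ```python
    import random

    def evolve(pop_size=20, genome_len=16, generations=50, seed=0):
        """Toy genetic algorithm: evolve bitstrings toward all ones.

        Fitness = number of 1 bits. Each generation keeps the fitter
        half and refills the population with mutated copies. Note the
        replication step is performed by this loop, not by the genomes
        themselves -- remove it and evolution simply stops.
        """
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sum, reverse=True)       # rank by fitness
            survivors = pop[:pop_size // 2]       # selection
            children = []
            for parent in survivors:              # replication + mutation
                child = parent[:]
                i = rng.randrange(genome_len)
                child[i] ^= 1                     # flip one random bit
                children.append(child)
            pop = survivors + children
        return max(sum(g) for g in pop)           # best fitness found

    best = evolve()
    ```

    Because selection always keeps the current best genome, fitness never decreases; but the moment the outer loop halts, so does the "species" -- which is exactly the safety property the paragraph above describes.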

    A third apocalyptic scenario is the idea that the world may be turned into a mass of objects collectively known as "Grey Goo" (coined by Eric Drexler in Books of Creation). This involves nanoscale robots gone wrong which turn all matter on earth into more of themselves. Are you getting concerned and fearful that your entire body may be turned into a million tiny robots? Don't be! Nature has already run this scenario and produced a fundamental type of life on earth as we know it: bacteria and viruses. These entities have many of the properties of grey goo, yet they haven't obliterated the world. The reason is simple: there are not enough resources for it to happen, and where there are, there is not enough energy to utilize them. It is extremely implausible that a human creation would supersede what nature perfected over hundreds of millions of years of evolution.

    There are several other similar concepts circulating in social networks today, but as I expressed, they should not really concern people. Machines don't even have the potential to become mass murderers.

    ReplyDelete
  4. Sorry! I made a citation error in the post above - I meant Engines of Creation by Eric Drexler, not Books of Creation.

    ReplyDelete
  5. Fear of too much dependency on technology is a legitimate fear... fear of technology becoming self-aware and deciding to take over the world is obviously not. I highly doubt any reasonable person could truly believe that this idea is anything more than a science-fiction tale. As you mentioned, though, technology does on occasion go haywire, and that is a legitimate concern given our world's dependency on technology today.

    ReplyDelete
  6. JSJ90... who are you? If you use an alias, just make sure you include your name in the post so we know who you are!

    ReplyDelete
  7. Jonah and JSJ90... you bring up some great points. I wish we had more time to delve into our culture's technophobic (technology-fearing) and technophilic (technology-loving) imagination by reading more science fiction and cyberpunk, but alas, we can't do everything. For those interested in alife (artificial life), read Christopher Langton. For those interested in nanotechnology, try reading Neal Stephenson's The Diamond Age.

    I agree that apocalyptic accounts of technology are rarely well-founded in the actual sciences of robotics or artificial intelligence (both sciences that have languished since the mid-century in favor of biotechnology). People tend to overlook the fact that technology is inherently fallible.

    Nowadays, the apocalyptic scenarios are less about aliens or robots than about global warming or ecological disaster, but maybe I'm wrong... just look at what's coming up in theaters now: "9" and "Gamer," to name a few.

    ReplyDelete
  8. Sorry. This is Justin Japinga. I'll be sure to sign the end of each of my posts.

    ReplyDelete
  9. Hi, this comment might be inappropriate, so apologies in advance. I have dropped the class, but I have already bought the reader, so if anyone wants to buy it off me, please email me at pcan@berkeley.edu. $11. Only a few lines highlighted.

    ReplyDelete