
    The context for learning, education and the arts (4)

    (This entry is in five parts: One, Two, Three, Four, Five)

    So why explore the intersections of human thought and computer programming? My tentative answer would be that we have not understood the breadth and depth of the relationships that we develop with machines. Human culture is defined by its ongoing struggle with tools and implements, continuously finding ways of improving both the functionality of technology and its potential integration into everyday life. Computer programming may well be one of the most sophisticated artificial languages that our culture has ever constructed, but this does not mean that we have lost control of the process.

    The problem is that we don’t recognize the symbiosis, the synergistic entanglement of subjectivity and machine, or if we do, it is through the lens of otherness, as if our culture were neither the progenitor nor really in control of its own inventions. These questions have been explored in great detail by Bruno Latour, and I would reference his articles in “Common Knowledge” as well as his most recent book, Aramis, or The Love of Technology. There are further and even more complex entanglements here related to our views of science and invention, creativity and nature. Suffice it to say that there could be no greater simplification than the one which claims that we have become the machine or that machines are extensions of our bodies and our identities. The struggle to understand identity involves all aspects of experience, and it is precisely the complexity of that struggle, its very unpredictability, which keeps our culture producing ever more complex technologies and which keeps the questions about technology so much in the forefront of everyday life.

    It is useful to know that within the field of artificial intelligence (AI) there are divisions between researchers who are trying to build large databases of “common sense” in an effort to create programming that will anticipate human action, behaviour and responses to a variety of complex situations, and researchers who are known as computational phenomenologists. “Pivotal to the computational phenomenologists’ position has been their understanding of common sense as a negotiated process as opposed to a huge database of facts, rules or schemata.” (Warren Sack)

    So even within the field of AI itself there is little agreement as to how the mind works, or how body and mind are parts of a more complex, holistic process which may not have a finite systemic character. The desire, however, to create the technology for artificial intelligence is rooted in generalized views of human intelligence, generalizations which don’t pivot on culturally specific questions of ethnicity, class or gender. The assumption that the creation of technology is not constrained by the boundaries of cultural difference is a major problem, since it proposes a neutral register for the user as well. I must stress that these problems are endemic to discussions of the history of technology. Part of the reason for this is that machines are viewed not so much as mediators, but as tools; not as integral parts of human experience, but as artifacts whose status as objects enframes their potential use.

    Computers, though, play a role in their use. They are not simply instruments, because so much has in fact been done to them in order to give them the power to play their role. What we more likely have here are hybrids, a term coined by Bruno Latour to describe the complexity of interaction and use that is generated by machine-human relationships.

    Another way of understanding this debate is to dig even more deeply into our assumptions about computer programming. I will briefly deal with this area before moving on to an explanation of why these arguments are crucial for educators as well as artists and for the creators and users of technology.

    Generally, we think of computer programs as codes with rules that produce certain results and practices. Thus, the word processing program I am presently using has been built to ensure that I can use it to create sentences and paragraphs, to, in other words, write. The program has a wide array of functions that can recognize errors of spelling and grammar, create lists and draw objects. But we do have to ask ourselves whether the program was designed to have an impact on my writing style. Programmers would claim that they have simply coded in as many of the characteristics of grammar as they could without overwhelming the functioning of the program itself. They would also claim that the program does not set limits to the infinite number of sentences that can be created by writers.
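To make the claim concrete, here is a minimal sketch of what rule-based writing checks of this kind look like underneath. The dictionary and the "style" rule are my own illustrative assumptions, not the code of any real word processor; the point is simply that what counts as an "error" is whatever the programmer chose to encode.

```python
# Toy sketch of rule-based writing checks, in the spirit of the
# word-processor features described above. The tiny dictionary and the
# long-sentence rule are illustrative assumptions, not any product's code.

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def spelling_issues(text):
    """Flag words not found in the (assumed) dictionary."""
    return [w for w in text.lower().split() if w.strip(".,") not in DICTIONARY]

def style_issues(text):
    """A crude 'grammar' rule: flag sentences longer than 25 words."""
    return [s for s in text.split(".") if len(s.split()) > 25]

print(spelling_issues("The quikc brown fox"))  # → ['quikc']
```

Even in this toy form, the checker's notion of correct writing is baked in by its author, which is exactly the question raised above about style.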

    However, the situation is more complex than this and is also subject to many more constraints than initially seems to be the case. For example, we have to draw distinctions between programs and what Brian Cantwell Smith describes as the “process or computation to which that program gives rise upon being executed and [the] often external domain or subject matter that the computation is about.” (Smith, On the Origin of Objects, Cambridge: MIT Press, 1998: 33) The key point here is that program and process are not static, but are dynamic, if not contingent. Thus we can describe the word processor as part of a continuum leading from computation to language to expression to communication to interpretation. Even this does not address the complexity of relations among all of these processes and the various levels of meaning within each.
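Smith's three-way distinction can be sketched in a few lines of code. This is my own illustration, not an example from Smith: the program is inert text, the process is the dynamic execution that text gives rise to, and the subject matter (here, prices) lies outside the computation itself.

```python
# Illustrative sketch of the program / process / subject-matter distinction.
# (My example, not Smith's.)

program = "total = sum(prices)"    # the program: static, inert text
prices = [2.50, 3.75]              # the subject matter: prices in the world

namespace = {"prices": prices}
exec(program, namespace)           # the process: a dynamic execution event

print(namespace["total"])          # → 6.25
```

The same static text can give rise to different processes about different subject matters each time it is run, which is the sense in which program and process are dynamic rather than static.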

    To be continued...


    Reader Comments (2)

    I've read a little of Donna Haraway's take on human-computer symbiosis and it's interesting that she also writes about the "artificiality" of pet ownership and how certain kinds of dogs have emerged through generations of dependence on relationships with humanity. She described a certain breed that actually can't reproduce without human intervention. Technologies are like this as well. They need some kind of "liveliness" in order to persist. There was a posting today at Eyebeam in New York about a project that involves the creation of electronic air quality sensors which are network-enabled and incorporate a form of instant messaging to allow the sensors to be a point of communication for distributed communities interested in monitoring air quality in New York. It's a brilliant mix of two technologies. The result is a system that involves cheap sensor boards, roaming users holding the units, and remote web users who monitor the overall system and provide feedback to those on the ground. The project's creators describe these design choices as an attempt to keep the system alive.

    The question of the role of an artist in a technological environment is one I feel like I am right in the middle of, but then so is everyone else whether they like it or not. If I paint a painting after listening to the radio, I have been affected by a global communications network. The challenge at this point is how to prevent us as creators from settling into the role of passive consumers or receivers of information/culture/ideas and how to encourage agency, dialog, creation and action.

    I have been at the World Urban Forum now for two days volunteering and I am just about completely saturated with good vibes and positivity, but I have yet to make that critical step from talk to action on anything. At least I'm writing.

    In the realm of computer programming there are so many resources for artists to get started. MAX/MSP and Lego Mindstorms are the most visually-oriented ways of approaching programming. Processing is also a good learning environment, while creating Flash-based software is much less intuitive but will run in almost any web browser. StarLogo is an excellent, simple environment for learning about programming parallel systems (this is similar to what Joshua Davis talked about at VIDFest).
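For readers who haven't seen StarLogo-style programming, the core idea is that many simple agents all follow the same rule on every tick, and interesting behaviour emerges from the crowd. Here is a minimal pure-Python sketch of that pattern (my own illustration, not StarLogo code):

```python
# A tiny agent-based sketch of "parallel systems" programming in the
# StarLogo spirit: many simple agents follow the same rule each tick.
# Pure-Python illustration, not StarLogo code.

import random

random.seed(1)  # make the run repeatable

class Walker:
    """An agent that wanders left or right on a line."""
    def __init__(self):
        self.x = 0

    def step(self):
        self.x += random.choice([-1, 1])   # each agent takes a random step

agents = [Walker() for _ in range(100)]

for tick in range(10):                     # every tick, all agents act
    for a in agents:
        a.step()

print(sum(a.x for a in agents) / len(agents))  # mean position stays near 0
```

The appeal for artists is that the "program" is just the one-line rule inside `step`; the texture of the result comes from running it across a population.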

    I like Sack's thought about negotiation. I'm also starting to really wonder about the notion of what is artificial and what isn't. As software gets more sophisticated and more connected, I think Turing's notion of what constitutes a machine and what is human is more interesting than just accepting technology as a part of what it means to be human, and it makes me start thinking more about what it means to be conscious and to have free will. This makes me want to wander off into "Blade Runner" territory...

    Smith's ideas about how there is nothing particularly digital about computers are an incredibly liberating departure point for thinking about art and technology. He describes a model for looking at digital systems with three layers: the user (higher level), signals (middle level), and the physical (lower level). At the level of the user, we have screens and speakers and motors and all the outputs of the digital universe. All this stuff is actually analog. It is connected to all of what we know about experiencing and thinking about the universe from the last X thousand years of human consciousness (more, if you're one of those people who believe in science). The lower level is the level of the matter of computing -- the silicon and very analog materials that make the hard, real stuff that computers are made from. In the middle is where we find this thing that we all know as "digital". It is the idea of discrete on/off signals that allow us to make clean mathematical calculations and predictable, discrete (in the sense of systems with sharp edges) things. This thing that is "digital", however, isn't as sharp and discrete as we think it is. Smith uses a diagram of a square "ideal" digital signal, compares it to the actual digital signals found in most computers, and suggests that the difference between real and ideal is considerable at this middle conceptual layer in computers. As a means for thinking about digitality, it is quite nice to let go of the idea that there is anything digital about "digital".
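The ideal-versus-real signal point can be sketched numerically. Below is my own crude model (not Smith's diagram): an "ideal" square wave that switches instantaneously, a "real" one whose edges take time to swing, and the thresholding that imposes discreteness on the analog middle layer. The rise time and threshold values are illustrative assumptions.

```python
# Illustrative sketch (not Smith's diagram): an "ideal" square digital
# signal versus a "real" one with finite rise time, and the thresholding
# that recovers discrete bits from an analog voltage level.

def ideal_bit(t, period=1.0):
    """Perfect square wave: instantaneously 0 or 1."""
    return 1.0 if (t % period) < period / 2 else 0.0

def real_bit(t, period=1.0, rise=0.1):
    """Same wave, but edges take `rise` seconds to swing (a crude model)."""
    phase = t % period
    if phase < rise:                          # rising edge, mid-swing
        return phase / rise
    if phase < period / 2:
        return 1.0
    if phase < period / 2 + rise:             # falling edge, mid-swing
        return 1.0 - (phase - period / 2) / rise
    return 0.0

def read_digital(level, threshold=0.5):
    """The 'digital' layer: impose discreteness on an analog level."""
    return 1 if level >= threshold else 0

samples = [0.02, 0.25, 0.57, 0.75]
print([round(real_bit(t), 2) for t in samples])      # analog-looking levels
print([read_digital(real_bit(t)) for t in samples])  # → [0, 1, 0, 0]
```

Note that at t=0.02 the ideal wave is already at 1 but the real signal is still mid-swing and reads as 0: the "considerable difference between real and ideal" lives exactly in those edges.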

    Of course, if you tell anyone about this revelation that computers have a fundamental difference between their "ideal" and their "real", they'll laugh and say "you just figured that out now?"
    June 20, 2006 | Ian Wojtowicz
    Adam Lowe is an artist who visualizes what Brian Cantwell Smith is talking about.
    June 20, 2006 | Ian Wojtowicz
