The context for learning, education and the arts (5)
Wednesday, June 21, 2006 at 3:28PM
Ron Burnett in Arts Education, Design, Education, Interactivity, Learning Communities, learning

(Please refer to the previous entries for this article: One, Two, Three, Four, Five.)

My point here is that although computers are designed by humans, programmed by humans and then used by humans, this tells us only part of the story. The various dimensions of the experience are not reducible to any one of these instances, nor to the sum total of what they suggest about computer-human interaction. Instead, most of what makes up the interaction is unpredictable, full of potential errors of translation and action, and not governed by simple rules of behaviour.

Smith puts it well: “…what was required was a sense of identity that would support dynamic, on-the-fly problem-specific or task-specific differentiation — including differentiation according to distinctions that had not even been imagined at a prior, safe, detached, ‘design time’.” (Smith: 41)

Computational structures cannot be designed in anticipation of everything that will be done with them. This crucial point can be used to explain, if not illustrate, the supple nature of machine-human relations. It also explains the extraordinary number of variables that simultaneously make it possible to design a program without knowing what will be done with it.

Another example of this richness at work comes from the gaming community (which is different from the video game community). There are tens of thousands of people playing a variety of games over the internet. Briefly, the games are designed with very specific parameters in mind. But players are discovering that they can group themselves into clans in order to win. These clans find new ways of controlling the games and rewriting the rules to their own specifications, thereby alienating many other players. In one instance, in response to one such sequence of events, a counter-group came together and tried to create some semblance of governance to control the direction in which the game was headed. After some months the governing council that had been formed grew more and more fascistic and set inordinately strict rules for everyone. The designer of the game quit in despair.

This example illustrates the gap, the necessary gap, between the “representational data structure” (Smith: 43) that initially set up the parameters of the game and the variables that were introduced by the participants. But it also points out the limitations of the design process, limitations that cannot be overcome by increasingly complex levels of design. This is, in other words, a problem of representation. How can code be written at a level that will be able to anticipate use? The answer is that, for the most part, it cannot be, except with great difficulty. It is our cultural investment in the power of the computer that both enhances and changes the coding and the use. We have thus not become extensions of the machine but have acted in concert with it, much as we might with another human being. This is hybridity, and it suggests that technology and the practical uses to which we put it always exceed the intentional structures that we build into it.

It is within and through this excess that we learn. It is because of this excess that we are able to negotiate a relationship with the technologies that make up our environment. And it is the wonder, the freshness, the unpredictability of the negotiation process that leads us to unanticipated results, such as Deep Blue actually beating Kasparov!
