Entries in Education (50)

Sunday
Aug272006

Hurricane Katrina

The Sunday New York Times Magazine of August 27th has a poignant and profoundly disturbing article and photo essay on the aftermath of Hurricane Katrina, with particular emphasis on what happened to the children of the families displaced by the storm. The images are very powerful, and the reality of what happened, the incompetence of the recovery effort and the lack of attention to the families struggling to remake their lives are disturbing and shocking.

The images made me angry, but also made me feel quite hopeless. Journalists have still not learned that shocking images achieve their effect, but little else. The personal stories of the children were heartbreaking and I felt the need to do something, but other than sending money to the relief effort or writing this short comment, my options remain limited.

This is indeed the challenge of the next few years. How can the information we receive be translated from our personal experiences into action? I have no pat answers to this question. A disturbing pattern has emerged over the last decade or so. More information has not led to more knowledge. Instead, it has led to increasing and sometimes deadly tribal activity. These tribes range from small groups to larger ones, but their common characteristic is a lack of direct response to crucial issues. Their worlds are centered on their own and sometimes parochial concerns. Globalization, it seems, is actually returning us to a more medieval practice of village life, the only difference being that today's villages are not constrained by national boundaries.

If each village were to become the center of new and imaginative activities directed toward social change and equity, then there would indeed be opportunities to support people in need with a more determined effect and impact than is presently possible. I will discuss this issue in greater depth over the next few weeks in an expansion of earlier posts on communities within the context of what has now become an image-world — old definitions will have to change.

This short piece is dedicated to Michael Merovitz, a very old friend who died recently — a gentle, sweet and wonderful man whose premature death is a deep loss.

Wednesday
Jun212006

The context for learning, education and the arts (5)

(This entry is in five parts: One, Two, Three, Four, Five)

My point here is that although computers are designed by humans, programmed by humans and then used by humans, this tells us only part of the story. The various dimensions of the experience are not reducible to one of the above instances nor to the sum total of what they suggest about computer-human interaction. Instead, most of what makes up the interaction is not predictable, is full of potential errors of translation and action and is not governed by simple rules of behaviour.

Smith puts it well: “…what was required was a sense of identity that would support dynamic, on-the-fly problem-specific or task-specific differentiation — including differentiation according to distinctions that had not even been imagined at a prior, safe, detached, ‘design time.’” (Smith: 41)

Computational structures cannot be designed in anticipation of everything that will be done with them. This crucial point can be used to explain if not illustrate the rather supple nature of machine-human relations. As well, it can be used to explain the extraordinary number of variables which simultaneously make it possible to design a program and not know what will be done with it.

Another example of this richness at work comes from the gaming community (which is different from the video game community). There are tens of thousands of people playing a variety of games over the internet. Briefly, the games are designed with very specific parameters in mind. But what gamers are discovering is that people are grouping themselves together in clans in order to win. These clans are finding new ways of controlling the games and rewriting the rules to their own specifications, thereby alienating many of the players. In one instance, a counter-group got together in response to one such sequence of events and tried to create some semblance of governance to control the direction in which the game was headed. After some months the governing council that had been formed grew more and more fascistic and set inordinately strict rules for everyone. The designer of the game quit in despair.

This example illustrates the gap, the necessary gap, between the “representational data structure” (Smith: 43) that initially set up the parameters of the game and the variables that were introduced by the participants. But it also points out the limitations of the design process, limitations that cannot be overcome by increasingly complex levels of design. This is, in other words, a problem of representation. How can code be written at a level that will be able to anticipate use? The answer is that, for the most part, it can be done only with great difficulty. It is our cultural investment in the power of the computer that both enhances and changes the coding and the use. We have thus not become extensions of the machine but have acted in concert with it, much as we might with another human being. This is hybridity, and it suggests that technology and the practical use to which we put technology always exceed the intentional structures that we build into it.

It is within and through this excess that we learn. It is because of this excess that we are able to negotiate a relationship with the technologies that make up our environment. And it is the wonder, the freshness, the unpredictability of the negotiation process that leads us to unanticipated results, such as, for example, Deep Blue actually beating Kasparov!

Tuesday
Jun202006

The context for learning, education and the arts (4)

(This entry is in five parts: One, Two, Three, Four, Five)

So why explore the intersections of human thought and computer programming? My tentative answer would be that we have not understood the breadth and depth of the relationships that we develop with machines. Human culture is defined by its on-going struggle with tools and implements, continuously finding ways of improving both the functionality of technology and its potential integration into everyday life. Computer programming may well be one of the most sophisticated artificial languages which our culture has ever constructed, but this does not mean that we have lost control of the process.

The problem is that we don’t recognize the symbiosis, the synergistic entanglement of subjectivity and machine, or if we do, it is through the lens of otherness, as if our culture were neither the progenitor nor really in control of its own inventions. These questions have been explored in great detail by Bruno Latour, and I would reference his articles in “Common Knowledge” as well as his most recent book, Aramis, or The Love of Technology. There are further and even more complex entanglements here related to our views of science and invention, creativity and nature. Suffice it to say that there could be no greater simplification than the one which claims that we have become the machine or that machines are extensions of our bodies and our identities. The struggle to understand identity involves all aspects of experience, and it is precisely the complexity of that struggle, its very unpredictability, which keeps our culture producing ever more complex technologies and which keeps the questions about technology so much in the forefront of everyday life.

It is useful to know that within the field of artificial intelligence (AI) there are divisions between researchers who are trying to build large databases of “common sense” in an effort to create programming that will anticipate human action, behaviour and responses to a variety of complex situations, and researchers who are known as computational phenomenologists. “Pivotal to the computational phenomenologists’ position has been their understanding of common sense as a negotiated process as opposed to a huge database of facts, rules or schemata.” (Warren Sack)

So even within the field of AI itself there is little agreement as to how the mind works, or how body and mind are parts of a more complex, holistic process which may not have a finite systemic character. The desire however to create the technology for artificial intelligence is rooted in generalized views of human intelligence, generalizations which don’t pivot on culturally specific questions of ethnicity, class or gender. The assumption that the creation of technology is not constrained by the boundaries of cultural difference is a major problem since it proposes a neutral register for the user as well. I must stress that these problems are endemic to discussions of the history of technology. Part of the reason for this is that machines are viewed not so much as mediators, but as tools — not as integral parts of human experience, but as artifacts whose status as objects enframes their potential use.

Computers, though, play a role in their use. They are not simply instruments, because so much has in fact been done to them in order to provide them with the power to play their role. What we more likely have here are hybrids, a term coined by Bruno Latour to describe the complexity of interaction and use that is generated by machine-human relationships.

Another way of understanding this debate is to dig even more deeply into our assumptions about computer programming. I will briefly deal with this area before moving on to an explanation of why these arguments are crucial for educators as well as artists and for the creators and users of technology.

Generally, we think of computer programs as codes with rules that produce certain results and practices. Thus, the word processing program I am presently using has been built to ensure that I can use it to create sentences and paragraphs, in other words to write. The program has a wide array of functions that can recognize errors of spelling and grammar, create lists and draw objects. But we do have to ask ourselves whether the program was designed to have an impact on my writing style. Programmers would claim that they have simply coded in as many of the characteristics of grammar as they could without overwhelming the functioning of the program itself. They would also claim that the program does not set limits to the infinite number of sentences that can be created by writers.

However, the situation is more complex than this and is also subject to many more constraints than initially seems to be the case. For example, we have to draw distinctions between programs and what Brian Cantwell Smith describes as the “process or computation to which that program gives rise upon being executed and [the] often external domain or subject matter that the computation is about” (Smith, On the Origin of Objects, Cambridge: MIT Press, 1998: 33). The key point here is that program and process are not static but dynamic, if not contingent. Thus we can describe the word processor as part of a continuum leading from computation to language to expression to communication to interpretation. Even this does not address the complexity of relations among all of these processes and the various levels of meaning within each.
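Smith's three-way distinction can be made concrete in a few lines of code. This is my own illustrative sketch, not Smith's: the names are invented, and the example simply shows that a fixed program text gives rise to a different process each time it runs, while what the computation is about (the passage whose words are counted) lies outside the machine altogether.

```python
# An illustrative sketch of Brian Cantwell Smith's distinction between
# program, process, and subject matter. The program below is a static
# string of text; each execution gives rise to a distinct process; and
# the subject matter (some passage of writing) is external to both.

PROGRAM = "result = len(text.split())"  # static: the program text never changes

def run_word_count(text):
    """Run the fixed program against one input, producing one process."""
    env = {"text": text}
    exec(PROGRAM, env)  # execution, not the text itself, does the work
    return env["result"]

print(run_word_count("the medium is the message"))  # 5
print(run_word_count("to be or not to be"))         # 6
```

The same unchanging line of program text yields different results on different occasions, which is one modest way of seeing why program and process are "not static but dynamic, if not contingent."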

To be continued........

 

Monday
Jun192006

The context for learning, education and the arts (3)

This Entry is in Five Parts. (One, Two, Three, Four, Five)

This initial creativity was soon lost in the final version of “Understanding Media,” published in 1964. In the book, the medium becomes the message through the operations of an instantaneous sensory recognition of meaning. McLuhan explores affect by claiming that cubism, in its elimination of point of view, generated an “instant total awareness [and in so doing] announced that the medium is the message” (Marshall McLuhan, Understanding Media, Cambridge: MIT Press, 1994, p. 13). I am not sure what ‘instant total awareness’ is, but one can surmise that it lies somewhere between recognition and self-reflexive thought. In choosing this rather haphazard approach, McLuhan eliminates all of the mediators that make any form of communication work.

Take the World Wide Web as an example. Few users of the web are aware of the various hubs and routers that move data around at high speed, let alone of the complexity of the servers that route that data into their home or business computers. They become aware of the mediators when there is a breakdown, or when the system gums up. The notion that we receive information instantly is tied up with the elimination of mediation. So, the arrival in my home of a television image from another part of the world seems instant, but is largely the result of a process in which radically different versions of time and space have played significant roles (the motion and position of the satellite, transmitting stations, microwave towers and so on). I won’t belabour this other than to note that the notion of instant recognition has played a significant role in the ways in which our culture has understood digital communications. This has tended to reduce if not eliminate the many different facets of the creative and technological process.
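The chain of hidden mediators can be sketched as a toy model. All of the stages and delay figures below are hypothetical, chosen only to illustrate the argument: the "instant" television image is really the accumulated work of many mediators, each contributing its own delay.

```python
# A toy model of the mediators behind an apparently "instant" broadcast.
# Every stage name and figure here is invented for illustration; the
# point is only that the viewer perceives a single sum, never the chain
# that produced it.

MEDIATORS = [
    ("camera encoding",     0.040),  # seconds
    ("uplink to satellite", 0.120),
    ("satellite transit",   0.240),
    ("downlink station",    0.120),
    ("regional router",     0.015),
    ("home receiver",       0.030),
]

def total_delay(chain):
    """The perceived 'instant' is the sum of every mediator's delay."""
    return sum(delay for _, delay in chain)

print(f"perceived-as-instant delay: {total_delay(MEDIATORS):.3f} seconds")
```

As the paragraph notes, the mediators only become visible when one of them breaks down and the sum suddenly grows.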

But let’s return to the more interesting and potentially creative idea that the subject is the message (mentioned in an earlier post). As the sense-ratios alter, the sum-total of effects engenders a subject surrounded by and encapsulated within an electronic world, a subject who effectively becomes that world (and here the resonance with Jean Baudrillard is clear). This is not simply the movement from machine to human; it is the integration of machine and humans where neither becomes the victim of the other. As mediums we move meanings and messages around in a variety of creative ways (hence the link to speech), and as humans interacting with machines we are the medium within which this process and processing circulates. I repeat, this does not mean that we have become the machine, a concept that has inspired a great deal of criticism of technology in general; rather, we end up sharing a common ground with our own creations, a mediated environment which we explore every day as we try to make sense of the information we receive.

Interestingly, Derrick De Kerckhove, the Director of the McLuhan Centre at the University of Toronto, who has been described as the successor to McLuhan, wrote a book entitled The Skin of Culture: Investigating the New Electronic Reality (Kogan Page, London: 1998). He said:

“With television and computers we have moved information processing from within our brains to screens in front of, rather than behind, our eyes. Video technologies relate not only to our brain, but to our whole nervous system and our senses, creating conditions for a new psychology.” (De Kerckhove: 5)

To De Kerckhove, human beings have become messages (and this is different from being mediums), with our brains emulating the processing logic and structural constraints of computers. Here we do become the machine. We no longer signify as an act of will. Agency is merely a function of messaging systems. Agency no longer recognizes its role as a medium, and as a result we seek and are gratified by the instantaneous, the immediate, the unmediated. Now, the ramifications of this approach are broad and need extensive thought and clarification.

The important point here is that De Kerckhove has molded the human body into an extension of the computer, because we are already, to some degree, machines. Our nervous systems, which scientists barely understand, and our senses, which for neuroscientists remain one of the wonders of nature, are suddenly characterized through the metaphors of screens, vision, technology and a new psychology. The inevitable result is mechanical metaphors that make it seem as if science, computer science and biotechnology will eventually solve the ambiguous conundrums of perception (e.g., in the virtual world we become what we see), knowledge and learning. To say that we are the machine is a far cry from understanding the hybrid processes that encourage machine-human interactions. De Kerckhove has transformed the terrain here much as McLuhan did, so that humans lose their autonomy and their ability to act upon the world, although his is a far more sophisticated examination than McLuhan's.

As I said, this is not an article about McLuhan and so I will not explore the report that he wrote any further or the vast literature that has grown up around his thinking. As you can no doubt tell, I am concerned with the rather mechanical view that our culture has of the human mind and am fascinated with the ease with which we have taken on McLuhan’s simplified versions of affect and effect. It is not so much the behavioural bias that concerns me (although it is important to be aware of the influence of behaviourism on the cultural analysis of technology) but the equations that are drawn among experience, images and technology.

These equations often reduce the creative engagement of humans with culture and technology to the point where culture and technology become one, eliminating the possibility of contestation. In large measure, many of the complaints about digital technologies, the fears of being overwhelmed if not replaced, are the result of not recognizing the potential to recreate the products of technological innovation. The best example of this is the way video games have evolved from rudimentary forms of storytelling to complex narratives, driven by the increasing ease with which the games are mastered by players. The sophistication of the players has transformed the technology. But none of this would have been possible without the ability of the technology to grow and change in response to the rather unpredictable choices made by humans.

If we turn to the computer for a moment, the notion that it has the power to affect human cognition is rooted in debates and theories developed within the fields of cybernetics and artificial intelligence. The “…popular press began to call computers ‘electronic brains’ and their internal parts and functions were given anthropomorphic names (e.g., computer memory)…” (Warren Sack, “Artificial Intelligence and Aesthetics,” p. 3)

The notion that a computer has memory has taken root in such a powerful way that it seems impossible to talk about computers without reference to memory. An interesting circle, or perhaps a tautology, has formed. Computer memory becomes a standard which we use to judge memory in general, hence the fears about Deep Blue somehow replacing the human mind, even though its programming was created by humans! The problem is that there is a long tradition of human creativity in the development of technologies, and this history is embedded in every aspect of our daily lives. Deep Blue is just one more extension of the process. The fact that we can use the computer to judge our own memories certainly doesn’t eliminate anything. It merely means that we now have a tool that we can use to examine what we actually mean by memory. In fact, recent neuroscientific research into memory suggests that we have profoundly underestimated our own minds, let alone the digital ones that we are creating.
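The gap between the metaphor and the thing can be shown in a deliberately crude contrast (my own illustration, not a scientific model): computer memory is exact, addressed retrieval, which is precisely the property the metaphor quietly imports when machine memory becomes the standard for judging human memory.

```python
# Computer "memory" is addressed storage: what was written to an address
# comes back unchanged, every single time. Human memory, by contrast, is
# widely described by neuroscientists as reconstructive rather than exact.

memory = {}  # addresses mapped to stored values
memory[0x10] = "Deep Blue beat Kasparov"

# Retrieval is perfect and repeatable; nothing is reconstructed or revised.
for _ in range(3):
    assert memory[0x10] == "Deep Blue beat Kasparov"

print(memory[0x10])
```

Judging human remembering against this standard of lossless retrieval is what makes the comparison a tautology rather than a discovery.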

“The very idea of a computer program is linked to the power to do.” (Sack: 5) Again, there are certain debates that cannot be developed here, including the significant one between Daniel Dennett and John Searle, a debate explored by Steven Pinker in his book How the Mind Works. Pinker is a supporter of cognitive psychology and also suggests that the brain operates like a computer. His argument is more subtle than that, however, because he is quite worried about creating too great an equivalence between the brain and the mechanics of the computer. I bring this up because it is the cultural attraction of the metaphors which interests me. It is important to understand that computer programs are carefully constructed artificial languages that have great difficulty dealing with the unpredictable, with the tentative, the contingent or the irrational. Computer programs are codified according to a strict set of rules, and I think that we can make the argument that common sense is not. I will briefly return to this discussion later on.
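The rigidity described here is easy to demonstrate. In the hypothetical snippet below (the function name is mine), a strict parsing rule accepts only well-formed numerals; common sense happily reads "forty-two", but the artificial language has no such fallback.

```python
# A strict, codified rule: int() accepts only well-formed numerals.
# There is no "common sense" pathway for irregular but perfectly
# understandable input.

def parse_count(s):
    """Return the integer a string denotes, or None if the rules reject it."""
    try:
        return int(s)
    except ValueError:
        return None  # the program cannot negotiate; it can only refuse

print(parse_count("42"))         # 42
print(parse_count("forty-two"))  # None
```

A human reader negotiates the meaning of "forty-two" effortlessly; the program, bound to its rules, can only succeed or refuse.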

To be continued......

Sunday
Jun182006

The context for learning, education and the arts (2)

This Entry is in Five Parts. (One, Two, Three, Four, Five)

Let me begin by quoting the head of IBM, Lou Gerstner in reference to Deep Blue, the computer developed to play chess at the grandmaster level:

“Deep Blue is emblematic of a whole class of emerging computer systems that combine ultrafast processing with analytical software. Today we’re applying these systems to challenges far more vital than chess. They are used for example in simulation — replacing physical things with digital things, re-creating reality inside powerful computer systems.” (“Think Leadership,” Vol. 3, No. 1, 1998: 2)

Now, what is important here is not only the references to Deep Blue and very fast computer systems, but the assumption that the replacement of physical things with digital things re-creates reality inside computer systems and, by extension, in reality itself. This may well be true and may well be happening, but we need to examine the implications of the claim and locate it within a cultural, social and economic analysis. And we need to become quite clear about the meaning of the term simulation, which is used most often to refer to an artificial environment that either replaces the real or, in Jean Baudrillard’s words, becomes the real. Simulation as I will use it refers to the creation of artifacts, their use and their integration, as well as their co-optation, into an increasingly digital culture.

“And soon we’ll see this hyper-extended networked world made up of a trillion interconnected, intelligent devices — intersecting with data-mining capability. Pervasive Computing meets Deep Computing.” (Gerstner: 3)

I will return to the implications of this quote through a variety of different routes. Historically, the advent of new technologies in the 20th century has generally been paralleled by claims of social effect and cultural transformation, and these are synoptically represented by the continued influence of Marshall McLuhan on present thinking about technology and its effects. I will not examine McLuhan’s ideas in great detail; suffice it to say that many of the assumptions guiding his cultural appropriation by a variety of writers, commentators and politicians do not stand up to rigorous scrutiny. For example, McLuhan’s famous statement that “The Medium is the Message” grew out of a report that he wrote in 1959-60 for the Office of Education, United States Department of Health, Education and Welfare. It was entitled “Report on Project in Understanding New Media.” In it McLuhan analyses media such as television using the tools of cognitive psychology, management theory and economics. For McLuhan, media include speech, writing, photography, radio, etc. And he is puzzled by why the effects of these media have been overlooked for, as he puts it, “…3500 years of the Western world” (McLuhan, 1960: 1).

McLuhan searches for an explanation, and much of the research for the project is prescient and fascinating, as well as a precursor to the publication of “Understanding Media” in 1964. When it comes to the famous aphorism about the medium and the message, McLuhan reveals a rather interesting foundation for much of his later research.

“Nothing could be more unrealistic than to suppose that the programming for such media could affect their power to re-pattern the sense-ratios of our beings. It is the ratio among our senses which is violently disturbed by media technology. And any upset in our sense-ratios alters the matrix of thought and concept and value. In what follows, I hope to show how this ratio is altered by various media and why, therefore, the medium is the message or the sum-total of effects. The so-called content of any medium is another medium.” (McLuhan, 1960: 9)

It is clear from this statement that the medium is actually the subject, that it is human beings whose sense-ratios are altered by participating in the experiences made possible through the media. It is not the content of the communication, but the encounter between the medium and subjectivity, that alters or disturbs how we then reflexively analyse our experience. Although “the medium is the message” is generally interpreted in formal terms, and although it has been appropriated as a generalization used to explain the presence of media in every aspect of our lives, McLuhan is here playing with cognitive and psychological research as it was developed in the 1950s. More importantly, at this stage, he is avoiding a binary approach to form/content relations. He is effectively introducing a third element into the discussion, namely, the human body.
