
Entries in Social Networks (34)

Thursday
Aug 3, 2006

Gehry and MIT

[Image: mt-aug3.jpg]

Frank Gehry's building at MIT is a wonder to behold. The building is home to the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Laboratory for Information and Decision Systems (LIDS) and the Department of Linguistics and Philosophy. Its striking design - featuring tilting towers, many-angled walls and whimsical shapes - challenges much of the conventional wisdom of laboratory and campus building.

[Image: mt-august06.jpg]

Tuesday
Jun 20, 2006

The context for learning, education and the arts (4)

(This entry is in five parts: One, Two, Three, Four, Five.)

So why explore the intersections of human thought and computer programming? My tentative answer would be that we have not understood the breadth and depth of the relationships that we develop with machines. Human culture is defined by its on-going struggle with tools and implements, continuously finding ways of improving both the functionality of technology and its potential integration into everyday life. Computer programming may well be one of the most sophisticated artificial languages which our culture has ever constructed, but this does not mean that we have lost control of the process.

The problem is that we don’t recognize the symbiosis, the synergistic entanglement of subjectivity and machine, or if we do, it is through the lens of otherness, as if our culture were neither the progenitor nor really in control of its own inventions. These questions have been explored in great detail by Bruno Latour, and I would reference his articles in “Common Knowledge” as well as his most recent book, Aramis, or the Love of Technology. There are further and even more complex entanglements here related to our views of science and invention, creativity and nature. Suffice it to say that there could be no greater simplification than the one which claims that we have become the machine, or that machines are extensions of our bodies and our identities. The struggle to understand identity involves all aspects of experience, and it is precisely the complexity of that struggle, its very unpredictability, which keeps our culture producing ever more complex technologies and which keeps the questions about technology so much in the forefront of everyday life.

It is useful to know that within the field of artificial intelligence (AI) there are divisions between researchers who are trying to build large databases of “common sense” in an effort to create programming that will anticipate human action, behaviour and responses to a variety of complex situations, and researchers who are known as computational phenomenologists. “Pivotal to the computational phenomenologists’ position has been their understanding of common sense as a negotiated process as opposed to a huge database of facts, rules or schemata.” (Warren Sack)

So even within the field of AI itself there is little agreement as to how the mind works, or how body and mind are parts of a more complex, holistic process which may not have a finite systemic character. The desire however to create the technology for artificial intelligence is rooted in generalized views of human intelligence, generalizations which don’t pivot on culturally specific questions of ethnicity, class or gender. The assumption that the creation of technology is not constrained by the boundaries of cultural difference is a major problem since it proposes a neutral register for the user as well. I must stress that these problems are endemic to discussions of the history of technology. Part of the reason for this is that machines are viewed not so much as mediators, but as tools — not as integral parts of human experience, but as artifacts whose status as objects enframes their potential use.

Computers, though, play an active role in their own use. They are not simply instruments, because so much has in fact been done to them in order to give them the power to play that role. What we more likely have here are hybrids, a term coined by Bruno Latour to describe the complexity of interaction and use that is generated by machine-human relationships.

Another way of understanding this debate is to dig even more deeply into our assumptions about computer programming. I will briefly deal with this area before moving on to an explanation of why these arguments are crucial for educators as well as artists and for the creators and users of technology.

Generally, we think of computer programs as codes with rules that produce certain results and practices. Thus, the word processing program I am presently using has been built to ensure that I can use it to create sentences and paragraphs; in other words, to write. The program has a wide array of functions that can recognize errors of spelling and grammar, create lists and draw objects. But we do have to ask ourselves whether the program was designed to have an impact on my writing style. Programmers would claim that they have simply coded in as many of the characteristics of grammar as they could without overwhelming the functioning of the program itself. They would also claim that the program does not set limits on the infinite number of sentences that can be created by writers.

However, the situation is more complex than this and is also subject to many more constraints than initially seems to be the case. For example, we have to draw distinctions between programs and what Brian Cantwell Smith describes as the “process or computation to which that program gives rise upon being executed and [the] often external domain or subject matter that the computation is about.” (Smith, On the Origin of Objects, Cambridge: MIT Press, 1998: 33) The key point here is that program and process are not static but dynamic, if not contingent. Thus we can describe the word processor as part of a continuum leading from computation to language to expression to communication to interpretation. Even this does not address the complexity of relations among all of these processes and the various levels of meaning within each.
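Smith’s three-way distinction can be made concrete in a few lines of code. The sketch below is purely illustrative (the example function and inputs are my own, not Smith’s): the program is the static text, the process is what that text gives rise to when executed, and the subject matter lies outside both.

```python
# Illustrative sketch of Brian Cantwell Smith's distinction between
# (1) the program: a static piece of text,
# (2) the process: the computation the text gives rise to when executed,
# (3) the subject matter: the external domain the computation is about.

# (1) The program is just inert text until it is run.
program = """
def count_words(text):
    return len(text.split())
"""

# (2) Executing the text gives rise to a process: a live function object.
namespace = {}
exec(program, namespace)
count_words = namespace["count_words"]

# (3) The subject matter -- the sentences being counted -- is not part of
# the program at all; the same program yields different computations
# depending on the external domain it is applied to.
print(count_words("the medium is the message"))  # 5
print(count_words("to be continued"))            # 3
```

The point the code makes is the one in the paragraph: nothing about the program text fixes what the resulting process will do, because the domain it operates on remains external and contingent.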

To be continued...


Friday
Jun 16, 2006

Remix 06: Blending, Bending and Befriending Content

Innovative Content Development in New Media has some of the following characteristics (this is by no means a comprehensive list):

  • Imaginative storytelling (breaking the rules and building new ones)
  • Not derivative, but can be a copy or mash-up (experimental cinema and music as models)
  • Aware of aesthetics, form and feel (use of technology, not used by technology)
  • Creating new knowledge and information (play in every sense of the word)
  • Aware of collage, montage and other techniques of bricolage (stories can make the impossible real; photo-realism is a dead end)
  • Talent (learning, education and research)
  • Decentralized modes of information gathering, exchange and distribution (open source)
  • Interactivity (video games create the illusion of interactivity; interactive game play should be about a complete transformation of the game by the player; interactivity becomes creativity)
  • Bring body movement into the video game storytelling equation (hands are not enough; Wii)
  • Link popular culture, games, books, magazines, fans, television and the web into content development (specialized studios need cultural analysts and ethnographers as much as they need creators)
  • Work with audiences, not against them (fan movements, fansites, fan literature)
  • Assume that trends will shift as quickly as they are recognized; old-style marketing will not work (time is compressed, but that does not mean that clip stories will last; marketing becomes discovering stories as well as creating them)
  • Non-linearity, complexity and chaos are at the center of digital content creation
  • Simulations are only as effective as the stories that underlie them (algorithms are culture)
  • Telepresence and visualization need haptics, and vice versa (dreams are the royal road into storytelling)
  • Narrowcast, not broadcast (P2P will become C2C)

Sunday
Jun 11, 2006

Geographies of Dissent (Final)

Another vantage point on this process is to think of various communities, which share common goals becoming nodes on a network that over time ends up creating and often sustaining a super-network of people pursuing political change. Their overall impact remains rather difficult to understand and assess, not because these nodes are in any way ineffective, but because they cannot be evaluated in isolation from each other.

This notion of networks may allow us to think about communities in a different way. It is, as we know, possible at one and the same time for the impulses that guide communities to be progressive and very conservative. There is nothing inherently positive, with respect to politics, about communities based on shared points of view. But if the process is often more important than the content, then this raises other issues. The intersection of connectivity and ideas leads to unpredictable outcomes. Take fan clubs, for example. They generally centre on particular stars, films or television shows. They are a form of popular participation in mainstream media and a way of affecting not so much the content of what is produced (although that is happening more and more; Star Trek has continued as a series on the Net) but the relationship of private and public discourse about media products and their impact. Over time, through accretion and sheer persistence, fan clubs have become very influential. They are nodes on a network connected through shared interests, one of which is to mold the media into a reflection of their concerns.

More often than not, this network of connections is presumed to be of greater importance than the content of what is exchanged. This is classically what Baudrillard meant by the world becoming virtual, and what McLuhan meant when he claimed that the medium was the message. Except that both are wrong.

The process of exchange, that is, the many different ways in which people on shared networks work and play together, cannot be analyzed from a behavioral perspective. Take Flickr, for example. There is nothing very complicated about this software. It was developed by two Vancouverites and then bought for 30 million dollars by Yahoo. The software is simple: it allows users to annotate photographs that they have posted to the web site. The annotations become an index, and that index is searchable by everyone. The reason Yahoo paid so much is that over 80 million photographs had been uploaded and there were hundreds of communities of interest exchanging images with each other. Most of this is completely decentralized; the web site just hosts the process of community building.
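The mechanism described above really is that simple, which is part of the point. The sketch below is an assumed, minimal reconstruction (my own names and data, not Flickr’s actual code): users annotate photos, the annotations accumulate into a shared index, and that index is searchable by everyone.

```python
# Minimal sketch (assumed, not Flickr's implementation) of the mechanism
# the paragraph describes: user annotations become a shared, searchable index.

from collections import defaultdict

class PhotoIndex:
    def __init__(self):
        # tag -> set of photo ids; built entirely from user annotations
        self._by_tag = defaultdict(set)

    def annotate(self, photo_id, *tags):
        # Each user-supplied annotation is folded into the shared index.
        for tag in tags:
            self._by_tag[tag.lower()].add(photo_id)

    def search(self, tag):
        # The index is searchable by everyone, not just the annotator.
        return sorted(self._by_tag[tag.lower()])

# Hypothetical usage: two users, two photos, overlapping interests.
index = PhotoIndex()
index.annotate("p1", "Vancouver", "protest")
index.annotate("p2", "vancouver", "harbour")
print(index.search("vancouver"))  # ['p1', 'p2']
```

Note that the site itself contributes almost nothing here: the structure emerges from the aggregate of decentralized annotations, which is exactly why the communities, rather than the software, are what was bought.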

The same elements attracted the News Corporation to MySpace.com and Rupert Murdoch paid over three hundred million dollars for that site or should I say community. Communities become currencies because there are so few ways to organize and understand all of the diversity that is being created within the context of modern-day networks. This is not because the medium is the message; rather, it is because the media are inherently social — social media. And in being social, they reshape modes of human organization and most importantly, the many different ways in which collectivities can form and reform.

(Please note: The last three entries, Geographies of Dissent, were presented in a different format at York University, at a conference of the same name.)


Saturday
Jun 10, 2006

Geographies of Dissent (2)

There is another term that I would like to introduce into this discussion, and that is counter-publics. Daniel Brouwer, in a recent issue of Critical Studies in Media Communications, uses the term to describe the impact of two “zines” on public discussion of HIV-AIDS. The term resonates for me because it has the potential to bring micro and macro into a relationship that could best be defined as a continuum, and suggests that one needs to identify how various publics can contain within themselves a continuing, often conflicted and sometimes very varied set of analyses and discourses about central issues of concern to everyone. It was the availability of copy machines beginning in 1974 that really made zines possible. There had been earlier versions, most of which were copied by hand or by using typewriters, but copy machines made it easy to produce 200 or 300 copies of a zine at very low cost. In the process, a micro-community of readers was established for an infinite number of zines. In fact, the first zine convention in Chicago in the 1970s attracted thousands of participants. The zines that Brouwer discusses, small to begin with, grew over time to five and ten thousand subscribers. This is viral publishing at its best, but it also suggests something about how various common sets of interests manifest themselves and how communities form in response.

“One estimate reckons that these ‘Xeroxed, hand-written, desktop-published, sometimes printed, and even electronic’ documents (as the 1995 zine convention in Hawaii puts it) have produced some 20,000 titles in the past couple of decades. And this ‘cottage’ industry is thought to be still growing at twenty percent per year. Consequently, as never before, scattered groups of people unknown to one another, rarely living in contiguous areas, and sometimes never seeing another member, have nonetheless been able to form robust social worlds” (John Seely Brown and Paul Duguid, The Social Life of Documents). Clearly, zines represent counter-publics that are political and are inheritors of 19th-century forms of poster communications and the use of public speakers to bring countervailing ideas to large groups. Another way of thinking about this area is to look at the language used by many zines. Generally, their mode of address is direct. The language tends to be both declarative and personal. The result is that the zines feel like they are part of the community they are talking to and become an open ‘place’ of exchange with unpredictable results. I will return to this part of the discussion in a moment, but it should be obvious that zines were the precursors to blogs.

As I said, the overall aggregation of various forms of protest, using a variety of different media in a large number of varied contexts, generates outcomes that are not necessarily the product of any centralized planning. This means that it is also difficult to gauge the results. Did the active use of cell phones during the demonstrations in Seattle against the WTO contribute to greater levels of organization and preparedness on the part of the protestors, and therefore to the message they were communicating? Mobile technologies were also used to “broadcast” back to a central source that then sent out news releases to counter the mainstream media and their depiction of the protests and protestors. This proved to be minimally effective in the broader social sense, but very effective when it came to maintaining and sustaining the communities that had developed in opposition to the WTO and globalization. Inadvertently, the mainstream media allowed the images of protest to appear in any form because they were hungry for information and needed to make sense of what was going on. As with many other protests in public spaces, it is not always possible for the mainstream media to control what they depict. Ultimately, the most important outcome of the demonstrations was symbolic, which in our society added real value to the message of the protestors.

To be continued...