
Entries in New Media (49)

Thursday, Dec 20, 2007

The Practice of Interdisciplinarity in Design and New Media

Keywords: Inclusive Design, New Media

This essay examines the history of a multi-disciplinary Centre for Design and New Media developed over a period of three years in Vancouver, Canada. I explore the challenges of developing research models that make it possible for a variety of investigators and practitioners in the areas of Design and New Media to link their work to that of engineers and computer scientists.

In 2000, the New Media Innovation Centre (NewMic) was started in Vancouver, Canada under the aegis and with the support of five post-secondary academic institutions, industry and the federal and provincial governments. Approximately nineteen million dollars was invested at the outset, mostly from industry and government. I was one of the leaders in the planning and development of NewMic, in large measure because I have a long history of involvement in teaching and researching, as well as producing, new media. (The industry members included Electronic Arts, IBM, Nortel Networks, Sierra Wireless, Telus and Xerox PARC.)

One of the foundational goals of NewMic was to bring engineers, computer scientists, social scientists, artists, designers and industry together, in order to create an interdisciplinary mix of expertise from a variety of areas. The premise was that this group would engage in innovative research to produce inclusive new media designs for a variety of products, network tools and multimedia applications. The second premise was that the research would produce outcomes that could be implemented and commercialized in order to create added value for all of the partners.

I spent a year at NewMic as a designer/artist in residence in 2002 and also sat on its Board of Governors from 2000 until it was closed down late in 2003. Several features of the history of this short-lived institution are important markers of the challenges and obstacles facing any interdisciplinary dialogue that brings artists and designers together with engineers and computer scientists. Among the challenges are:


  • The tendency among engineers, designers and computer scientists to have an unproblematic relationship to knowledge and knowledge production;
  • Lack of clarity as to the meaning, impact and social role of inclusive and new media design products;
  • Profound misunderstanding of the relationship between inclusivity, user needs and technological innovation;
  • Conflicting cultures and discourses;
  • An uninformed and generally superficial understanding of the differences between the cognitive sciences and ethnographic explorations of human-computer interaction;
  • Focus on a false distinction between pure and applied research.

Underlying some of these challenges was an apprehension that without interdisciplinarity, it would be impossible to be innovative. The artists and designers from Emily Carr Institute who participated in NewMic and whose concerns were centred on community, creativity, outreach, inclusivity and the ethical implications and effects of new technologies, found themselves in a difficult and demanding position.

The Culture of Collaboration, Design and Interdisciplinarity

Diana Forsythe, in her superb book Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence, says the following:

1. To knowledge engineers, knowledge is an either/or proposition: it seems either present or absent, right or wrong. Knowledge thus seems to be conceived of as an absolute. If you have it, you’re an expert; if you lack it, you’re a novice.
2. Knowledge engineers seem to conceive of reasoning as a matter of following formal rules. In contrast, social scientists—especially anthropologists—tend to think of it in terms of meaning and to note that the logic by which people reason may differ according to social and cultural context.
3. Knowledge engineers tend to assume that knowledge is conscious, that is, that experts can tell you what they know if only they will. They do not have systematic procedures for exploring tacit knowledge, nor do they seem aware of the inevitably partial nature of the retrospective reporting conventionally used for knowledge elicitation. (Forsythe, 52)

These three points are central to understanding the culture of collaboration that needs to be built when researchers from diverse disciplines in the arts, engineering and the computer sciences decide to work cooperatively. One of the challenges in any collaboration is developing a model of how different cultures and discourses can arrive at a best-practices approach to understanding each other. It is not just an issue of people speaking and thinking differently, or of having different research paradigms (although both of those issues must be dealt with if any collaboration in this area is to succeed); it is also crucial to explore expectations, needs and what each discipline means by outcomes.

For example, the area of Inclusive Design is about ensuring that environments, products, services and interfaces work for people of all ages and abilities. The differences and similarities between applied and pure research need to be kept in mind on an almost continual basis. (Pure research is long-term and oriented to speculative thinking as an end in itself.) In some instances, an applied approach may not capture all the nuances of a product's potential design and use. An applied strategy may not delve deeply enough into the subtle relationship that people have with the environments they inhabit and the objects they utilize.

The supposed disparity between pure and applied research strategies was one of the areas of greatest conflict at NewMic. Industry members in particular wanted to move from research to end product as quickly as possible. While this may be a necessity in the private sector, it takes more time for researchers from post-secondary institutions and independent labs both to understand the direction they want to pursue and to produce results. This may well be a weakness of the latter group, and it is true that a good deal of the research done by universities produces no measurable outcomes, but this does not change the fact that some of the most important research of the 20th century has come from the post-secondary sector.

The distinction between applied and pure research is, in general, false, since there are many examples of pure research resulting in practical outcomes and applications. One of the best examples is the discovery in 1946 that “certain nuclei act as tiny magnets. Scientists then could scarcely have imagined the practical applications which would lead to today's multi-billion dollar industry in magnetic resonance medical imaging (MRI), which doctors use to scan the tissues and bones of patients in diagnosing cancerous tumours or hair-line fractures. But the original discovery only provided the opportunity for the applications. To realize these required a great deal of additional sophisticated engineering, applied science and commercial development.” (Harvey Brooks, Harvard University, 2004)

An added complication was NewMic's inclusion of researchers and practitioners with backgrounds in art and design. Artistic research is very much defined by doing, but it is also shaped by the process of play and by the creative ability to capture and realize the importance of chance and serendipity. The outcome of research in the arts is often the work of art itself. Design, on the other hand, defines itself through its close relationship with clients and looks to materiality (even in a digital world) for confirmation and validation.

There are, of course, many examples of successful collaborations, some of which have produced spectacular pay-offs, such as the artists-in-residence program at Xerox's Palo Alto Research Center. (Harris, 1999) In the Palo Alto case the synergies between artists, designers and engineers produced some wonderful results, and many other centres have tried to duplicate the experience. In the private sector, the design company IDEO is an excellent example of how to build a culture of connection and interaction between different disciplines. (Kelley, 2001)

The NewMic collaboration began with two major reference points, Palo Alto and MIT’s Media Lab. Again, this was not unusual. Other projects in Montreal, Melbourne, Dublin and Germany referred to and attempted to reflect the successes of MIT and Xerox. In the beginning the mandate of NewMic was described as follows:

To accomplish its mission, NewMIC was focused on the following objectives:


  • Attracting and retaining outstanding faculty and graduate and undergraduate students in new media research and in art and design areas.

  • Building excellence in new media innovation.

  • Developing better industry-university-institute collaboration for the purposes of technology transfer.

  • Encouraging the transfer and commercialization of technology through incubation support.

  • Attracting more venture capital to the new media industry. (March 2001)

The design component was incorporated into the vision by default, under the rubric of New Media. This proved to be an error, because so much of New Media is driven by interface design, product design and inclusive design, as well as 'old media' goals. Ultimately, the goal was to frame the experience of users of New Media within a product-oriented set of research pursuits. Ironically, many of the lessons that designers have learned over the last two decades were not directly applied to the research in New Media at NewMic: the importance of detailed ethnographic inquiry, the need to think about the relationship between product and user, the flexibility necessary to make interfaces work for many diverse constituents, and the knowledge that design is really about people and that inclusivity cannot be attained without understanding how people live.

The emphasis on innovation, technology transfer and commercialization, although necessary, cannot be accomplished in a context that is entirely oriented towards applied research with short timelines. This is a conundrum because it is completely understandable that industry would want to see some results from their investment, but the essence of collaboration is that it takes time. In fact, one of the crucial lessons of the NewMic experience is that developing designs that are environmentally sensitive and inclusive requires not only that people from different disciplines participate, but that time be given over to the development of shared communities of interest. Interdisciplinarity is as much about a coming together as it is about recognizing differences.

Diana Forsythe, in her own words:

"Anthropologists have been using ethnographic methods since the 1970s to support the design and evaluation of software. While early use of such skills in the design world was viewed as experimental, at least by computer scientists and engineers, ethnography has now become established as a useful skill in technology design. Not only are corporations and research laboratories employing anthropologists to take part in the development process, but growing numbers of non-anthropologists are attempting to borrow ethnographic techniques. The results of this appropriation have brought out into the open a kind of paradox: while ethnography looks and sounds straightforward, this is not really the case. The work of untrained ethnographers tends to overlook things that anthropologists see as important parts of the research process. The consistency of this pattern suggests that some aspects of ethnographic fieldwork are invisible to the untrained eye. In short, ethnography would appear to constitute an example of invisible work."

Sunday, Feb 4, 2007

Second Life (2)

I posted an earlier piece on Second Life that talked about cyberspace and the metaphoric power of alternate "realities" within the context of communications networks. Here is what Henry Jenkins, Professor of Comparative Media Studies at MIT, said on his blog: "Some have dismissed SL as a costume party -- I see it more as carnival in the medieval sense of the term -- as a time and place within which normal rules of interactions are suspended, roles can be swapped or transformed, hierarchies can be reordered, and we can step out of normal reality into a "magic circle" or "green world" which can be highly generative for the imagination. The difference is that in the old days, carnival was something that existed for a very short period of time and people planned for it all year. Now, in the era of SL, carnival exists all the day and people have to decide how much time they want to spend there."

An example: I was 'skating' in SL when another skater approached me and asked why I was skating in 'flippers.' I responded, somewhat incredulously, that it didn't matter to me, and he or she replied that I was breaking the rules of SL.

I would argue that the carnivalesque quality of SL is still surrounded by 'acceptable' notions and norms of reality or first life. In fact, the sense one gets from SL is often rather banal, as the physics of place, architecture and design are all set up to reflect conventional expectations of what should or must happen when people walk, talk or simply look at objects in the multiverse. A true carnival would push the boundaries of acceptable behaviour on a continual basis. From time to time that happens in SL (flying penises, for example), but for the most part the challenge seems to be to have a reasonable experience that fits into preexisting conceptions of reality, its limitations as well as its potential.

There are clouds that you can enter, your avatar can fly, and there are designs that defy convention, but for the most part SL tries to imitate the 3D world rather than reinterpreting its premises. This may be too much to ask. Clay Shirky has written an excellent critique that focuses on many of the grand assumptions about role and use in SL.

In contrast, Beth Coleman talks about the potential of SL and makes some important points about user-generated content. Yes, it is true that the content of SL has been created by users, but the constraints on choice, style and design are considerable. Perhaps there is a middle ground here between SL's aesthetic and orientation and the overall potential of new environments created by interested people and communities.

My sense is that more is happening in the Machinima world where game engines are being used to create some very interesting films. Check out this one. The difference between Machinima and SL is that the former requires some real development of story lines and technology use. Notwithstanding all of this, I am still interested in exploring more of SL. After all, we are in the early phases of multiverse creation.

There is an interview with the chairman of Linden Lab, the owner of Second Life, at the Reuters Second Life Newsroom.

Sunday, Jul 9, 2006

Jaron Lanier and The Hazards of Online Collectivism

Jaron Lanier, who is famous for having coined the term virtual reality and for the concepts that go with it, wrote an essay in late May that has provoked discussion all over the internet. Here is a quote from the piece; the complete article can be found at the EDGE website. The essay is entitled "Digital Maoism: The Hazards of the New Online Collectivism."

The problem I am concerned with here is not the Wikipedia in itself. It's been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. At the very least it's a success at revealing what the online people with the most determination and time on their hands are thinking, and that's actually interesting information.

No, the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.

The EDGE also has 28 pages of responses to what Lanier says.

The essence of his argument is that collaborative work on the net has become increasingly hive-like. This leads to a "group mentality" approach to ideas and the notion that the "collective is all-wise." The result is a tyranny of the majority, with a simultaneous loss of value both to intellectual depth and to the way democracies operate. He is particularly critical of Wikipedia, the online encyclopedia being built by individuals from all over the world in much the same manner as open source software. I have commented on Wikipedia before. Some of Lanier's fears are well-founded, but for the most part his comments don't explain or clarify why networked forms of knowledge construction are any more hive-based than most intellectual projects. Generally, irrespective of the type of knowledge or information produced, there are communities of interest that define and reinforce the concepts, categories and arguments that they support. This has been discussed in great depth by people like Bruno Latour, and Elias Canetti's important Crowds and Power (1962) examined the phenomenon of mass hysteria and the tendency toward a kind of viral effect when large groups of people operate in tandem.

Lanier's points need discussion, not least because networked forms of interaction on the scale we are seeing at the moment are still very new. That said, there is not much to his analysis of conventional media. He is too skeptical of popular culture and gives too much weight to the role of sites like Wikipedia. His concern about the aggregative role played by the many sites that are about sites is overstated. He worries that these meta-sites will play an overly powerful role as arbiters of taste and choice; I think in this he underestimates the intelligence of Internet users. Nonetheless, it is an important article to read.

Saturday, Jun 10, 2006

Geographies of Dissent (2)

There is another term that I would like to introduce into this discussion: counter-publics. Daniel Brouwer, in a recent issue of Critical Studies in Media Communication, uses the term to describe the impact of two "zines" on public discussion of HIV-AIDS. The term resonates for me because it has the potential to bring micro and macro into a relationship best described as a continuum, and it suggests that one needs to identify how various publics can contain within themselves a continuing, often conflicted and sometimes very varied set of analyses and discourses about central issues of concern to everyone. It was the availability of copy machines beginning in 1974 that really made zines possible. There had been earlier versions, most of which were copied by hand or with typewriters, but copy machines made it easy to produce 200 or 300 copies of a zine at very low cost. In the process, a micro-community of readers was established for an almost infinite number of zines. In fact, the first zine convention in Chicago in the 1970s attracted thousands of participants. The zines that Brouwer discusses, small to begin with, grew over time to five and ten thousand subscribers. This is viral publishing at its best, but it also suggests something about how various common sets of interests manifest themselves and how communities form in response.

“One estimate reckons that these ‘Xeroxed, hand-written, desktop-published, sometimes printed, and even electronic’ documents (as the 1995 zine convention in Hawaii put it) have produced some 20,000 titles in the past couple of decades. And this ‘cottage’ industry is thought to be still growing at twenty percent per year. Consequently, as never before, scattered groups of people unknown to one another, rarely living in contiguous areas, and sometimes never seeing another member, have nonetheless been able to form robust social worlds” (John Seely Brown and Paul Duguid, The Social Life of Documents). Clearly, zines represent counter-publics that are political and are the inheritors of 19th-century poster communications and the use of public speakers to bring countervailing ideas to large groups. Another way of thinking about this area is to look at the language used by many zines. Generally, their mode of address is direct. The language tends to be both declarative and personal. The result is that zines feel like part of the community they are talking to and become an open ‘place’ of exchange with unpredictable results. I will return to this part of the discussion in a moment, but it should be obvious that zines were the precursors to blogs.

As I said, the overall aggregation of various forms of protest, using a variety of different media in a large number of varied contexts, generates outcomes that are not necessarily the product of any centralized planning. This means that it is also difficult to gauge the results. Did the active use of cell phones during the demonstrations in Seattle against the WTO contribute to greater levels of organization and preparedness on the part of the protestors, and therefore to the message they were communicating? Mobile technologies were also used to “broadcast” back to a central source that then sent out news releases to counter the mainstream media and their depiction of the protests and protestors. This proved to be minimally effective in the broader social sense, but very effective when it came to maintaining and sustaining the communities that had developed in opposition to the WTO and globalization. Inadvertently, the mainstream media allowed the images of protest to appear in any form because they were hungry for information and needed to make sense of what was going on. As with many other protests in public spaces, it is not always possible for the mainstream media to control what they depict. Ultimately, the most important outcome of the demonstrations was symbolic, which in our society added real value to the message of the protestors.

To be continued...

Saturday, Jun 3, 2006

Some comments on How Images Think

Professor Pramod Nayar of the Department of English, University of Hyderabad, comments on How Images Think. This is a small selection from a longer review that appeared in the Journal of the American Society for Information Science and Technology.

How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that image refers to the complex set of interactions that constitute everyday life in image-worlds (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication - computers, for instance - suggests that images are interfaces, structuring interaction, people, and the environment they share.

New technologies are not simply extensions of human abilities and needs - they literally enlarge cultural and social preconceptions of the relationship between body and mind.

The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or live and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world - they are the world, and the distinctions between natural and non-natural have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events - for all perceptive, cognitive, and interpretive purposes, the image is the event for us.

The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. As Burnett painstakingly points out, issues of creativity are involved in the process of visualization - viewers generate what they see in the images. This involves the historical moment of viewing - such as viewing images of the WTC bombings - and the act of re-imagining. As Burnett puts it, the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image (p. 26).

