The Extension of Man – Hacking the Body

What is a prosthetic? We have seen artificial limbs for amputees and the Cheetah® foot for track and field athletes. You can have your fingertips replaced if you accidentally slice them off while making dinner. Cancer survivors can have their breasts reconstructed if they’ve lost them in treatment.

But what if you wanted a super-human digit? Or what if you want to shave off minutes from your current marathon time?  Where is that fine line between restoring and enhancing?

As part of the TinkTank labs workshop series, I helped lead participants through a four-hour workshop that highlighted how artists and scientists are currently hacking the body, and that discussed the socio-technical, cultural, and ethical issues we will face as these hacks become widely possible, prevalent, and undetectable.

Hacking the Body Workshop

As part of the workshop we engaged participants in building physical prototypes of their own prosthetic designs using construction materials, microcontrollers, electronics, and Arduino code. We wanted participants to focus on the design and critical theory behind the changing landscape of prosthetics and human augmentation (rather than messing around with and debugging code). To this end, I developed several flexible and reusable snippets of Arduino code that participants could use and combine to get their prosthetics working with minimal coding knowledge (the code can be found here).
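The snippets themselves live in the linked repository, but to give a flavour of the kind of building block participants combined, here is a minimal sketch along those lines (the flex-sensor pin, servo pin, and calibration range are assumptions for illustration, not the workshop's actual code):

```cpp
// Illustrative sketch only -- the workshop's actual snippets live in the linked repo.
// Assumes a flex sensor on analog pin A0 and a hobby servo on digital pin 9.
#include <Servo.h>

Servo fingerServo;            // servo acting as the prosthetic "digit"
const int FLEX_PIN = A0;      // flex sensor wired as a voltage divider

void setup() {
  fingerServo.attach(9);
  Serial.begin(9600);         // handy for watching raw sensor values while tinkering
}

void loop() {
  int raw = analogRead(FLEX_PIN);              // 0-1023 from the sensor
  int angle = map(raw, 300, 700, 0, 180);      // calibration range is a guess -- adjust per sensor
  angle = constrain(angle, 0, 180);
  fingerServo.write(angle);                    // bend of the sensor drives the digit
  Serial.println(raw);
  delay(20);
}
```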

Overall, the workshop was a great success, with participants trying out and refining designs, and talking and thinking critically about the future of prosthetics, human augmentation, and how these are changing how we connect and relate to the world around us.

Click here to see some of the workshop slides

RPA – Developing a Framework for Tangible and Embodied Interactions

Rock, Paper, Awesome!

Rock, Paper, Awesome (RPA) was Encore Lab’s initial foray into developing the means for tangible and embodied interactions that would connect to our S3 technology framework. The goal for RPA was simple; individual labs could create their own unique tangible or embodied interactions through which they played rock, paper, scissors with other labs that were physically distributed around the world.

The Theory

We chose rock, paper, and scissors as our test-bed because it provided us with not only a well-defined set of semantics (i.e., win, lose, draw, player ready, player gone), but also a very loose coupling in how we enacted those semantics. For instance, how a player chose “rock” in one lab could be entirely different from how a player chose it in another (e.g., standing in a particular spot in the room versus pushing a button). This allowed us to think deeply about what it meant to convey the same message through various tangible and embodied interactions, and to begin building an understanding of how these different interactions affected the meaning making of the participants. In essence, we built a “reverse Tower of Babel” where multiple languages could all be interpreted through S3, allowing recipients at both ends to effectively communicate through their own designs.
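To make the idea concrete, here is a rough sketch (not the actual RPA code) of how two installations might enact the same event vocabulary through completely different physical “languages” – the pins, thresholds, and sensor choices are invented for illustration:

```cpp
// Illustration only: the same shared semantics reached through two different physical designs.
enum GameEvent { PLAYER_READY, CHOSE_ROCK, CHOSE_PAPER, CHOSE_SCISSORS, PLAYER_GONE };

const int ROCK_BTN = 2, PAPER_BTN = 3, SCISSORS_BTN = 4;   // station A: push buttons
const int TRIG = 7, ECHO = 8;                               // station B: ultrasonic range finder

void setup() {
  pinMode(ROCK_BTN, INPUT); pinMode(PAPER_BTN, INPUT); pinMode(SCISSORS_BTN, INPUT);
  pinMode(TRIG, OUTPUT); pinMode(ECHO, INPUT);
  Serial.begin(9600);
}

// Station A: choose by pressing a button.
GameEvent readButtons() {
  if (digitalRead(ROCK_BTN) == HIGH)  return CHOSE_ROCK;
  if (digitalRead(PAPER_BTN) == HIGH) return CHOSE_PAPER;
  return CHOSE_SCISSORS;
}

// Station B: choose by where you stand in the room.
GameEvent readStandingZone() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  long cm = pulseIn(ECHO, HIGH) / 58;          // rough centimetre conversion
  if (cm < 60)  return CHOSE_ROCK;             // near zone
  if (cm < 150) return CHOSE_PAPER;            // middle zone
  return CHOSE_SCISSORS;                       // far zone
}

void loop() {
  Serial.println((int)readButtons());          // either mapping yields the same shared event
  Serial.println((int)readStandingZone());
  delay(500);
}
```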


In this way, RPA is more than just a game of rock, paper, scissors – it is an avenue for us to begin investigating novel ways for users to interact with the world, and for connecting these investigations within a broader knowledge community. We aim to not only connect these communities, but also to add a layer of user-contributed design to their interactions, where community members engage in creative fabrication and exchange of tangible, interactive media that reflect their ideas, workflow or presence, bridging the distances and connecting the community.

Three critical questions guided our development of RPA and this component of S3 in general:

  • How can we bring distributed communities together through tangible and embodied interactions?
  • What are the possible roles for tangible and physical computing, and ambient or interactive media that are deeply connected to the semantics, workflow, physical presence, ideas, activities, and interests of the distributed communities?
  • How does the temporality of the interactions (synchronous versus asynchronous) determine the selection of appropriate kinds of interactions and representations?

We are currently sending out kits, first versions of the code, and design documents to labs at the Learning Technologies Group at the University of Chicago and Intermedia at the University of Oslo. We are excited to see how they will develop and contribute new interactive designs that reflect their own representations of space and meaning within the game.

The Technology

The physical interactions and ambient feedback are handled by an Arduino microcontroller. The Arduino allows users to develop a wide array of inputs (e.g., proximity, light, and sound sensors, buttons, and levers) and outputs (e.g., sound, light, movement). Using the S3 framework, RPA facilitates different game “events” (e.g., joining the game, choosing Rock) by sending messages over an XMPP chatroom (conference). We originally attempted to implement these messages over the XMPP server using only the Arduino – however, given the relatively limited amount of RAM on the Arduino board (2KB), this turned out to be overly restrictive and we started looking at other solutions.

As a solution to this issue, we made a simplified set of event messages (i.e., single text characters) that were sent over the Arduino’s serial port to a connected computer. For testing purposes we used a laptop. However, in permanent installations, we envision RPA having a more compact and flexible setup. In order to achieve this, we connected the Arduino board to a Raspberry Pi. The benefit of the Raspberry Pi is that it is small and cheap, allowing us to dedicate a Pi to each game installation and to keep the “brains” of RPA as unobtrusive as possible.
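As an illustration of the sending side of that serial protocol, a minimal Arduino sketch might look something like the following (the particular characters and pins are assumptions for illustration; the real mappings are in the code on GitHub):

```cpp
// Minimal sketch of the single-character serial protocol (send side).
// Assumes buttons wired to read HIGH when pressed.
const int READY_BTN = 2;      // "I'm ready to play" button
const int ROCK_BTN  = 3;      // "rock" button
bool readyWasDown = false, rockWasDown = false;

void setup() {
  pinMode(READY_BTN, INPUT);
  pinMode(ROCK_BTN, INPUT);
  Serial.begin(9600);         // serial link to the laptop / Raspberry Pi bridge
}

void loop() {
  bool readyDown = digitalRead(READY_BTN) == HIGH;
  bool rockDown  = digitalRead(ROCK_BTN)  == HIGH;

  if (readyDown && !readyWasDown) Serial.print('j');   // join the game
  if (rockDown  && !rockWasDown)  Serial.print('r');   // choose rock

  readyWasDown = readyDown;
  rockWasDown  = rockDown;
  delay(10);                  // crude debounce
}
```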

In order to connect the various RPA installations we use node.js as an intermediary between the XMPP chatroom and the Raspberry Pi. Messages that are posted to the XMPP chatroom are picked up by the node.js server and sent over the serial port to the Arduino, which then executes the user-designed action, such as turning on a light or playing a chime. Conversely, any event triggered on the Arduino (e.g., a button being pressed) is sent over the serial port to node.js and translated into an XMPP message.

Sample Arduino code for RPA and the node.js setup code can all be freely downloaded, tinkered with, and customized from GitHub.

The Run

We set up two “stations” at OISE, one on the third floor and one on the eleventh floor. Players challenged each other to a game of rock, paper, scissors (see the video below).

Each location had different tangible, audible, and visual inputs and outputs providing players unique multi-modal experiences that conveyed the same message. At the third floor location, a “servo motor” swung a dial to let the player know a challenger was waiting to play. At the eleventh floor location, an LED flashed to convey the challenge. We have tested other designs (not shown here) that used proximity sensors to detect where players were within a room, using their location to trigger an event (such as choosing rock). In another instance, a light sensor conveyed one player’s availability to other players (in remote locations) when the lights in the original player’s room were on.
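As a rough sketch of the light-sensor availability design (the threshold and event characters here are assumptions, and the deployed code differed per installation):

```cpp
// Sketch of the light-sensor "availability" design: when the room lights come on,
// tell remote players this station is available.
const int LDR_PIN = A0;         // photoresistor in a voltage divider
const int ON_THRESHOLD = 600;   // "room lights are on" -- needs per-room calibration
bool wasAvailable = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool available = analogRead(LDR_PIN) > ON_THRESHOLD;
  if (available != wasAvailable) {
    Serial.print(available ? 'a' : 'g');   // a = player available, g = player gone
    wasAvailable = available;
  }
  delay(250);
}
```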

Going Live! RPA at TEI 2013

We submitted RPA to TEI 2013’s student design challenge. The conference was held in Barcelona, Spain, and provided an ideal opportunity for us to try out RPA (and S3) in a live setting with users who had no experience with it. We had stations running at the conference site and at our lab in Toronto, allowing us to observe a wide range of interactions and gain feedback from participants. We also added a new layer to RPA which connected a real-time visualization of win/lose/draw results to the game – although this visualization duplicated some of the functionality of the tangible RPA elements, it did represent a significant step in merging the tangible elements of S3 with a key element of the existing architecture.

 

The importance of narrative in design for children

Last week, as part of the Interaction Design and Children Conference (IDC) in NYC, Rebecca Cober and I had a chance to take part in a workshop at the New York Hall of Science (NYSCI) entitled: Narrative Contexts as a Design Element. The workshop was organized by Peggy Monahan and Dorothy Bennett, with special guests Jessie Hopkins and David Glauber from Sesame Street Workshop (JOY!). The goal of the workshop was to look at some of the existing exhibits at NYSCI and try to improve them by adding a rich layer of narrative through the addition of simple low-fi prototypes (e.g., using cardboard, felt, string, or just pen and paper). What was really amazing about the experience was that our designs were for live exhibits. Within minutes we got to see how the narrative elements we added, using felt and cardboard, affected the experience of children and their parents in the museum!

We started by observing children interacting with exhibits in the light and optics area of NYSCI. The light and optics area was chosen by the workshop organizers because most of the displays had little or no narrative elements, making them like “blank canvases” for the workshop participants.  Based on what we had just learned concerning narrative elements (see box below), we discussed what could be altered in each display to make it i) more engaging and ii) more likely that children (and their parents) would think and talk about the underlying scientific principles of each display. Once we had a good sense of which display we wanted to tackle we broke into groups and sat down to the hard work of actually constructing the narrative and building the prototypes.

In thinking about the narrative, Jessie and David had us focus on four “narrative ingredients”:

1. The Mood

  • It’s achieved through the visual contexts of the interactions, and through sound, atmosphere, music and other sensory cues.
  • It’s a quick and effective means for connecting with the participants on an emotional level

2. The Protagonist

  • Tell the user who they are and why it matters that they are there. Why is their participation in the story critical for its outcome?

3. The Relationships

  • They tell us how and why we matter to others, and should involve at least two people (these people can be both real and/or made up people… or animals, robots etc.)

4. Humor

  • Jokes are always good (especially with children), and especially slapstick over clever jokes (it’s about the audience, not you!)
    • A critical question that should always be asked: where are the bananas and underpants?

The project we selected to work on was a large (approx. 2 meter by 2 meter) “light wheel” where a single LED bar flashed, sped up or slowed its rotation, and brightened or dimmed depending on the position of three dials on a control board: Frequency, RPM, and Voltage. During the initial observation we noticed that children (and parents alike) tended to walk up to the exhibit, turn the knobs (apparently) randomly for a few seconds, and walk away. There was little to no discussion among participants, and their “experimentation” lacked any sort of methodology that would indicate the children were thinking deeply about the concepts.

Exhibit’s dials without scaffolds

While observing the exhibit, we looked at the patterns the LED bar made while spinning and asked ourselves, “If this could be something other than a boring box, what could it be?” We wanted something that would not only get the students to engage with the exhibit, but to do so critically, requiring them to think about how each of the dials affected the shape made by the flashing LED bar. After several brief discussions, Rebecca and I realized that at certain points during the bar’s rotation the lights looked exactly like a cat’s whiskers! We thought that by making the exhibit a challenge (making the cat’s whiskers align with its face), we could get the kids to focus more critically on how each dial affected the pattern and helped them achieve the goal.

Full shot of Friskers in action

Once we had our concept we set about making the design a reality. We had a little over an hour, so using felt, cardboard, string, and glue we began to transform the black screen into a black cat! Thinking about the need to build a narrative, we wanted to give participants a reason to play with our exhibit, so we gave the cat the name “Friskers” and built a story around the participants needing to help Friskers find his whiskers (by adjusting the knobs), and Rebecca drew up some large signs to explain this.

Rebecca’s Narrative for Friskers

With the exhibit up and running we sat down to observe children and their parents with the exhibit. We noticed that children were immediately drawn to the exhibit, running over to see what it was, excited by the large cat. However, they didn’t spend a lot of time really trying to “solve” the exhibit; they turned the knobs but didn’t seem to focus on what each one meant (and neither did the parents). We wanted to figure out why this was, and we realized that the notions of “Frequency, RPM, and Voltage” were simply too abstract for the children (and even many of the parents) to draw connections to in a short period of time. In response we created three scaffolding signs to help children make connections between each term and what it did in terms of the lights (image below).

Once we did that the whole experience changed! Children started spending far more time (often minutes) on the exhibit, and actually discussing it with their friends (and parents)! The scaffolds helped the children better experiment and inquire about the science going on in the exhibit, and also helped parents connect each of the terms to what they saw, so they could talk with their kids about it in more depth.

Friskers dials with instructions

The end result was a massive shift in the engagement and discussion around the exhibit, to the point where the NYSCI coordinators wanted to keep it up after our trial! We “demoed” it to the rest of the IDC participants later in the week and one attendee spent over 40 minutes playing with Friskers and talking to others about it. Overall it really drove home to us how critical it is to think about the narrative of the educational interventions that you design, the role of carefully designed scaffolds, and the need to iterate based on watching your designs “in the wild”.

PLACE.Web – Orchestrating Smart Classroom Knowledge Communities

PLACE.web (Physics Learning Across Contexts and Environments) is a 13-week high school physics curriculum in which students capture examples of physics in the world around them (through pictures, videos, or open narratives), which they then explain, tag, and upload to a shared social space. Within this knowledge community, peers are free to respond, debate, and vote on the ideas presented within the examples, working toward consensus about the phenomena being shown and empowering students to drive their own learning and sense making. We also developed a visualization of student work that represented student ideas as a complex interconnected web of social and semantic relations, allowing students to filter the information to match their own interests and learning needs, and a teacher portal for authoring tasks (such as multiple choice homework) and reviewing and assessing individual student work. Driven by the KCI model, the goal of PLACE.web was to create an environment in which the class’s collective knowledge base was ubiquitously accessible – allowing students to engage with the ideas of their peers spontaneously and across multiple contexts (at home, on the street, in class, in a smart classroom). The PLACE.web curriculum culminated in a one-week smart classroom activity (described in depth below).

To leverage this student contributed content towards productive opportunities for learning, we developed several micro-scripts that focused student interactions, and facilitated collaborative knowledge construction:

  • Develop-Connect-Explain: A student captures an example of physics in the real world (Develop), tags the example with principles (Connect), and provides a rationale for why the tag applies to the example (Explain).
  • Read-Vote-Connect-Critique: A student reads a peer’s published artifact (Read), votes on the tags (Vote), adds any new tags they feel apply (Connect), and adds their own critique to the collective knowledge artifact (Critique).
  • Revisit-Revise-Vote: A student revisits one of their earlier contributions (Revisit), revises their own thinking and adds their new understanding to the knowledge base (Revise), and votes on ideas and principles that helped in generating their new understanding (Vote).
  • Group-Collective-Negotiate-Develop-Explain: Students are grouped based on their “principle expertise” during the year (Group), browse the visualization to find artifacts in the knowledge base that match their expertise (Collective), negotiate which examples will inform their design of a challenge problem (Negotiate), create the problem (Develop), and finally explain how their principles are reflected in the problem (Explain).

Over the twelve weeks, students created 179 examples, contributed 635 discussion notes, attached 1066 tags, and cast 2641 votes.

Culminating Smart Classroom Activity

The curriculum culminated in a one-week activity where students solved ill-structured physics problems based on excerpts from Hollywood films. The script for this activity consisted of three phases: (1) at home solving and tagging of physics problems; (2) in-class sorting and consensus; and (3) smart classroom activity.

PLACE.web Culminating Script (click to enlarge)

In the smart classroom, students were heavily scripted and scaffolded to solve a series of ill-structured physics problems using Hollywood movie clips as the domain for their investigations (e.g., could Iron Man survive the fall shown in a clip?). Four videos were presented to the students, with the room physically mapped into quadrants (one for each video). The activity was broken up into four different steps: (1) Principle Tagging; (2) Principle Negotiation and Problem Assignment; (3) Equation Assignment, and Assumption and Variable Development; and (4) Solving and Recording.

At the beginning of Step 1, each student was given his or her own Android tablet, which displayed the same subset of principles assigned from the homework activity. Students freely chose a video location in the room and watched a Hollywood video clip, “flinging” (physically “swiping” from the tablet) any of their assigned principles “onto” the video wall that they felt were illustrated or embodied in that clip. They all did this four times, thus adding their tags to all four videos.

In Step 2, students were assigned to one video (a role for the S3 agents, using their tagging activity as a basis for sorting), and tasked with coming to a consensus (i.e., a “consensus script”) concerning all the tags that had been flung onto their video in Step 1 – using the large format displays. Each group was then given a set of problems, drawn from the pool of problems that were tagged during the in-class activity (selected by an S3 agent, according to the tags that group had settled on – i.e., this was only “knowable” to the agents in real-time). The group’s task was to select from that set of problems any that might “help in solving the video clip problem.”

In Step 3, students were again sorted and tasked with collaboratively selecting equations (connected to the problems chosen in Step 2) for approaching and solving the problem, and developing a set of assumptions and variables to “fill in the gaps”. Finally, in Step 4, students actually “solved” the problem, using the scaffolds developed by the groups who had worked on their video in the preceding steps, and recording their answer using one of the tablets’ video cameras; the recording was then uploaded.

Orchestrating Real-Time Enactment

Several key features (as part of the larger S3 framework) were developed in order to support the orchestration of the live smart classroom activity – below I describe each and their implementation within the PLACE.web culminating activity:

Ambient Feedback: A large Smartboard screen at the front of the room (i.e., not one of the four Hollywood video stations) provided a persistent, passive representation of the state of individual, small group, and whole class progression through each step of the smart classroom activity. This display showed and dynamically updated all student location assignments within the room, and tracked the timing of each activity using three color codes (a large color band around the whole board that reflected how much time was remaining): “green” (plenty of time remaining), “yellow” (try to finish up soon), and “red” (you should be finished now).
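The color-band logic itself is straightforward; as a sketch (the thresholds here are assumptions, not the values used in the actual display):

```cpp
// Illustration of the green/yellow/red timing band.
#include <string>

std::string timeBandColor(double elapsedMin, double allottedMin) {
    double remaining = allottedMin - elapsedMin;
    if (remaining > 0.25 * allottedMin) return "green";   // plenty of time remaining
    if (remaining > 0)                  return "yellow";  // try to finish up soon
    return "red";                                         // you should be finished now
}
```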

Scaffolded Inquiry Tools and Materials: In order for students to effectively engage in the activity and with peers, there is a need for specific scaffolding tools and interfaces through which students interact, build consensus, and generate ideas as a knowledge community (i.e., personal tablets, interactive whiteboards). Two main tools were provided to students, depending on their place in the script: individual tablets connected to their S3 user accounts; and four large-format interactive displays that situated the context (i.e., the Hollywood video), provided location-specific aggregates of student work, and served as the primary interface for collaborative negotiation.

Real-Time Data Mining and Intelligent Agency: To orchestrate the complex flow of materials and students within the room, a set of intelligent agents was developed. The agents, programmed as active software routines, responded to emergent patterns in the data, making orchestration decisions “on-the-fly” and providing teachers and students with timely information. Three agents in particular were developed: (1) the Sorting Agent sorted students into groups and assigned room locations, based on emergent patterns during enactment; (2) the Consensus Agent monitored groups, requiring consensus to be achieved among members before progression to the next step; (3) the Bucket Agent coordinated the distribution of materials to ensure all members of a group received an equal but unique set of materials (i.e., problems and equations in Steps 2 & 3).
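To make the Sorting Agent’s job concrete, here is a rough sketch of the kind of grouping decision it made – this is an illustration only: the real agents ran inside the S3 framework on live data, and the balancing rule below is an assumption:

```cpp
// Sketch of a sorting decision: assign each student to the video they tagged most in
// Step 1, while keeping group sizes balanced.
#include <algorithm>
#include <array>
#include <map>
#include <string>
#include <vector>

// tagCounts[student][video] = how many tags that student flung onto that video in Step 1
using TagCounts = std::map<std::string, std::array<int, 4>>;

std::map<std::string, int> sortIntoVideoGroups(const TagCounts& tagCounts, int maxPerGroup) {
    std::map<std::string, int> assignment;       // student -> video index 0..3
    std::array<int, 4> groupSize = {0, 0, 0, 0};

    for (const auto& [student, counts] : tagCounts) {
        // Rank the four videos by how often this student tagged them.
        std::vector<int> order = {0, 1, 2, 3};
        std::sort(order.begin(), order.end(),
                  [&](int a, int b) { return counts[a] > counts[b]; });

        // Take the most-tagged video that still has room, to keep groups balanced.
        for (int video : order) {
            if (groupSize[video] < maxPerGroup) {
                assignment[student] = video;
                ++groupSize[video];
                break;
            }
        }
    }
    return assignment;
}
```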

Locational and Physical Dependencies: Specific inquiry objects and materials could be mapped to the physical space itself (i.e., where different locations could have context specific materials, simulations, or interactions), allowing for unique but interconnected interactions within the smart classroom. Students “logged into” one of four spaces in our room (one for each video), and their actions, such as “flinging” a tag, appeared on that location’s collaborative display. Students’ location within the room also influenced the materials that were sent to their tablet. In Step 2, students were provided with physics problems based on the tags that had been assigned to their video wall, and in Step 3 they were provided with equations based on their consensus about problems in Step 2.

Teacher Orchestration: The teacher plays a vital role in the enactment of such a complex curriculum. Thus, it is critical to provide him or her with timely information and tools with which to understand the state of the class and properly control the progression of the script. We provided the teacher with an “orchestration tablet” that updated him in real time on individual groups’ progress within each activity. Using his tablet, the teacher also controlled when students were re-sorted – i.e., when the script moved on to the next step. During Step 3, the teacher was alerted on his tablet whenever the students in a group had submitted their work (variables and assumptions).

Non-Standard Bodies


This project was part of a four-month exhibition at the Ontario Science Centre’s !dea Gallery, as part of Mirror, Mirror…Reflections on Body Image.

We are constantly assailed with body images and standards into which we, as individuals and as a society, must fit – are you a small, a medium, a large? How short is your dress? What does your neckline say about you? These questions are at the heart of Non-Standard Bodies. The project, initially conceived as a way of exploring the influence of external factors on the comfort and presentation of the individual, has come to tell a story about the impact of standards on our daily lives and the impact of remote decisions on our perception and presentation of self.

Except in the case of bespoke clothing, the garments we wear are traditionally based on standardized sizes, sometimes reflecting national or international decisions, sometimes reflecting the decisions of individual clothing designers and manufacturers. These garments cannot, by necessity, be a perfect fit for each wearer. They are, instead, good enough, aiming to reflect a reasonably popular or common set of measurements.

This idea of garment sizes being imposed from the outside, by invisible hands, is one consistent with the principles of standardization. Standards setting has, historically, been the province of experts: educated individuals situated within official organizations. These twin ideas of outside influences and “one size fits none” standards are the themes running through Non-Standard Bodies. The project, from an abstract viewpoint, is a physical manifestation of the invisible hands of standardization making decisions about the appearance, presentation, and body image of its wearer. Practically speaking, Non-Standard Bodies is an adjustable dress. It is, in its ground state, large and voluminous, beige cotton cloth, fashioned after a monk’s habit and worn over a structural plastic frame.

The dress has both a wearer and a user. The two functions, unlike with normal clothing, are distinct. The user becomes the subject, the wearer the object. This metaphorical representation of standards setting (with the user as subject at a distance) takes place through the manipulation of the fit of the dress. The fit of the dress is manipulated through the adjustment of a series of controls arrayed along its spine (and therefore inaccessible to the wearer). These controls provide input to an Arduino microcontroller, which manipulates a number of motors. Those motors wind up spools of cord, which lift the hem of the dress, shorten its sleeves, and adjust the fit of its waist. Thus, through activities unseen by the wearer (because, in fact, the user is behind her), her appearance and presentation of self are changed. This is the metaphorical representation that runs through the heart of the work and presents, in an evocative and whimsical way, the issue of the politics of standardized clothing sizes.
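A minimal sketch of this control loop might look like the following (the pin assignments, and the use of winch-style servos for the spools, are assumptions for illustration rather than the dress’s actual wiring):

```cpp
// Illustrative only: one spine-mounted knob per adjustment, each driving a winch
// servo that takes up a spool of cord.
#include <Servo.h>

const int HEM_KNOB = A0, SLEEVE_KNOB = A1, WAIST_KNOB = A2;   // user-facing controls
Servo hemWinch, sleeveWinch, waistWinch;                      // spools that wind the cord

void setup() {
  hemWinch.attach(9);
  sleeveWinch.attach(10);
  waistWinch.attach(11);
}

void loop() {
  // Each knob position maps to how far its spool winds the cord,
  // lifting the hem, shortening the sleeves, or cinching the waist.
  hemWinch.write(map(analogRead(HEM_KNOB), 0, 1023, 0, 180));
  sleeveWinch.write(map(analogRead(SLEEVE_KNOB), 0, 1023, 0, 180));
  waistWinch.write(map(analogRead(WAIST_KNOB), 0, 1023, 0, 180));
  delay(50);
}
```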

Radical Design Workshop

This workshop was designed as part of the Knowledge Media Design Institute’s (KMDI) Radical Design Series entitled: Malleable Designs – Using Play-Doh to Design the Future.

The goal of the workshop was to engage the participants in thinking about how the future of communicative technologies will provide new ways for people to connect, share, and grow in their communities. Participants used Play-Doh as a medium to articulate and develop their ideas both tactilely and visually. Play-Doh was especially effective for this as the medium itself is so malleable and flexible that the participants didn’t have to conform their thinking to rigid structural limitations.

The workshop combined open, guided discussion and hands-on investigations with the Play-Doh. As a group we looked at 4 main themes: 1) What does it mean to communicate? 2) How do we define our communication networks and communities? 3) What is a disruptive technology? And how are current disruptive technologies changing the way we connect and relate to each other? 4) What do we envision the future landscape of communicative and community technologies to be?

Rock, Paper, Awesome goes live!

Yesterday we successfully completed the first test run of Rock, Paper, Awesome! (RPA), extending the Sail Smart Space framework into the realm of spatial, tangible, and distributed interactions.

The Run

We set up two “stations” at OISE, one on the 3rd floor and one on the 11th floor. Players challenged each other to a game of rock, paper, scissors (see the video below).

Each location had different affordances for tangible, audible, and visual awareness to give the players sensorially unique experiences that conveyed the same message. At the third floor location, a “servo motor” swung a dial to let the player know a challenger was waiting to play. At the eleventh floor location, an LED flashed to convey the challenge. We have tested other designs (not shown here) that used proximity sensors to detect where players were within a room, using their location to trigger an event (such as choosing rock). In another instance, a light sensor conveyed one player’s availability to other players (in remote locations) when the lights in the original player’s room were on.

The Theory

To us, RPA is more than just a game of rock, paper, scissors; it is an avenue for us to begin investigating novel ways for users to interact with the world, and for connecting these investigations within a broader knowledge community. We aim to not only connect these communities, but also to add a layer of user-contributed design to their interactions, where community members engage in creative fabrication and exchange of tangible, interactive media that reflect their ideas, workflow or presence, bridging the distances and connecting the community.

Moving forward, there are some critical questions that are guiding our research into these new spaces:

  • How can we bring such communities more closely together?
  • What are the possible roles for tangible and physical computing, and ambient or interactive media that are deeply connected to the semantics, workflow, physical presence, ideas, activities, and interests of the distributed communities?

We are currently sending out kits, first versions of the code, and design docs to labs at the Learning Technologies Group at the University of Chicago and Intermedia at the University of Oslo. We are excited to see how they develop and contribute new interactive designs that reflect their own representations of space and meaning within the game.

The Technology

The physical interactions and ambient feedback are handled by an Arduino microcontroller. The Arduino allows users to develop a wide array of inputs (e.g., proximity, light, and sound sensors, buttons, and levers) and outputs (e.g., sound, light, movement).

Using the S3 framework, RPA facilitates the different game “events” (e.g., joining the game, choosing Rock) by sending messages over an XMPP chatroom (conference). We originally attempted to implement these messages over the XMPP server using only the Arduino – however, given the relatively limited amount of RAM on the Arduino board (2KB), this turned out to be overly restrictive and we started looking at other solutions.

As a solution to this issue, we ended up making a simplified set of event messages (single text characters) that were sent over the Arduino’s serial port to a connected computer. For testing purposes we used a laptop; however, in permanent installations, we envision RPA having a more compact and flexible setup. In order to achieve this, we connected the Arduino board to a Raspberry Pi. The benefit of the Raspberry Pi is that it is small and cheap, allowing us to dedicate a Pi to each game installation and to keep the “brains” of RPA as unobtrusive as possible.

In order to connect the various RPA installations we use node.js as an intermediary between the XMPP chatroom and the Raspberry Pi. Messages that are posted to the XMPP chatroom are picked up by the node.js server and sent over the serial port to the Arduino, which then executes the user-designed action, such as turning on a light or playing a chime. Conversely, any event triggered on the Arduino (e.g., a button being pressed) is sent over the serial port to node.js and translated into an XMPP message.
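As a sketch of the receiving half of that link, an Arduino might dispatch on incoming characters like this (the specific characters and outputs are assumptions; the real mapping lives in the GitHub code and varied by installation):

```cpp
// Receive side of the serial protocol: act on event characters sent by the node.js bridge.
#include <Servo.h>

const int CHALLENGE_LED = 13;
const int CHIME_PIN = 8;        // piezo buzzer
Servo dialServo;

void setup() {
  pinMode(CHALLENGE_LED, OUTPUT);
  dialServo.attach(9);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char event = Serial.read();
    switch (event) {
      case 'c':                              // a challenger is waiting
        digitalWrite(CHALLENGE_LED, HIGH);
        dialServo.write(90);                 // swing the dial to "challenge!"
        break;
      case 'w':                              // you won the round
        tone(CHIME_PIN, 880, 300);           // play a short chime
        break;
      case 'x':                              // game over / reset
        digitalWrite(CHALLENGE_LED, LOW);
        dialServo.write(0);
        break;
    }
  }
}
```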

Sample Arduino code for RPA and the node.js setup code can all be freely downloaded, tinkered with, and customized from GitHub.

Cross-posted from EncoreLab.org

Multiple Screens For Multiple Uses and the Growth of HCI for Education

Multiple Screens by Siddartha Thota

This post is about an article I recently read (really, you can skip directly to the Google slideshow if you want) that makes me feel good about a lot of the things we’ve been doing over the past four years in understanding what it means to connect students in a smart classroom across a wide variety of devices, displays, and even locations and contexts.

Our earlier work showed us that, when collaborating, students tended to do so more effectively using larger-format displays (huddling around small screens tended to push some students to the fringe and prevented them from taking part in the discourse). We also found that large displays were great for the teacher to see the work of the class at a glance (versus on small screens) and to discuss it with larger groups (http://goo.gl/Mnvd5).

I agree that smaller devices, increasingly tablets given their portability (can you believe that the iPad only debuted in 2010!), are better suited for individual contributions – or as a “starting-off point”.

This highlights what was a big unofficial theme of this year’s International Conference of the Learning Sciences (http://www.isls.org/icls2012/): the emergence of HCI for learning – the idea that we need to start seriously thinking about and researching how these new technologies (and their respective affordances) can best help students achieve new forms of learning and collaboration that were previously unachievable.

It’s an exciting time to be an educational researcher – so long as we continue to ask the tough questions about how these technologies (and how students interact with them) specifically aid in learning, and not just implement them for technology’s sake, we’ll be just fine 😉

*Note: this is a cross-published article with Google+, which has to happen this way until they get their public API in order.

Envisioning the future through non-traditional design

Last month I had a chance to run a workshop I designed as part of the Radical Design Series entitled: Malleable Designs – Using Play-Doh to Design the Future, in conjunction with the Knowledge Media Design Institute (KMDI) and the University of Toronto.

The goal of the workshop was to engage the participants in thinking about how the future of communicative technologies will provide new ways for people to connect, share, and grow in their communities. Participants used Play-Doh as a medium to articulate and develop their ideas both tactilely and visually. Play-Doh was especially effective for this as the medium itself is so malleable and flexible that the participants didn’t have to conform their thinking to rigid structural limitations.

The workshop combined open, guided discussion and hands-on investigations with the Play-Doh. As a group we looked at 4 main themes: 1) What does it mean to communicate? 2) How do we define our communication networks and communities? 3) What is a disruptive technology? And how are current disruptive technologies changing the way we connect and relate to each other? 4) What do we envision the future landscape of communicative and community technologies to be?
