RPA – Developing a Framework for Tangible and Embodied Interactions

Rock, Paper, Awesome!

Rock, Paper, Awesome (RPA) was Encore Lab’s initial foray into developing the means for tangible and embodied interactions that would connect to our S3 technology framework. The goal for RPA was simple: individual labs could create their own unique tangible or embodied interactions through which they played rock, paper, scissors with other labs that were physically distributed around the world.

The Theory

We chose rock, paper, and scissors as our test-bed because it provided us with not only a well-defined set of semantics (i.e., win, lose, draw, player ready, player gone), but also a very loose coupling in how we enacted those semantics. For instance, how a player chose “rock” in one lab could be entirely different from how a player chose it in another (e.g., standing in a particular spot in the room, versus pushing a button). This allowed us to think deeply about what it meant to convey the same message through various tangible and embodied interactions, and to begin building an understanding of how these different interactions affected the meaning making of the participants. In essence we built a “reverse Tower of Babel,” where multiple languages could all be interpreted through S3, allowing recipients at both ends to effectively communicate through their own designs.

In this way, RPA is more than just a game of rock, paper, scissors – it is an avenue for us to begin investigating novel ways for users to interact with the world, and for connecting these investigations within a broader knowledge community. We aim to not only connect these communities, but also to add a layer of user-contributed design to their interactions, where community members engage in creative fabrication and exchange of tangible, interactive media that reflect their ideas, workflow or presence, bridging the distances and connecting the community.

Three critical questions guided our development of RPA and this component of S3 in general:

  • How can we bring distributed communities together through tangible and embodied interactions?
  • What are the possible roles for tangible and physical computing, and ambient or interactive media that are deeply connected to the semantics, workflow, physical presence, ideas, activities, and interests of the distributed communities?
  • How does the temporality of the interactions (synchronous versus asynchronous) determine the selection of appropriate kinds of interactions and representations?

We are currently sending out kits, first versions of the code, and design documents to labs at the Learning Technologies Group at the University of Chicago, and Intermedia at the University of Oslo. We are excited to see how they will develop and contribute new interactive designs that reflect their own conceptions of space and meaning within the game.

The Technology

The physical interactions and ambient feedback are handled by an Arduino microcontroller. The Arduino allows users to develop a wide array of inputs (e.g., proximity, light, and sound sensors, buttons and levers) and outputs (e.g., sound, light, movement). Using the S3 framework, RPA facilitates different game “events” (e.g., joining the game, choosing Rock) by sending messages over an XMPP chatroom (conference). We originally attempted to implement these messages over the XMPP server using only the Arduino – however, given the relatively limited amount of RAM on the Arduino board (2 KB), this turned out to be overly restrictive and we started looking at other solutions.

As a solution to this issue, we made a simplified set of event messages (i.e., single text characters) that were sent over the Arduino’s serial port to a connected computer. For testing purposes we used a laptop. However, in permanent installations, we envision RPA having a more compact and flexible setup. In order to achieve this, we connected the Arduino board to a Raspberry Pi. The benefit of the Raspberry Pi is that it is small and cheap, allowing us to dedicate a Pi to each game installation, and to have the “brains” of RPA be as unobtrusive as possible.
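To give a concrete (though hypothetical) sense of what these messages look like, the whole protocol boils down to a small lookup table mapping single characters to game events. The character assignments below are illustrative only; the real ones live in the Arduino sketch and node.js code on GitHub.

```typescript
// Hypothetical single-character event codes for RPA; the actual assignments
// are defined in the code on GitHub. These values are illustrative only.
const EVENT_CODES: Record<string, string> = {
  J: "player-joined",
  R: "chose-rock",
  P: "chose-paper",
  S: "chose-scissors",
  W: "win",
  L: "lose",
  D: "draw",
  G: "player-gone",
};

// Translate a character read from the Arduino's serial port into a named game event.
function decodeSerialEvent(char: string): string | undefined {
  return EVENT_CODES[char];
}
```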

In order to connect the various RPA installations we use node.js as an intermediary between the XMPP chatroom and the Raspberry Pi. Messages that are posted to the XMPP chatroom are picked up by the node.js server and sent over the serial port to the Arduino, which then executes the user-designed action, such as turning on a light or playing a chime. Conversely, any event triggered on the Arduino (e.g., a button press) is sent over the serial port to node.js and translated into an XMPP message.
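A rough sketch of that relay logic, reusing the hypothetical event table above, might look like the following. The serial and XMPP connections are abstracted behind two small interfaces rather than tied to specific libraries; the actual setup code on GitHub handles that wiring.

```typescript
// Sketch of the node.js relay between the Arduino (serial) and the XMPP chatroom.
// SerialLink and Chatroom are hypothetical stand-ins for whatever serial-port
// and XMPP client libraries the real RPA setup code uses.
interface SerialLink {
  onChar(handler: (char: string) => void): void;
  write(char: string): void;
}

interface Chatroom {
  onMessage(handler: (event: string) => void): void;
  send(event: string): void;
}

function startBridge(serial: SerialLink, room: Chatroom): void {
  // Arduino -> XMPP: a single character arrives over serial (e.g., a button
  // press), is decoded into a named game event, and is posted to the chatroom.
  serial.onChar((char) => {
    const event = decodeSerialEvent(char);
    if (event) room.send(event);
  });

  // XMPP -> Arduino: events from remote stations are encoded back into a
  // single character and written to the serial port, where the Arduino runs
  // the user-designed output (light, chime, servo, ...).
  room.onMessage((event) => {
    const char = Object.keys(EVENT_CODES).find((c) => EVENT_CODES[c] === event);
    if (char) serial.write(char);
  });
}
```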

Sample Arduino code for RPA and the node.js setup code can all be freely downloaded, tinkered with, and customized from GitHub.

The Run

We set up two “stations” at OISE, one on the third floor and one on the eleventh floor. Players challenged each other to a game of rock, paper, scissors (see the video below).

Each location had different tangible, audible, and visual inputs and outputs, providing players with unique multi-modal experiences that conveyed the same message. At the third floor location, a “servo motor” swung a dial to let the player know a challenger was waiting to play. At the eleventh floor location, an LED flashed to convey the challenge. We have tested other designs (not shown here) that used proximity sensors to detect where players were within a room, using their location to trigger an event (such as choosing rock). In another instance, a light sensor conveyed one player’s availability to other players (in remote locations) when the lights in the original player’s room were on.

Going Live! RPA at TEI 2013

We submitted RPA to TEI 2013’s student design challenge. The conference was held in Barcelona, Spain, and provided an ideal opportunity for us to try out RPA (and S3) in a live setting with users who had no experience with it. We had stations running at the conference site and at our lab in Toronto, allowing us to observe a wide range of interactions and gain feedback from participants. We also added a new layer to RPA, which connected a real-time visualization of win/lose/draw results to the game – although this visualization duplicated some of the functionality of the tangible RPA elements, it did represent a significant step in merging the tangible elements of S3 with a key element of the existing architecture.

 

PLACE.Web – Orchestrating Smart Classroom Knowledge Communities

PLACE.web (Physics Learning Across Contexts and Environments) is a 13-week high school physics curriculum in which students capture examples of physics in the world around them (through pictures, videos, or open narratives), which they then explain, tag, and upload to a shared social space. Within this knowledge community, peers are free to respond, debate, and vote on the ideas presented within the examples towards gaining consensus about the phenomena being shown, empowering students to drive their own learning and sense making. We also developed a visualization of student work that represented student ideas as a complex interconnected web of social and semantic relations, allowing students to filter the information to match their own interests and learning needs, as well as a teacher portal for authoring tasks (such as multiple choice homework) and reviewing and assessing individual student work. Driven by the KCI Model, the goal of PLACE.web was to create an environment in which the class’ collective knowledge base was ubiquitously accessible – allowing students to engage with the ideas of their peers spontaneously and across multiple contexts (at home, on the street, in class, in a smart classroom). The PLACE.web curriculum culminated in a 1-week smart classroom activity (described in depth below).

To leverage this student contributed content towards productive opportunities for learning, we developed several micro-scripts that focused student interactions, and facilitated collaborative knowledge construction:

  • Develop-Connect-Explain: A student captures an example of physics in the real world (Develop), tags the example with principles (Connect), and provides a rationale for why the tag applies to the example (Explain).
  • Read-Vote-Connect-Critique: A student reads a peer’s published artifact (Read), votes on the tags (Vote), adds any new tags they feel apply (Connect), and adds their own critique to the collective knowledge artifact (Critique).
  • Revisit-Revise-Vote: A student revisits one of their earlier contributions (Revisit), revises their own thinking and adds their new understanding to the knowledge base (Revise), and votes on ideas and principles that helped in generating their new understanding (Vote).
  • Group-Collective-Negotiate-Develop-Explain: Students are grouped based on their “principle expertise” during the year (Group), browse the visualization to find artifacts in the knowledge base that match their expertise (Collective), negotiate which examples will inform their design of a challenge problem (Negotiate), create the problem (Develop), and finally explain how their principles are reflected in the problem (Explain).

Over the twelve weeks, 179 student examples were created, with 635 discussion notes contributed, 1066 tags attached, and 2641 votes cast.

Culminating Smart Classroom Activity

The curriculum culminated in a one-week activity where students solved ill-structured physics problems based on excerpts from Hollywood films. The script for this activity consisted of three phases: (1) at home solving and tagging of physics problems; (2) in-class sorting and consensus; and (3) smart classroom activity.

PLACE.web Culminating Script

In the smart classroom, students were heavily scripted and scaffolded to solve a series of ill-structured physics problems using Hollywood movie clips as the domain for their investigations (e.g., could Iron Man survive the fall shown?). Four videos were presented to the students, with the room physically mapped into quadrants (one for each video). The activity was broken up into four different steps: (1) Principle Tagging; (2) Principle Negotiation and Problem Assignment; (3) Equation Assignment, and Assumption and Variable Development; and (4) Solving and Recording (Figure 3).

At the beginning of Step 1, each student was given his or her own Android tablet, which displayed the subset of principles that they had been assigned during the homework activity. Students freely chose a video location in the room and watched a Hollywood video clip, “flinging” (physically “swiping” from the tablet) any of their assigned principles “onto” the video wall that they felt were illustrated or embodied in that clip. They all did this four times, thus adding their tags to all four videos.

In Step 2, students were assigned to one video (a role for the S3 agents, using their tagging activity as a basis for sorting), and tasked with coming to a consensus (i.e., a “consensus script”) concerning all the tags that had been flung onto their video in Step 1 – using the large format displays. Each group was then given a set of problems, drawn from the pool of problems that were tagged during the in-class activity (selected by an S3 agent, according to the tags that group had settled on – i.e., this was only “knowable” to the agents in real-time). The group’s task was to select from that set of problems any that might “help in solving the video clip problem.”

In Step 3, students were again sorted and tasked with collaboratively selecting equations (connected to the problems chosen in Step 2) for approaching and solving the problem, and with developing a set of assumptions and variables to “fill in the gaps”. Finally, in Step 4, students actually “solved” the problem, using the scaffolds developed by the groups who had worked on their video in the preceding steps, and recorded their answer using one of the tablets’ video cameras, which was then uploaded.

Orchestrating Real-Time Enactment

Several key features (as part of the larger S3 framework) were developed in order to support the orchestration of the live smart classroom activity – below I describe each one and its implementation within the PLACE.web culminating activity:

Ambient Feedback: A large Smartboard screen at the front of the room (i.e., not one of the four Hollywood video stations) provided a persistent, passive representation of the state of individual, small group, and whole class progression through each step of the smart classroom activity. This display showed and dynamically updated all student location assignments within the room, and tracked the timing of each activity using three color codes (a large color band around the whole board that reflected how much time was remaining): “green” (plenty of time remaining), “yellow” (try to finish up soon), and “red” (you should be finished now).
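As a rough illustration, the color band amounts to a simple mapping from remaining time to one of the three codes; the threshold values below are assumptions for the sake of the example, not the ones used in the actual enactment.

```typescript
// Illustrative mapping from activity timing to the ambient color band.
// The cut-offs here are assumed for illustration only.
type BandColor = "green" | "yellow" | "red";

function bandColor(elapsedMinutes: number, plannedMinutes: number): BandColor {
  const remaining = plannedMinutes - elapsedMinutes;
  if (remaining > plannedMinutes * 0.25) return "green"; // plenty of time remaining
  if (remaining > 0) return "yellow";                    // try to finish up soon
  return "red";                                          // you should be finished now
}
```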

Scaffolded Inquiry Tools and Materials: In order for students to effectively engage in the activity and with peers, there is a need for specific scaffolding tools and interfaces through which students interact, build consensus, and generate ideas as a knowledge community (i.e., personal tablets, interactive whiteboards). Two main tools were provided to students, depending on their place in the script: individual tablets connected to their S3 user accounts, and four large format interactive displays that situated the context (i.e., the Hollywood video), provided location-specific aggregates of student work, and served as the primary interface for collaborative negotiation.

Real-Time Data Mining and Intelligent Agency: To orchestrate the complex flow of materials and students within the room, a set of intelligent agents was developed. The agents, programmed as active software routines, responded to emergent patterns in the data, making orchestration decisions “on-the-fly,” and providing teachers and students with timely information. Three agents in particular were developed: (1) the Sorting Agent sorted students into groups and assigned room locations, basing the sorting on emergent patterns during enactment; (2) the Consensus Agent monitored groups, requiring consensus to be achieved among members before progression to the next step; (3) the Bucket Agent coordinated the distribution of materials to ensure all members of a group received an equal but unique set of materials (i.e., problems and equations in Steps 2 & 3).
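To make the Bucket Agent’s behavior a little more concrete, a minimal sketch of its core rule (equal-sized, non-overlapping hand-outs within a group) might look like the following; the names and data shapes are assumptions, since the real agents are server-side S3 routines reacting to live classroom data.

```typescript
// Sketch of the Bucket Agent's core rule: deal a shared pool of materials
// (problems or equations) out to a group so that every member receives an
// equal but unique set. Types and names here are illustrative assumptions.
interface Material {
  id: string;
  title: string;
}

function distributeBucket(pool: Material[], memberIds: string[]): Map<string, Material[]> {
  const assignments = new Map<string, Material[]>(
    memberIds.map((id): [string, Material[]] => [id, []])
  );
  // Round-robin dealing: no two members hold the same item, and the
  // per-member counts differ by at most one.
  pool.forEach((item, index) => {
    const member = memberIds[index % memberIds.length];
    assignments.get(member)!.push(item);
  });
  return assignments;
}
```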

Locational and Physical Dependencies: Specific inquiry objects and materials could be mapped to the physical space itself (i.e., where different locations could have context specific materials, simulations, or interactions), allowing for unique but interconnected interactions within the smart classroom. Students “logged into” one of four spaces in our room (one for each video), and their actions, such as “flinging” a tag, appeared on that location’s collaborative display. Students’ location within the room also influenced the materials that were sent to their tablet. In Step 2, students were provided with physics problems based on the tags that had been assigned to their video wall, and in Step 3 they were provided with equations based on their consensus about problems in Step 2.

Teacher Orchestration: The teacher plays a vital role in the enactment of such a complex curriculum. Thus, it is critical to provide him or her with timely information and tools with which to understand the state of the class and properly control the progression of the script. We provided the teacher with an “orchestration tablet” that updated him in real-time on individual groups’ progress within each activity. Using his tablet, the teacher also controlled when students were re-sorted – i.e., when the script moved on to the next step. During Step 3, the teacher was alerted on his tablet whenever the students in a group had submitted their work (variables and assumptions).

Roadshow

Developed primarily as a tool to engage participants at conference poster sessions, Roadshow provides participants and attendees with a way to engage more collectively with the ideas and discussion generated during the session, and to develop more collaborative and connected ideas for follow-up discussion across the whole session.

As an extension to the Sail Smart Space (S3) Framework, Roadshow enables users to create a pop-up social collaboration space that can be indexed to physical locations, facilitating discussion, idea exchange, and the development of a shared taxonomy (tags) within an emergent knowledge base. All of these interactions are then broadcast on an interactive aggregated screen that shows all of the contributed work within the network and lets users filter items by location, contribution type, and tag.

In essence, Roadshow lets you quickly author and deploy a set of ad-hoc social networks that aggregate information both within their individually defined social spaces and across the spaces in a shared central pool.

When an individual creates an instance of Roadshow they have several options that they can customize to help guide the interactions. The author can define the number of locations that exist within the network and give them each unique names (e.g., “Poster 1” or “Collaborative Tablet Applications Talk”); define the types of contributions that participants can make when contributing (e.g., “Question”, “Comment” or “Critique”); and pre-seed tags that may help focus participants’ thinking (e.g., “Collaboration”, “Learning Goals”, “HCI Considerations”, or “Key Points”).
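Put another way, each Roadshow instance is really just a small bundle of configuration. A hypothetical shape for it (the field names are mine, not necessarily those used in the actual implementation) could be:

```typescript
// Hypothetical shape of a Roadshow instance configuration, mirroring the
// authoring options described above; field names are illustrative only.
interface RoadshowConfig {
  locations: string[];          // named physical spots within the network
  contributionTypes: string[];  // kinds of posts participants can make
  seedTags: string[];           // pre-seeded tags to focus participants' thinking
}

const posterSession: RoadshowConfig = {
  locations: ["Poster 1", "Collaborative Tablet Applications Talk"],
  contributionTypes: ["Question", "Comment", "Critique"],
  seedTags: ["Collaboration", "Learning Goals", "HCI Considerations", "Key Points"],
};
```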

Roadshow was designed to allow for maximum flexibility in regard to the devices that could be used – using responsive web techniques we made it possible to use Roadshow on any mobile device (except Blackberry) or laptop. This meant that users weren’t confined to specific technologies, thus reducing barriers to participation.

Once logged in, users could see the contributions of every other member of the network, filterable by location. Users could then add their own contributions to the collective knowledge base. Each contribution was also tagged by the user – these tags were a combination of the pre-seeded tags described above and tags organically added by users. When a user added their own tag to a post, that tag was propagated to every other tablet in the space in real time, helping participants make new connections between their ideas and spaces (users who joined later would also see all the emergent tags).
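The propagation itself is conceptually simple; a sketch (with the transport abstracted behind a broadcast callback, since the real system rides on the S3 messaging layer) might look like:

```typescript
// Sketch of real-time tag propagation in Roadshow. The broadcast callback is
// a stand-in for the actual S3 messaging layer; only the bookkeeping is shown.
interface TagEvent {
  postId: string;
  tag: string;
  author: string;
}

class TagPool {
  private tags: Set<string>;

  constructor(seedTags: string[], private broadcast: (e: TagEvent) => void) {
    this.tags = new Set(seedTags); // start with the author's pre-seeded tags
  }

  // Called when a user attaches a tag to a contribution: record it (so users
  // who join later still see it) and push it out to every connected tablet.
  addTag(event: TagEvent): void {
    this.tags.add(event.tag);
    this.broadcast(event);
  }

  // The full emergent vocabulary: pre-seeded tags plus user-added ones.
  allTags(): string[] {
    return [...this.tags];
  }
}
```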

Finally, discussion could take place using the large aggregate display as a mediator and avenue for organizing and filtering ideas. Here users (or a central mediator) could drag the different contributions around the board, making “collections” of ideas in order to find themes, topics of interest, or points of conflict. The final layout of the interactive board could be saved and recalled later for future discussion or reworking.

Although Roadshow is still very much in its infancy, there are several avenues that we are exploring for future iterations. Primary amongst these is creating more dynamic interaction patterns between individual contributors and both their immediate social spaces (individual locations) and the broader network (whole room): thinking about how we can get individuals from one space to connect and build on the ideas of participants in others, to get a greater sense of how their ideas connect and contrast, towards building new opportunities for knowledge construction. Additionally, similar to other work with S3, we want to think about how the inclusion of ambient technologies can give participants a greater sense of community belonging, and feelings of spatial relevance and embodiment; and how the inclusion of intelligent software agents can help in the spatial coordination of people in these spaces and in facilitating the productive interaction patterns described above.

Rock, Paper, Awesome goes live!

Yesterday we successfully completed the first test run of Rock, Paper, Awesome! (RPA), extending the Sail Smart Space framework into the realm of spatial, tangible, and distributed interactions.

The Run

We set up two “stations” at OISE, one on the 3rd floor and one on the 11th floor. Players challenged each other to a game of rock, paper, scissors (see the video below).

Each location had different affordances for tangible, audible, and visual awareness to give the players sensorially unique experiences that conveyed the same message. At the third floor location, a “servo motor” swung a dial to let the player know a challenger was waiting to play. At the eleventh floor location, an LED flashed to convey the challenge. We have tested other designs (not shown here) that used proximity sensors to detect where players were within a room, using their location to trigger an event (such as choosing rock). In another instance, a light sensor conveyed one player’s availability to other players (in remote locations) when the lights in the original player’s room were on.

The Theory

To us, RPA is more than just a game of rock, paper, scissors; it is an avenue for us to begin investigating novel ways for users to interact with the world, and for connecting these investigations within a broader knowledge community. We aim to not only connect these communities, but also to add a layer of user-contributed design to their interactions, where community members engage in creative fabrication and exchange of tangible, interactive media that reflect their ideas, workflow or presence, bridging the distances and connecting the community.

Moving forward, there are some critical questions that are guiding our research into these new spaces:

  • How can we bring such communities more closely together?
  • What are the possible roles for tangible and physical computing, and ambient or interactive media that are deeply connected to the semantics, workflow, physical presence, ideas, activities, and interests of the distributed communities?

We are currently sending out kits, first versions of the code, and design docs to labs at the Learning Technologies Group at the University of Chicago, and Intermedia at the University of Oslo. We are excited to see how they develop and contribute new interactive designs that reflect their own conceptions of space and meaning within the game.

The Technology

The physical interactions and ambient feedback are handled by an Arduino microcontroller. The Arduino allows users to develop a wide array of inputs (e.g., proximity, light, and sound sensors, buttons and levers) and outputs (e.g., sound, light, movement).

Using the S3 framework, RPA facilitates the different game “events” (e.g., joining the game, choosing Rock) by sending messages over an XMPP chatroom (conference). We originally attempted to implement these messages over the XMPP server using only the Arduino – however, given the relatively limited amount of RAM on the Arduino board (2 KB), this turned out to be overly restrictive and we started looking at other solutions.

As a solution to this issue, we ended up making a simplified set of event messages (single text characters) that were sent over the Arduino’s serial port to a connected computer. For testing purposes we used a laptop; however, in permanent installations, we envision RPA having a more compact and flexible setup. In order to achieve this, we connected the Arduino board to a Raspberry Pi. The benefit of the Raspberry Pi is that it is small and cheap, allowing us to dedicate a Pi to each game installation, and to have the “brains” of RPA be as unobtrusive as possible.

In order to connect the various RPA installations we use node.js as an intermediary between the XMPP chatroom and the Raspberry Pi. Messages that are posted to the XMPP chatroom are picked up by the node.js server and sent over the serial port to the Arduino, which then executes the user-designed action, such as turning on a light or playing a chime. Conversely, any event triggered on the Arduino (e.g., a button press) is sent over the serial port to node.js and translated into an XMPP message.

Sample Arduino code for RPA and the node.js setup code can all be freely downloaded, tinkered with, and customized from GitHub.

Cross-posted from EncoreLab.org

MathRepo

There have been ongoing discussions amongst educational researchers concerning how teachers can support students in making connections between mathematics topics (The National Council of Teachers of Mathematics, 2000). Conventional instruction, with its sequential presentation of materials in textbooks and the rote completion of problem sets, often fails to help students develop a deep understanding. This is particularly true in regard to the interconnections amongst mathematical concepts, which often come across to students as completely separate topics (Hiebert, 1984).

In response, working closely with a school math teacher, we co-designed (Penuel et al., 2007) a curriculum to engage several small groups of students working in parallel as they “tagged” a common set of math problems. In so doing, a collaborative visualization emerged as the curriculum synthesized the combined tags from all groups. A set of thirty problems developed by the teacher belonged to one or more of four category groups: Algebra & Polynomials, Functions & Relations, Trigonometry, and Graphing Functions. The basic goal of this activity was to help students understand the relationships between these four aspects of mathematics by having them visualize the association of math problems with multiple categories.

Within our S3 classroom, students were automatically grouped and placed at one of the room’s visualization displays, and, using laptops, were asked to “tag” (label) a total of 30 questions. Each group’s display showed a graphical visualization of their collective responses. Students were then asked to collaboratively solve their tagged questions and to vote and comment on the validity of other groups’ tags. A central display showed a larger real-time aggregate of all groups’ tags as a collective association of links. As students voted on these tags, those that garnered agreement were drawn with thicker link lines than those that fostered disagreement.
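One way to picture that “thicker link lines” rule is as a simple function from a tag’s votes to a stroke width; the constants below are assumptions for illustration, not the values MathRepo actually used.

```typescript
// Illustrative rule for the aggregate visualization: links whose tags attract
// more agreement are drawn thicker. Scaling constants are assumptions only.
interface TagVotes {
  agree: number;
  disagree: number;
}

function linkWidth({ agree, disagree }: TagVotes): number {
  const minWidth = 1; // disputed links stay thin but visible
  const maxWidth = 8;
  const net = agree - disagree;
  return Math.max(minWidth, Math.min(maxWidth, minWidth + net));
}
```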

Preliminary findings, while representing only a small number of participants, showed an upward trend of increasing accuracy and structuredness for the experimental condition. The improved accuracy from the pre-test to the curriculum activity and post-test suggests the importance of how we ask students to make connections to problems, with greater accuracy derived from a collaborative design which shares responsibility. The structuredness, which measured students’ recognition of the connections, shows increasing willingness to characterize math problems from different perspectives.

Overall, students found the visualizations useful in showing different mathematical themes from which a problem could be approached. One student indicated that the visualization was helpful when he could not solve a problem. Students also stated that, over time and with more contributors, the system would become increasingly valuable for studying purposes.

Students also commented that they became more cognizant of the connections amongst mathematics ideas and themes. It is noteworthy that students gained awareness that one could discuss properties of math problems and their relevant themes rather than simply answer them.

Watch the video below:

CSCL 2011 – Great talks and Dim Sum from across the Pacific

Sorry this one took a bit – between the jet lag and all of us here trying to get three (!!!) projects off the ground and running for the end of September, things have been a little hectic over here as of late.

As many of you know, and many of you managed to attend, last month was the biennial conference on Computer Supported Collaborative Learning (CSCL) – and a small band of us headed over the pond to Hong Kong to showcase what we’ve been up to for the past twelve months and have a unique chance to be in the room with some of the best minds in our field and see what they’ve been up to and to share ideas (and the occasional beer).

Here are some of my own experiences at the conference:

It didn’t take long after landing to get into the thick of things as I started the conference off with a workshop on orchestration (How to integrate CSCL in classroom life: Orchestration) chaired by Miguel Nussbaum, Pierre Dillenbourg, Frank Fischer, Chee-Kit Looi, and Jeremy Roschelle.

The workshop centered around whether or not orchestration was the right term for the complex conditions that a teacher must respond to within a live classroom setting, and whether the metaphor itself was the right way of looking at it – especially within the context of the role of the teacher as the “conductor” of the class. It was argued that in real life the conductor does very little during the live show (they wave their hands around to subtly adjust the pace of the performers, but the orchestra is so finely tuned at that point that there should be very little variance), yet is imperative during the rehearsals and preparation – so how adequately does this describe what the teacher actually does (or does something else describe it better)?

Presenting at CSCL 2011

My personal feeling on this is that the metaphor overall works quite well, so long as we don’t press its interpretation too rigidly (avoiding what Baudrillard would call the vanishing point) – it is not so much the reality of what the conductor does within the live performance but how we perceive his or her actions that frames our understanding of classroom orchestration. The complex dynamics of a live classroom setting resemble an orchestra (as Koller et al., 2011 might offer), as different “players” must coordinate their parts within the enactment – and it is the job of the conductor (teacher) to ensure that these individual pieces come together to create a unified sound (learning goals). In an inquiry-focused curriculum this becomes increasingly important, as the outcomes are often unknown and the learning only takes shape through the constant evolution of an emergent script.

The notion of orchestration seems particularly poignant in technology supported learning environments, where students must engage in various contexts (individual/small group/whole class, in class/at home/in the field) and the teacher must have a clear understanding of how the learning is progressing (think of this as having an “ear” for the learning). Technology can play a critical role in giving the teacher insight into the evolving state of knowledge within the class, in order to better adapt activities to the needs of individual students or the whole class. Whether or not we love the metaphor of orchestration, the workshop explored an important question of what information and tools (“instruments”?) we should give to students and teachers, to help capture and represent the knowledge in relevant ways. This question is one that serves as a focus for many of the designs currently underway in our lab.

The workshop also touched on the issue of focus within the construct of orchestration, noting that there are actually two intersecting, and sometimes competing, threads in the development of technology mediated learning spaces: The Learning Sciences aspect of the innovation, and “design” or HCI aspect of the innovations. ENCORE lab also comes up against this tension, as we often find ourselves designing with exciting new technologies, but without a theoretical perspective about learning to drive the design and development, we could end up distracted by the design or HCI side of the innovation. There is a need to find the balance between these two (given the finite resources we all are faced with), which was one takeaway from the workshop that really stuck with me as I move forward into designing my own dissertation environment and materials.

Following the workshop, many of us from ENCORE were quite busy. Michelle Lui presented a nice narrative about the evolution of our smart classroom designs, including the aggregation of student data for sense making and teacher orchestration. Cheryl Madeira did an excellent job summarizing her recent work (especially given the last minute scheduling change!) on how technology supported reflection and peer exchange can support teacher professional development. Naxin Zhao and Hedieh Najafi presented their work on how scripted collaboration facilitated knowledge communities in science classrooms. Jim Slotta and Tom Moher (via a cool video production) presented our new work with Embedded Phenomena in a great symposium on embedding CSCL in classrooms (see link at the bottom). Finally, I had a great opportunity to demo the new SAIL Smart Space (S3) technology framework, using the “Helioroom” materials developed in conjunction with Tom Moher and his group at the University of Illinois, Chicago – as mentioned by Rebecca Cober in a recent post (link).

We also managed to find a bit of time for fun, which mostly centered around cool places to eat (when you’re in conference rooms all day, you tend to just want to relax afterwards!) – but two stuck out to me in particular. The first was a great dinner at the Pearl on the Peak – a restaurant that sits at the top of Victoria Peak in Hong Kong and provided an amazing view of the city. The other was a lunch at a place called Julie’s Kitchen, which was literally just a room in someone’s house where we were treated to a 10-course vegetarian meal, one of the most memorable I’ve had in years! I did miss out on another truly great experience where a bunch of the crew went to go see wild monkeys in a nearby park… all I can say is that apparently they learned that you never look the monkeys right in the eyes, but I’ll let one of them tell you that tale.

CSCL 2011 Team

Smart Classroom Concept Video

This short video shows one of our earliest prototype concepts around the kinds of interactions we envision taking place in a fully interactive smart classroom.

This video shows a teacher launching an activity using a multi-touch table, which then displays the activity content on the large format displays throughout the classroom.

The students can move freely throughout the room and capture data on their smartphones.

The students then “connect” with an interactive table, which displays each student’s collective artifacts. The students can then discuss and share these artifacts using the multi-touch table as the mediator.