Combining science inquiry and making for only $5

The awesome folks over at Raspberry Pi (RPi) recently announced the newest version of the Raspberry Pi, the Pi Zero, coming in at only $5!


To me this really signals a shift in small, inexpensive hobbyist computing and offers huge potential for classroom and informal maker-style learning environments. Up until now, if you wanted to get a classroom started with making, you had to shell out $25 for a full-fledged Raspberry Pi, which in addition to being much more expensive is significantly bulkier. And while Adafruit carries the similarly sized Arduino Pro Mini and the Arduino Gemma, both are about twice the price (and I know $5 vs. $10 doesn’t seem like much, but when you’re outfitting 40 kids in a classroom it begins to add up!), don’t have much in the way of storage space, and can be a huge hassle to get communicating back and forth over a network (as I learned in one of my earlier projects).

Ok, now that we have a pretty powerful computer in a teeny tiny package, what do we do with it? Well, one of the things that I’m passionate about is inspiring kids to ask questions about the world around them and to engage in practices that are authentic to Science, Technology, Engineering, and Math (STEM) careers. And while I’ve done some projects (such as the smart prosthetics workshop) where participants designed and built things, those designs were never released “into the wild” – mostly because until now the hardware was too expensive and it was difficult to store and retrieve the data in a reliable way. This meant that students were often forced to use “black boxed” sensors and devices… and to me this only gives students half the experience.

But at only $5, the Pi Zero has made things very, very interesting!

With its ability to connect to WiFi, output video over HDMI, and work with a wide array of sensors, we finally have a cost-effective way to have kids design and refine the tools that drive their investigations! For instance, we could task a class with designing a tool that tests how “good” their neighborhoods are for growing a garden. We can let them decide whether they want to build a tool that captures an area’s sunlight, moisture, or soil pH levels, or even takes pictures to see if there are “predators” (like rabbits that will eat their crops!). By doing this we give students agency, which is critical for engaging them in sustained inquiry. And by leveraging the WiFi capabilities, all the data can be broadcast back into the classroom, to be aggregated and visualized for the students to examine (AT ANY TIME!). This is a big deal and offers true ubiquitous “always-on” opportunities for learning.
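
To make this concrete, here is a minimal sketch (in Python) of what one group's tool might look like. It assumes a hypothetical read_moisture() stand-in for whichever sensor the students actually wire up, and a made-up classroom server at classroom.local that collects the readings for aggregation and visualization – none of these names come from a real project.

```python
# A minimal sketch of the "broadcast readings back to the classroom" idea.
# Assumptions (not from the post): a hypothetical read_moisture() function for
# whatever sensor the group wires up, and a classroom server that accepts JSON
# at http://classroom.local:5000/readings.
import random
import time

import requests

CLASSROOM_URL = "http://classroom.local:5000/readings"  # hypothetical endpoint
GROUP_ID = "group-07"                                    # hypothetical group name


def read_moisture():
    """Placeholder for whatever sensor the students wire up (e.g., a soil-moisture
    probe read through an ADC). Returns a simulated value between 0 and 1."""
    return random.random()  # replace with the group's real sensor code


def main():
    while True:
        reading = {
            "group": GROUP_ID,
            "sensor": "soil_moisture",
            "value": read_moisture(),
            "timestamp": time.time(),
        }
        try:
            # Send the reading over WiFi so the class can aggregate and
            # visualize everyone's data back in the classroom.
            requests.post(CLASSROOM_URL, json=reading, timeout=5)
        except requests.RequestException:
            pass  # keep sampling even if the network drops for a moment
        time.sleep(60)  # one reading per minute


if __name__ == "__main__":
    main()
```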


Now there are still hurdles to overcome – making some of these tools can take a bit of work, and they still have to be programmed – but with the rise of visual programming languages (such as Scratch or BlockyTalky from CU Boulder) we are nearing an exciting point in learning where students are in the driver’s seat throughout the inquiry cycle: asking the driving questions; designing the tools to answer them; implementing them in their classrooms, homes, and neighborhoods; capturing the data; answering their questions; and refining their designs.

Personally, I can’t wait to get my hands on a pile of these and start working with teachers and makerspaces to develop some exciting hands-on constructionist activities!

TED-Ed Clubs helps students create their own TED talks

TED-Ed Clubs is an exciting new venture by the people over at TED-Ed that aims to get students aged 8-18 to develop – with the help of an adult facilitator – their own TED-style videos to share their ideas with fellow students all over the world.

To me it seems like a great way to inspire student inquiry and to get students to present their ideas to a broader audience of their peers. Sure, students could probably just make videos on their own (and many classes do this), but the opportunity to engage globally with peers as a community of engaged learners sits at the heart of a lot of the transformational learning practices we often advocate for.

Literature in the field of learning sciences points to students often engaging more deeply with content when they know there’s an audience for their work, and I would imagine this is especially true for a brand as well known as TED.

There are a few steps that a club needs to take in order to become a TED-Ed Club, but registering (in addition to being added to the TED-Ed Club network) provides educators with supporting materials and hands-on support from TED-Ed staff.

I’m excited every time I see a new opportunity to support students in learning about topics they are passionate about, and to provide them with the support to share that passion with a larger audience. There’s also the corollary benefit of students getting the opportunity to work hands-on with a range of multimedia tools, and to develop their speaking and presentation skills while making their videos (skills that I feel should not be overlooked as part of a student’s academic repertoire).

As a bonus, I’ve posted below a TED video made by a young student on hacking his own education.

The importance of narrative in design for children


Last week, as part of the Interaction Design and Children Conference (IDC) in NYC, Rebecca Cober and I had a chance to take part in a workshop at the New York Hall of Science (NYSCI) entitled Narrative Contexts as a Design Element. The workshop was organized by Peggy Monahan and Dorothy Bennett, with special guests Jessie Hopkins and David Glauber from Sesame Street Workshop (JOY!). The goal of the workshop was to look at some of the existing exhibits at NYSCI and try to improve them by adding a rich layer of narrative through simple low-fi prototypes (e.g., using cardboard, felt, string, or just pen and paper). What was really amazing about the experience was that our designs were for live exhibits. Within minutes we got to see how the narrative elements we added, using felt and cardboard, affected the experience of children and their parents in the museum!

We started by observing children interacting with exhibits in the light and optics area of NYSCI. The light and optics area was chosen by the workshop organizers because most of the displays had little or no narrative elements, making them like “blank canvases” for the workshop participants. Based on what we had just learned about narrative elements (see the list below), we discussed what could be altered in each display to make it i) more engaging and ii) more likely that children (and their parents) would think and talk about the underlying scientific principles of each display. Once we had a good sense of which display we wanted to tackle, we broke into groups and sat down to the hard work of actually constructing the narrative and building the prototypes.

In thinking about the narrative, Jessie and David had us focus on four “narrative ingredients”:

1. The Mood

  • It’s achieved through the visual contexts of the interactions, and through sound, atmosphere, music, and other sensory cues.
  • It’s a quick and effective means of connecting with the participants on an emotional level.

2. The Protagonist

  • Tell the user who they are and why it matters that they are there. Why is their participation in the story critical for its outcome?

3. The Relationships

  • They tell us how and why we matter to others, and should involve at least two people (these can be real or made-up people… or animals, robots, etc.)

4. Humor

  • Jokes are always good (especially with children), and slapstick beats clever jokes (it’s about the audience, not you!)
    • A critical question that should always be asked: where are the bananas and underpants?

The project we selected to work on was a large (approx. 2 meter by 2 meter) “light wheel” where a single LED bar flashed, sped up or slowed its rotation, and brightened or dimmed depending on the position of three dials on a control board: Frequency, RPM, and Voltage. During the initial observation we noticed that children (and parents alike) tended to walk up to the exhibit, turn the knobs (apparently) randomly for a few seconds, and walk away. There was little to no discussion among participants, and their “experimentation” lacked any sort of methodology to indicate the children were thinking deeply about the concepts.

Exhibit’s dials without scaffolds

While observing the exhibit, we looked at the patterns the LED bar made while spinning and asked ourselves, “If this could be something other than a boring box, what could it be?” We wanted something that would not only get the students to engage with the exhibit, but to do so critically, requiring them to think about how each of the dials affected the shape made by the flashing LED bar. After several brief discussions, Rebecca and I realized that at certain points during the bar’s rotation the lights looked exactly like a cat’s whiskers! We thought that by making the exhibit a challenge (making the cat’s whiskers align with its face), we could get the kids to focus more critically on how each dial affected the pattern and helped them achieve the goal.

Full shot of Friskers in action

Once we had our concept, we set about making the design a reality. We had a little over an hour, so using felt, cardboard, string, and glue we began to transform the black screen into a black cat! Thinking about the need to build a narrative, we wanted to give participants a reason to play with our exhibit, so we named the cat “Friskers” and built a story around the participants needing to help Friskers find his whiskers (by adjusting the knobs), and Rebecca drew up some large signs to explain this.

Rebecca’s Narrative for Friskers

With the exhibit up and running, we sat down to observe children and their parents interacting with it. We noticed that children were immediately drawn to the exhibit – running over, excited by the large cat, to see what it was – however, they didn’t spend a lot of time really trying to “solve” the exhibit; they turned the knobs but didn’t seem to focus on what each one meant (and neither did the parents). We wanted to figure out why this was, and we realized that the notions of “Frequency, RPM, and Voltage” were simply too abstract for the children (and even many of the parents) to draw connections to in a short period of time. In response we created three scaffolding signs to help children make connections between each term and what it did in terms of the lights (image below).

Once we did that, the whole experience changed! Children started spending far more time (often minutes) on the exhibit, and actually discussing it with their friends (and parents)! The scaffolds helped the children better experiment and inquire about the science going on in the exhibit, and also helped parents make connections to each of the terms so they could start to talk with their kids about it in more depth.

Friskers dials with instructions

The end result was a massive shift in the engagement and discussion around the exhibit, to the point where the NYSCI coordinators wanted to keep it up after our trial! We “demoed” it to the rest of the IDC participants later in the week and one attendee spent over 40 minutes playing with Friskers and talking to others about it. Overall it really drove home to us how critical it is to think about the narrative of the educational interventions that you design, the role of carefully designed scaffolds, and the need to iterate based on watching your designs “in the wild”.

LASI Day #3 – Morning Sessions Smorgasbord

Now onto day 3 at LASI 2013. A lot happened this morning across three panels and a 45-minute breakout (birds of a feather) session, so I’m just going to touch on a few things that really stood out to me.

Taylor Martin & Nicole Forsgren Velasquez gave a really nice talk about the kinds of learning analytics they are using to understand and evaluate student strategies in problem solving (in particular using a game they have developed called Refraction). What they were able to do is break down student solving strategies into 5 different categories (Slow & Steady, Haphazard, Explorer, Strategic Explorer, and Careful), and perhaps more interestingly, they were able to understand the results each of these strategies tended to produce (I’ll link to their PowerPoint in a bit so you can get the nitty-gritty). Phil Winne followed this up with another really interesting talk about understanding students’ thinking and trying to get “into” their heads – both these talks really drove home the point that one thing LA can do is help us get a sense of how kids are thinking about and acting on/within the curricula we design. The last speaker in the session was Sidney D’Mello, who talked about students’ emotions and learning – in particular he offered the approach of “Learning with Contradictions”, an approach that promotes disequilibration of students to prompt reflection and learning, and that definitely resonates with Manu Kapur’s ideas around Productive Failure (2008). Check out the whole session here.

During the breakout session later in the morning, Alyssa Wise organized a group of us around trying to get at the big ideas of “What problems do we think learning analytics can solve? WHOSE problems are they?” I thought this was a great way for us to start really thinking deeply about why we’re at this conference and what we’re trying to get as a productive outcome to help us in shaping our research and the field. Judy Kay had our group brainstorm what these questions meant to us, and what kinds of smaller, more focused questions could help us answer the larger ones posed by Alyssa. For me (and this is a bit different than some of the others at the conference) the challenge that I believe LA can help address is supporting complex real-time enactments, especially in unpredictable inquiry activities. To this end there is an interesting issue about the “granularity” or scale of the analytics and the interventions that we want to inform/act upon. To me there are at least 3 that stand out as a baseline:

  • Real-time in class, supporting on the fly decisions about classroom orchestration
  • After or between classes, giving more detailed information about the state of the class’ knowledge or performance to aid in scripting upcoming classes
  • After the course/unit, for assessment and also for self-reflection on the teaching and learning outcomes (this might be really valuable for students too)

Finally, Phil Winne reminded us that students are agents who make choices throughout learning activities, even unexpected ones.

LASI afternoon workshop – Gooru: Personalized learning with data

The afternoon session at LASI gave us a chance to look at Gooru, a “search engine for learning”. It’s an interesting tool that aims to foster a blended learning platform, allowing teachers to pull full curricular activities, or small snippets of activities, from existing repositories (e.g., Khan Academy, National Geographic). Teachers can implement quizzes, interactives, web pages, and other rich media content, and their own designs can be uploaded to the repository for other instructors to use or remix. Underneath all of this is a rich (HUGE) amount of tracked user data to give teachers insight into the state of their class (from individual students to whole-class information). So much data can be both good and bad though.

Prasad Ram (the project lead) highlighted some of the challenges in this space for practitioners: What’s the granularity that the teacher is interested in? Is it one student? The whole class? What kinds of reports are actually useful to the teacher? How does this data allow teachers to personalize the learning for students?

These questions are not easy ones to answer and in many ways lie at the heart of effective Learning Analytics implementation. One approach for making this data relevant is through effective visualizations – but as many people know good visualizations are hard (and bad visualizations are worse than none at all!). Gooru is still trying to think of all the ways that these might be used and so far they are making headway, although I don’t envy their task. I think this is something that is going to take a lot of work and they will have to tread lightly to not overwhelm the teachers – giving them everything may result in them using nothing (see orchestrational load).

I applaud their really interesting approach to large-scale implementation of such an ambitious platform, but I have to admit that their overall model concerns me a bit. It seems that most of their curricular designs fall into the lecture/drill/quiz model (also popular with Khan Academy, though I actually think Gooru does it better) – which puts the learning a bit too much on rails for my tastes. Also, the work seems to be very much siloed to individual students (rather than collaborative work), which goes against some of my ideas on the need to support the authentic STEM practices required in today’s “Knowledge Society”. If I were to push, I’d like to see how Gooru could use its vast collection of knowledge resources to support students in collaborative inquiry curricula.*

* Talking to Prasad at the end of the talk, he mentioned that there are social features built into Gooru but that these elements (for moderation, abuse filtering) haven’t been fully built out yet. I’d be interested to see further how these are done and the kinds of interactions they are supporting, as I see this as what would make it really transformational.

 

Quick thoughts on the opening sessions of LASI 2013

After listening to a really nice opening panel at LASI 2013 on the idea of Big Data and Learning Analytics (LA), a couple of things came to mind about this emerging field.

I’m always worried that the ideas of LA are going to put learning too much “on rails” – that is to say that we automate the process so much that students and teachers are taken out of the decision and learning processes by simply crunching the “data” and making decisions for them. I’m heartened to see that this concern is shared by the panel as well.

Alyssa Wise mentioned that LA needs to be “learner centered”, which I think is vital; even as we begin to gather and process all of this data to make sense of it, we need to remember that it’s about the students, and all of our practices need to be focused on how we can help and enable learners to learn. I was glad that Dan Suthers also pointed out that learning at its core is a complex phenomenon, but that there is a promise held out by LA to help us “understand and manage learning in its full complexity”, and to help optimize learning. My big question, and one that I think should be central to this whole conference, is: what do we mean by optimizing learning? A lot of the ideas at this conference are about this notion of optimization, and I hope that we continue to discuss/debate what optimization means within complex and varying learning communities and approaches.

This idea of Dan’s goes very nicely with George Siemens’ idea of increasing learner individualization, and with Phil Winne’s idea of engaging learners as participants in an ecology of experimentation. We want students to be authentic drivers of inquiry, investigation, and knowledge construction, and we want to leverage LA as a means of aiding them in these processes – by connecting them to new peers, new resources, and new ideas that they may have otherwise been “blind” to (similar to what Dan said about weak ties).

My personal hope is that LA does live up to this ideal of really empowering learners to learn in ways that otherwise would be impossible (or prohibitively time consuming), and also critically giving teachers insight into the state of their class’ knowledge (and perhaps deeper information of the “global” state of knowledge) to drive learning and exploration in exciting new ways.

Only one morning in and so far very interesting and exciting – looking forward to the next few days!

Ingress – Finally doing augmented reality right and what it means for education


An example of a user’s Ingress information screen

So I’ve been playing Ingress for about a week now (after bailing on it after only a few hours the first time) and it’s pretty cool that it has essentially spawned a “sub-reality” that is very actively happening unbeknownst to the general population, who are going about their daily lives around the city. This is really the most salient version of “augmented reality” that I’ve seen. I’ve tried out many other failed attempts at Augmented Reality (AR) that use mobile cameras to overlay information on the real landscape. Generally you spend most of your time spinning around trying to get your camera into the exact right position to see the information that someone has tagged to a physical space (like a building), and it almost never works right. Instead, Ingress has bypassed the need to orient a camera on specific objects, opting instead for “mostly” accurate GPS positioning that puts you in the vicinity of real-world objects. You can interact with these objects (which have digitally imprinted information on them) on your smartphone. In the case of Ingress this involves two teams battling for “global supremacy” by taking control of portals that show up in the app on a modified version of Google Maps. Users interact with these portals by clicking on them and then choosing actions such as powering up their own portals (to withstand enemy attacks) or attacking their opponents’ portals (to try and take them over for your side).

An example of an Ingress portal interaction screen, where you can power-up your own or attack an enemy portal

This is just a smarter way of doing it… and it works surprisingly well. Google has done a great job of making the game inherently and deeply social, and that’s what makes it so interesting. You can play alone, but your experience will be fairly stunted and so will your progress – you need to work as a team to complete objectives and to help you along the way. What is perhaps even more interesting is suddenly being aware of those around you who are playing – people you only notice once you’re part of the game. Being in an area and having your portal attacked makes you look around to find the other people with their heads in their Android phones, in an effort to figure out who attacked you. I had one encounter where I was trying to figure out who was attacking me and I saw another guy look up from his phone, smile at me, nod, and walk about 20 meters farther away, still tapping away at his phone. We were both sharing a rather unique, highly interactive, and deeply social moment, and we were the only two people in a large crowd who knew it – now that’s great AR.

It makes me think more about how these kinds of applications might fit in an educational setting and what kind of information we can or should be overlaying within a physical space to augment student-learning practices. Fine-grained tracking of students within a space is very tricky, and therefore learning designs that aim to use such positioning information often struggle to provide meaningful interactions. For many of these projects, designers must address the challenges of balancing the desire for the system to automatically detect students and react to their position versus having students intentionally log into a space to “announce” their presence. In the case of the latter, you reduce the variability of incorrectly positioning the student, but similarly you reduce the spontaneity of simply walking into a location. It also requires providing carefully placed stations for logging in or specific interfaces on the student device (which provide their own risks of students logging into the wrong space).

An example of a student using a handheld at a station in Ambient Wood

Some projects, however, like Ambient Wood, have done some very interesting work in automatically leveraging students’ physical location for unique learning opportunities. In Ambient Wood, students conducted investigations in an outdoor wooded area, and their mobile devices served to augment those investigations by providing context-specific information based on where they were within the woodlands. Ambient Wood actually blends automatic detection in some areas with intentional, student-driven login at others. What Ambient Wood doesn’t do, which is something that I’ve tried in my own work in projects like neoPLACE and Roadshow (admittedly only with intentional, student-centered authentication), is develop ad-hoc social networks based on location – that is to say, connecting users in real-time to those who occupy a physically and semantically similar space. Through these means we have the opportunity to have students collaborate and build meaning together, and to connect this meaning making to others dynamically and in real-time.
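
As a rough illustration of that idea (and not a description of how Ambient Wood, neoPLACE, or Roadshow actually work), here is a minimal sketch of forming an ad-hoc network by linking learners who are both physically close and working on overlapping topics; the distance threshold and topic-overlap rule are purely illustrative assumptions.

```python
# A rough sketch of an ad-hoc social network based on location: connect users
# who occupy a physically *and* semantically similar space. All names and
# thresholds here are illustrative assumptions, not from any real system.
from dataclasses import dataclass
from itertools import combinations
from math import hypot


@dataclass
class Learner:
    name: str
    x: float          # position in the space (metres, from whatever positioning is available)
    y: float
    topics: set       # e.g., {"optics", "reflection"}


def should_connect(a: Learner, b: Learner,
                   max_distance: float = 10.0,
                   min_shared_topics: int = 1) -> bool:
    """Link two learners if they are physically close and share at least one topic."""
    close_enough = hypot(a.x - b.x, a.y - b.y) <= max_distance
    related = len(a.topics & b.topics) >= min_shared_topics
    return close_enough and related


def build_adhoc_network(learners):
    """Return the pairs of learners that should be connected right now."""
    return [(a.name, b.name)
            for a, b in combinations(learners, 2)
            if should_connect(a, b)]


# Example: two students near the same spot investigating related questions get
# linked; a distant student working on a different topic does not.
students = [
    Learner("Ana", 0, 0, {"moisture", "habitats"}),
    Learner("Ben", 4, 3, {"habitats"}),
    Learner("Caro", 80, 60, {"light"}),
]
print(build_adhoc_network(students))  # [('Ana', 'Ben')]
```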

The semantic aspect I mention above is something that Ingress does really well (with each team having its own representation of the “game state”) and I think it has real potential for education. Stephen Graham called these the “invisible spaces” that sit on top of and between the fabric of traditional geographic space – a varied skein of networks that weave through our physical spaces. To me this holds promise for designing learner- and context-specific representations of the learning environment customized to the individual goals of the learner within that space, and for connecting the learner to the information and people that are relevant to them (and perhaps more importantly, filtering out what is not, or is simply “noise”).

Imagine multiple students investigating driving inquiry questions within a physical space, receiving timely and context-specific tasks on their personal devices based on where they are and who else is sharing their space – working with their peers to advance their own understanding and that of the larger knowledge community. As they move through the space, an intelligent software agent tracks and understands their evolving learning pathway, connects them with a new group of students, and sends new context-relevant information and specialized overlays about their surroundings to their device. An augmented reality focused on learning, where both space and context are deeply interwoven into students’ interactions – not just great AR, but great AR for learning.

PLACE.Web – Orchestrating Smart Classroom Knowledge Communities


PLACE.web (Physics Learning Across Contexts and Environments) is a 13-week high school physics curriculum in which students capture examples of physics in the world around them (through pictures, videos, or open narratives), which they then explain, tag, and upload to a shared social space. Within this knowledge community, peers are free to respond, debate, and vote on the ideas presented within the examples towards gaining consensus about the phenomena being shown, empowering students to drive their own learning and sense making. We also developed a visualization of student work that represented student ideas as a complex interconnected web of social and semantic relations, allowing students to filter the information to match their own interests and learning needs, and a teacher portal for authoring tasks (such as multiple choice homework) and reviewing and assessing individual student work. Driven by the KCI Model, the goal of PLACE.web was to create an environment in which the class’ collective knowledge base was ubiquitously accessible – allowing students to engage with the ideas of their peers spontaneously and across multiple contexts (at home, on the street, in class, in a smart classroom). The PLACE.web curriculum culminated in a 1-week smart classroom activity (described in depth below).

To leverage this student-contributed content toward productive opportunities for learning, we developed several micro-scripts that focused student interactions and facilitated collaborative knowledge construction:

  • Develop-Connect-Explain: A student captures an example of physics in the real world (Develop), tags the example with principles (Connect), and provides a rationale for why the tag applies to the example (Explain).
  • Read-Vote-Connect-Critique: A student reads a peer’s published artifact (Read), votes on the tags (Vote), adds any new tags they feel apply (Connect), and adds their own critique to the collective knowledge artifact (Critique).
  • Revisit-Revise-Vote: A student revisits one of their earlier contributions (Revisit), revises their own thinking and adds their new understanding to the knowledge base (Revise), and votes on ideas and principles that helped in generating their new understanding (Vote).
  • Group-Collective-Negotiate-Develop-Explain: Students are grouped based on their “principle expertise” during the year (Group), browse the visualization to find artifacts in the knowledge base that match their expertise (Collective), negotiate which examples will inform their design of a challenge problem (Negotiate), create the problem (Develop), and finally explain how their principles are reflected in the problem (Explain).

Over the twelve weeks, 179 student examples were created, with 635 contributed discussion notes, 1066 tags attached, and 2641 votes cast.

Culminating Smart Classroom Activity

The curriculum culminated in a one-week activity where students solved ill-structured physics problems based on excerpts from Hollywood films. The script for this activity consisted of three phases: (1) at-home solving and tagging of physics problems; (2) in-class sorting and consensus; and (3) the smart classroom activity.

PLACE.web Culminating Script

In the smart classroom, students were heavily scripted and scaffolded to solve a series of ill-structured physics problems using Hollywood movie clips as the domain for their investigations (e.g., could Iron Man survive the fall shown in the clip?). Four videos were presented to the students, with the room physically mapped into quadrants (one for each video). The activity was broken up into four different steps: (1) Principle Tagging; (2) Principle Negotiation and Problem Assignment; (3) Equation Assignment, and Assumption and Variable Development; and (4) Solving and Recording.

At the beginning of Step 1, each student was given his or her own Android tablet, which displayed the same subset of principles assigned from the homework activity. Students freely chose a video location in the room and watched a Hollywood video clip, “flinging” (physically “swiping” from the tablet) any of their assigned principles “onto” the video wall that they felt were illustrated or embodied in that clip. They all did this four times, thus adding their tags to all four videos.

In Step 2, students were assigned to one video (a role for the S3 agents, using their tagging activity as a basis for sorting), and tasked with coming to a consensus (i.e., a “consensus script”) concerning all the tags that had been flung onto their video in Step 1 – using the large format displays. Each group was then given a set of problems, drawn from the pool of problems that were tagged during the in-class activity (selected by an S3 agent, according to the tags that group had settled on – i.e., this was only “knowable” to the agents in real-time). The group’s task was to select from that set of problems any that might “help in solving the video clip problem.”

In Step 3, students were again sorted and tasked with collaboratively selecting equations (connected to the problems chosen in Step 2) for approaching and solving the problem, and developing a set of assumptions and variables to “fill in the gaps”. Finally, in Step 4, students actually “solved” the problem, using the scaffolds developed by the groups who had worked on their video in the preceding steps, and recorded their answer using one of the tablets’ video cameras – which was then uploaded.

Orchestrating Real-Time Enactment
PLACE.web students at the board

Several key features (as part of the larger S3 framework) were developed in order to support the orchestration of the live smart classroom activity – below I describe each and its implementation within the PLACE.web culminating activity:

Ambient Feedback: A large Smartboard screen at the front of the room (i.e., not one of the 4 Hollywood video stations) provided a persistent, passive representation of the state of individual, small group, and whole class progression through each step of the smart classroom activity. This display showed and dynamically updated all student location assignments within the room, and tracked the timing of each activity using three color codes (a large color band around the whole board that reflected how much time was remaining): “green” (plenty of time remaining), “yellow” (try to finish up soon), and “red” (you should be finished now).
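
As a tiny illustration, the timing-band logic might look something like the sketch below; the actual thresholds used in the S3 display aren't given here, so the cut-offs are assumptions.

```python
# A minimal sketch of the ambient timing band described above. The cut-off
# fractions are illustrative assumptions, not the actual S3 values.
def time_band(elapsed_minutes: float, allotted_minutes: float) -> str:
    """Map progress through an activity's time budget to the ambient colour band."""
    fraction_used = elapsed_minutes / allotted_minutes
    if fraction_used < 0.75:
        return "green"   # plenty of time remaining
    elif fraction_used < 1.0:
        return "yellow"  # try to finish up soon
    return "red"         # you should be finished now


print(time_band(5, 20))   # green
print(time_band(18, 20))  # yellow
print(time_band(25, 20))  # red
```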

Scaffolded Inquiry Tools and Materials: In order for students to effectively engage in the activity and with peers, there is a need for specific scaffolding tools and interfaces through which students interact, build consensus, and generate ideas as a knowledge community (i.e., personal tablets, interactive whiteboards). Two main tools were provided to students, depending on their place in the script: individual tablets connected to their S3 user accounts; and four large-format interactive displays that situated the context (i.e., the Hollywood video), provided location-specific aggregates of student work, and served as the primary interface for collaborative negotiation.

Real-Time Data Mining and Intelligent Agency: To orchestrate the complex flow of materials and students within the room, a set of intelligent agents was developed. The agents, programmed as active software routines, responded to emergent patterns in the data, making orchestration decisions “on-the-fly” and providing teachers and students with timely information. Three agents in particular were developed: (1) the Sorting Agent sorted students into groups and assigned room locations, based on emergent patterns during enactment; (2) the Consensus Agent monitored groups, requiring consensus to be achieved among members before progression to the next step; and (3) the Bucket Agent coordinated the distribution of materials to ensure all members of a group received an equal but unique set of materials (i.e., problems and equations in Steps 2 & 3).
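
To give a sense of what the Bucket Agent's job amounts to, here is a minimal reconstruction (my own sketch, not the actual S3 code) of dealing a shared pool of problems out so that each group member gets an equal but unique share.

```python
# A simplified sketch of the Bucket Agent idea: hand each member of a group an
# equal-sized, non-overlapping slice of a shared pool of materials (problems or
# equations). This is an illustrative reconstruction, not the S3 implementation.
def distribute_materials(materials, group_members):
    """Deal materials round-robin so everyone gets a unique, (near-)equal share."""
    buckets = {member: [] for member in group_members}
    for i, item in enumerate(materials):
        member = group_members[i % len(group_members)]
        buckets[member].append(item)
    return buckets


problems = ["P1", "P2", "P3", "P4", "P5", "P6"]
group = ["station-1A", "station-1B", "station-1C"]
print(distribute_materials(problems, group))
# {'station-1A': ['P1', 'P4'], 'station-1B': ['P2', 'P5'], 'station-1C': ['P3', 'P6']}
```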

Locational and Physical Dependencies: Specific inquiry objects and materials could be mapped to the physical space itself (i.e., where different locations could have context specific materials, simulations, or interactions), allowing for unique but interconnected interactions within the smart classroom. Students “logged into” one of four spaces in our room (one for each video), and their actions, such as “flinging” a tag, appeared on that location’s collaborative display. Students’ location within the room also influenced the materials that were sent to their tablet. In Step 2, students were provided with physics problems based on the tags that had been assigned to their video wall, and in Step 3 they were provided with equations based on their consensus about problems in Step 2.

Teacher Orchestration: The teacher plays a vital role in the enactment of such a complex curriculum. Thus, it is critical to provide him or her with timely information and tools with which to understand the state of the class and properly control the progression of the script. We provided the teacher with an “orchestration tablet” that updated him in real-time on individual groups’ progress within each activity. Using this tablet, the teacher also controlled when students were re-sorted – i.e., when the script moved on to the next step. During Step 3, the teacher was alerted on his tablet whenever the students in a group had submitted their work (variables and assumptions).

Multiple Screens For Multiple Uses and the Growth of HCI for Education


Multiple Screens by Siddartha Thota

This post is about an article I recently read (really, you can skip directly to the Google slideshow if you want) which makes me feel good about a lot of the things we’ve been doing over the past four years in understanding what it means to connect students in a smart classroom across a wide variety of devices, displays, and even locations and contexts.

Our earlier work showed us that, when collaborating, students tended to do so more effectively using larger-format displays (as huddling around small screens tended to push some students to the fringe and prevented them from taking part in the discourse). We also found that large displays were great for the teacher in seeing the work of the class at a glance (versus on small screens) and in discussing it with larger groups (http://goo.gl/Mnvd5).

I agree that smaller devices, increasingly tablets given their portability (can you believe that the iPad only debuted in 2010!), are better suited for individual contributions – or as a “starting off point”.

This highlights a big unofficial theme of this year’s International Conference of the Learning Sciences (http://www.isls.org/icls2012/): the emergence of HCI for learning – the idea that we need to start seriously thinking about and researching how these new technologies (and their respective affordances) can best help us aid students in achieving new forms of learning and collaboration that were previously unachievable.

It’s an exciting time to be an educational researcher – so long as we continue to ask the tough questions about how these technologies (and how students interact with them) specifically aid learning, and don’t just implement them for technology’s sake, we’ll be just fine 😉

*Note: this is a cross-published article with Google+, which has to happen this way until they get their public API in order.

MathRepo


There have been ongoing discussions amongst educational researchers concerning how teachers can support students in making connections between mathematics topics (The National Council of Teachers of Mathematics, 2000). Conventional instruction, with its sequential presentation of materials in textbooks and rote completion of problem sets, often fails to help students develop a deep understanding. This is particularly true in regard to the interconnections amongst mathematical concepts, which often come across to students as completely separate topics (Hiebert, 1984).

In response, working closely with a school math teacher, we co-designed (Penuel et al., 2007) a curriculum to engage several small groups of students working in parallel as they “tagged” a common set of math problems. In so doing, a collaborative visualization emerged as the curriculum synthesized the combined tags from all groups. A set of thirty problems developed by the teacher belonged to one or more of four category groups: Algebra & Polynomials, Functions & Relations, Trigonometry, and Graphing Functions. The basic goal of this activity was to help students understand the relationships between these four aspects of mathematics by having them visualize the association of math problems with multiple categories.

Within our S3 classroom, students were automatically grouped and placed at one of the room’s visualization displays, and using laptops were asked to “tag” (label) a total of 30 questions. Each group’s display showed a graphical visualization of their collective responses. Students were then asked to collaboratively solve their tagged questions and to vote and comment on the validity of other groups’ tags. A central display showed a larger real-time aggregate of all groups’ tags as a collective association of links. As students voted on these tags, agreement resulted in thicker link lines, while disagreement resulted in thinner ones.
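
As a small illustration of that idea (not the actual MathRepo code), link thickness could be derived from the vote balance on each tag; the scaling constants below are purely illustrative assumptions.

```python
# A sketch of how link thickness in the aggregate visualization could be
# derived from votes: tags with more agreement draw thicker lines. The base,
# step, and minimum values are illustrative assumptions.
def link_thickness(agree_votes: int, disagree_votes: int,
                   base: float = 1.0, step: float = 0.5,
                   minimum: float = 0.5) -> float:
    """Return a line width for a problem-category link based on its votes."""
    net = agree_votes - disagree_votes
    return max(minimum, base + step * net)


# A tag most students agreed with draws a thick link; a contested one stays thin.
print(link_thickness(agree_votes=8, disagree_votes=1))  # 4.5
print(link_thickness(agree_votes=2, disagree_votes=5))  # 0.5
```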

Preliminary findings, while representing only a small number of participants, showed an upward trend of increasing accuracy and structuredness for the experimental condition. The improved accuracy from the pre-test to the curriculum activity and post-test suggests the importance of how we ask students to make connections to problems, with greater accuracy derived from a collaborative design which shares responsibility. The structuredness, which measured students’ recognition of the connections, shows increasing willingness to characterize math problems from different perspectives.

Overall, students found the visualizations useful in showing different mathematical themes from which a problem could be approached. One student indicated that the visualization was helpful when he could not solve a problem. Students also stated that, over time and with more contributors, the system would become increasingly valuable for studying purposes.

Students also commented that they became more cognizant of the connections amongst mathematics ideas and themes. It is noteworthy that students gained awareness that one could discuss properties of math problems and their relevant themes rather than simply answer them.

Watch the video below: