So I’ve been playing Ingress for about a week now (after bailing on it after only a few hours the first time), and it’s pretty cool that it has essentially spawned a “sub-reality” that is very actively happening unbeknownst to the general population, who are going about their daily lives around the city. This is really the most salient version of “augmented reality” that I’ve seen. I’ve tried out many other failed attempts at Augmented Reality (AR) which use mobile cameras to overlay information on the real landscape. Generally you spend most of your time spinning around trying to get your camera into exactly the right position to see the information that someone has tagged to a physical space (like a building), and it almost never works right. Instead, Ingress has bypassed the need to orient a camera on specific objects, opting instead for “mostly” accurate GPS positioning that puts you in the vicinity of real-world objects. You can interact with these objects (which have digitally imprinted information on them) on your smartphone. In the case of Ingress this involves two teams battling for “global supremacy” by taking control of portals that show up in the app on a modified version of Google Maps. Users interact with these portals by tapping on them and then choosing actions such as powering up their own portals (to withstand enemy attacks) or attacking their opponents’ (to try to take them over for your side).
This is just a smarter way of doing it… and it works surprisingly well. Google has succeeded in making the game inherently and deeply social, and that’s what makes it so interesting. You can play alone, but your experience will be fairly stunted and so will your progress – you need to work as a team to complete objectives and to help each other along the way. What is perhaps even more interesting is suddenly being aware of those around you who are playing – people you only notice once you’re part of the game. Being in an area and having your portal attacked makes you look around for the other people with their heads in their Android phones, in an effort to figure out who attacked you. I had one encounter where I was trying to figure out who was attacking me, and I saw another guy look up from his phone, smile at me, nod, and walk about 20 meters farther away, still tapping away at his phone. We were both sharing a rather unique, highly interactive, and deeply social moment, and we were the only two people in a large crowd who knew it – now that’s great AR.
It makes me think more about how these kinds of applications might fit into an educational setting, and what kind of information we can or should be overlaying within a physical space to augment student learning practices. Fine-grained tracking of students within a space is very tricky, and learning designs that aim to use such positioning information therefore often struggle to provide meaningful interactions. For many of these projects, designers must balance the desire for the system to automatically detect students and react to their position against having students intentionally log into a space to “announce” their presence. With the latter, you reduce the risk of incorrectly positioning the student, but you also reduce the spontaneity of simply walking into a location. It also requires carefully placed stations for logging in, or specific interfaces on the student’s device (which carry their own risk of students logging into the wrong space).
Some projects, however, like Ambient Wood, have done very interesting work in automatically leveraging students’ physical location for unique learning opportunities. In Ambient Wood, students conduct investigations in an outdoor wooded area, and their mobile devices augment those investigations by providing context-specific information based on where they are within the woodlands. Ambient Wood actually blends automatic detection in some areas with intentional, student-driven login in others. What Ambient Wood doesn’t do, and something I’ve tried in my own work in projects like neoPLACE and Roadshow (admittedly only with intentional, student-centered authentication), is develop ad-hoc social networks based on location – that is, connecting users in real time to those who occupy a physically and semantically similar space. Through these means we have the opportunity to have students collaborate and build meaning together, and potentially to connect this meaning-making to others dynamically and in real time.
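To make the ad-hoc grouping idea concrete, here is a minimal sketch of what “physically and semantically similar” could mean in code: first bucket users by a shared inquiry topic (the semantic space), then cluster each bucket by physical proximity. All names, thresholds, and data shapes here are hypothetical illustrations – this is not the actual neoPLACE or Roadshow implementation.

```python
import math
from collections import defaultdict

def form_adhoc_groups(users, max_distance=25.0):
    """Group users who are both physically close and semantically aligned.

    Hypothetical sketch: `users` is a list of dicts with 'name',
    'pos' (x, y in meters), and 'topic' (their current inquiry focus).
    Returns a list of (topic, [names]) groups.
    """
    # Semantic pass: bucket users by shared topic.
    by_topic = defaultdict(list)
    for u in users:
        by_topic[u["topic"]].append(u)

    groups = []
    for topic, members in by_topic.items():
        # Physical pass: cluster each bucket with a simple union-find
        # over pairwise distances.
        parent = list(range(len(members)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                if math.dist(members[i]["pos"], members[j]["pos"]) <= max_distance:
                    parent[find(i)] = find(j)  # merge nearby users

        clusters = defaultdict(list)
        for i, u in enumerate(members):
            clusters[find(i)].append(u["name"])
        groups.extend((topic, sorted(names)) for names in clusters.values())

    return groups

# Example: Ana and Ben share a topic and stand ~11 m apart; Cy shares
# the topic but is 200 m away, so Cy lands in a separate group.
users = [
    {"name": "Ana", "pos": (0, 0), "topic": "erosion"},
    {"name": "Ben", "pos": (10, 5), "topic": "erosion"},
    {"name": "Cy", "pos": (200, 0), "topic": "erosion"},
]
print(form_adhoc_groups(users))
```

A real system would of course need to cope with noisy GPS fixes and with topics that are related rather than identical, but the two-pass structure – semantic filter first, spatial clustering second – captures the core of the idea.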
The semantic aspect I mention above is something that Ingress does really well (with each team having its own representation of the “game state”), and one I think has real potential for education. Stephen Graham called these the “invisible spaces” that sit on top of and between the fabric of traditional geographic space – a varied skein of networks weaving through our varied physical spaces. To me this holds promise for designing learner- and context-specific representations of the learning environment, customized to the individual goals of the learner within that space, and for connecting the learner to the information and people that are relevant to them (and, perhaps more importantly, filtering out what is not, or is simply “noise”).
Imagine multiple students investigating driving inquiry questions within a physical space, receiving timely and context-specific tasks on their personal devices based on where they are and who else is sharing their space – working with their peers to advance their own understanding and that of the larger knowledge community. As they move through the space, an intelligent software agent tracks and understands their evolving learning pathway, connects them with a new group of students, and sends new context-relevant information and specialized overlays about their surroundings to their device. An augmented reality focused on learning, where both space and context are deeply interwoven into students’ interactions – not just great AR, but great AR for learning.