The Editor’s Spotlight, Part 2 — TOCHI Issue 23:4 — Adding Physical Objects to an Interactive Game Improves Learning and Enjoyment

IN THE SPOTLIGHT, Part 2:

Adding Physical Objects to an Interactive Game Improves Learning and Enjoyment

This delightful contribution explores EarthShake, a mixed-reality game that helps children learn some basic principles of physics by bridging the physical and virtual worlds via depth-sensing cameras.

The work includes not only an interactive prototype that is put to the test by 4–8-year-old children (a particularly demanding user demographic if ever there was one!), but also a careful experimental design that teases out many insights illustrating how and why the use of three-dimensional (3D) physical objects in mixed-reality environments can produce better learning and enjoyment than flat-screen 2D interaction.

Computer technologies can be especially empowering when brought to bear in the context of the physical environment. This has long been suspected as a benefit of so-called “tangible interfaces”—that is, interfaces employing physical stand-ins or props as proxies for digital objects—yet precisely how, or why, or under what circumstances tangibles might bring benefits has remained murky, particularly when combined with mixed-reality environments, i.e. sensing systems that detect the 3D world and incorporate it directly into the interactive experience. One can hypothesize many possible reasons that tangibles could be beneficial to learners in mixed-reality environments:

Is it the three-dimensional nature of the objects?

Do the potential benefits derive from making interaction more enjoyable?

Or is it the embedding in reality, and the sensory cues that the real world affords, that make the critical difference (as compared with watching videos of the same activities, for example)?

In addressing these questions, the carefully controlled studies isolate various possible effects and confounds, and thereby convincingly demonstrate many aspects of how these mixed-reality environments benefit learners. The results show that learning benefits accrue through embodied cognition, through improved mental visualization (as evidenced by children’s hand gestures, for example), and through the mere observation of physical phenomena in the full richness of sensory cues available in the real world, cues that are inherently absent when watching a video recording of the same activity on a flat, two-dimensional screen.

 

Nesra Yannier, Scott Hudson, Eliane Wiese, and Ken Koedinger. 2016. Adding Physical Objects to an Interactive Game Improves Learning and Enjoyment. ACM Trans. Comput.-Hum. Interact. 23, 4, Article 26 (August 2016), 33 pages.

DOI: http://dx.doi.org/10.1145/2934668

 

TOCHI Article Alert: Two Papers on Brain-Computer Interaction in Issue 23:1

There’s lots to please the eye, ear, and mind in TOCHI Issue 23:1.

And I mean that not only figuratively—in terms of nourishing the intellect—but quite literally, in terms of those precious few cubic centimeters of private terrain residing inside our own skulls.

Because brain-computer interaction (BCI) forms a major theme of Issue 23:1. The possibility of sensing aspects of human perception, cognition, and physiological states has long fascinated me—indeed, the very term “brain-computer interaction” resonates with the strongest memes that science fiction visionaries can dish up—yet this topic confronts us with a burgeoning scientific literature.

* * *

The first of these articles presents an empirical study of phasic brain wave changes as a direct indicator of programmer expertise.

It makes a strong case that cognitive load, as it relates to expertise, can be observed directly through EEG-based measures (rather than inferred from subjective assessments) and quantified accurately when applied to program comprehension tasks.

By deepening our ability to understand and to quantify expertise, the paper makes significant inroads on this challenging problem.

(http://dx.doi.org/10.1145/2829945).

* * *

The second BCI article explores ways to increase user motivation through tangible manipulation of objects and implicit physiological interaction, in the context of sound generation and control.

The work takes an original tack on the topic by combining explicit gestural interaction, via the tangible aspects, with implicit sensing of biosignals, thus forging an intriguing hybrid of multiple modalities.

In my view such combinations may very well be a hallmark of future, more enlightened approaches to interaction design—as opposed to slapping a touchscreen with “natural” gestures on any sorry old device we decide to churn out, and calling it a day.

(http://dx.doi.org/10.1145/2838732).