The Editor’s Spotlight, Part 2 — TOCHI Issue 23:4 — Adding Physical Objects to an Interactive Game Improves Learning and Enjoyment

IN THE SPOTLIGHT, Part 2:

Adding Physical Objects to an Interactive Game Improves Learning and Enjoyment

This delightful contribution explores EarthShake, a mixed-reality game that helps children learn some basic principles of physics by bridging the physical and virtual worlds via depth-sensing cameras.

The work includes not only an interactive prototype that is put to the test by 4- to 8-year-old children (a particularly demanding user demographic if ever there was one!), but also a careful experimental design that teases out many insights illustrating how and why the use of three-dimensional (3D) physical objects in mixed-reality environments can produce better learning and enjoyment than flat-screen 2D interaction.

Computer technologies can be especially empowering when brought to bear in the context of the physical environment. This has long been suspected as a benefit of so-called “tangible interfaces”—that is, interfaces employing physical stand-ins or props as proxies for digital objects—yet precisely how, or why, or under what circumstances tangibles might bring benefits has remained murky, particularly when combined with mixed-reality environments, i.e. sensing systems that detect the 3D world and incorporate it directly into the interactive experience. One can hypothesize many possible reasons that tangibles could be beneficial to learners in mixed-reality environments:

Is it the three-dimensional nature of the objects?

Do the potential benefits derive from making interaction more enjoyable?

Or perhaps it is the embedding in reality, and the sensory cues that the real world affords, that forms the critical difference—as compared to watching videos of the same activities, for example.

In addressing these questions, the carefully controlled studies isolate various possible effects and confounds, and thereby convincingly demonstrate many aspects of exactly how these mixed-reality environments benefit learners. The results demonstrate that learning benefits accrue through embodied cognition, improved mental visualization (as evidenced by children’s hand gestures, for example), and via the mere observation of physical phenomena in the full richness of sensory cues available in the real world—cues that are inherently absent when watching a video recording of the same activity on a flat, two-dimensional screen.

 

Nesra Yannier, Scott Hudson, Eliane Wiese, and Ken Koedinger. 2016. Adding Physical Objects to an Interactive Game Improves Learning and Enjoyment. ACM Trans. Comput.-Hum. Interact. 23, 4, Article 26 (August 2016), 33 pages.

DOI= http://dx.doi.org/10.1145/2934668

 

The Editor’s Spotlight, Part 1 — TOCHI Issue 23:4 — Rituals of Letting Go: An Embodiment Perspective on Disposal Practices Informed by Grief Therapy

TOCHI Issue 23:4 is now available on the ACM Digital Library.

This month’s TOCHI has an unusually rich and far-ranging set of contributions, some of which forced me to confront deep personal truths (more about that shortly, in this post).

And while there were several articles that piqued my curiosity, two in particular caught my editorial eye. The first of these is featured below; the second will appear shortly in a follow-up post.

 

IN THE SPOTLIGHT, Part 1:

Rituals of Letting Go: An Embodiment Perspective on Disposal Practices Informed by Grief Therapy

This article offers a great example of the rich insights that can be unpacked by a thorough qualitative analysis of an HCI design context—in this case, the challenges of loss and grief that we all must eventually confront, and which therefore may be the essence of the human condition itself.

This unique problem takes on some strange twists in this modern era, when many of the “possessions” representative of our loved ones who have passed on assume an online and digital, rather than physical, form. How can one confront such an overwhelming task—going through thousands of digital photos, or blog posts, or a Facebook timeline that may not even be under one’s direct control—in such circumstances?

Furthermore, one might naturally assume that one always wants to retain such digital possessions, whereas the reality is much more complicated. Indeed, to move on, what many people need is in fact a therapeutic way of letting go—an end-goal spectacularly ill-suited to the inflexible, binary, and non-embodied methods that computers and web services currently offer us for deleting digital objects (or massive collections thereof).

And I have to admit, this article really hit home because it represents a very deep, dark hole that I have fallen into myself: Kerrie, my first spouse, died at the tender age of 29, just as I was embarking on my career at Microsoft Research. As I tried to put my life back together, one problem I had to confront was what to do with my wife’s voice-mail greeting, which may have been the only recording I had of Kerrie’s voice. While I will leave the solution that I came up with to the reader’s imagination, I can assure you that hitting some Delete button is about as far as you can get from a satisfactory solution to such a dilemma, and indeed there are no easy answers.

Because people flattened by such events (which the authors astutely expand to encompass related circumstances such as stillbirth, separation and divorce, as well as death itself) are in no condition to participate in some focus group or contextual inquiry, the article takes the clever indirection of working with professional grief therapists, all of whom helped clients to prepare “rituals of letting go” so as to move on with their lives—a new life that by necessity could no longer include their loved one.

And indeed, the problems faced and the type of rituals enacted depend strongly on such circumstances, leading to a vocabulary of action and intent that the authors characterize in a rich design space. The work also suggests many new design directions and possibilities for HCI and sustainable, full life-cycle design to help people divest themselves of emotionally charged digital possessions.

To riff on the novel direction of release-centric interactions suggested by the article: imagine, for example, a digital-photo locket explicitly designed for letting go, such that each time you choose to open it, it displays a photo (or plays a voice message…) of a loved one for the very last time; when you decide to close it (an embodied act), the echoes of the emotionally charged digital artifact drift away on a chill wind and are gone forever, allowing the survivor—if only symbolically, and in a small way—to move on.

The article is rich with provocative examples, situations, and design questions of this sort, and reading it may very well forever change how you think about the design of photo repositories, voice messages, texts, and other such digital possessions.

 

Corina Sas, Steve Whittaker, and John Zimmerman. 2016. Rituals of Letting Go: An Embodiment Perspective on Disposal Practices Informed by Grief Therapy. ACM Trans. Comput.-Hum. Interact. 23, 4, Article 21 (August 2016), 37 pages.

DOI= http://dx.doi.org/10.1145/2926714

 

The Editor’s Spotlight, Part 2 — TOCHI Issue 23:3 — Mobile Phones as Amplifiers of Social Inequality among Rural Kenyan Women

IN THE SPOTLIGHT, Part 2:

Mobile Phones as Amplifiers of Social Inequality among Rural Kenyan Women

This short but extremely incisive article offers a remarkable shot across the bow of (at times overly) optimistic technologists, such as myself, who typically operate under the worldview—which, if we are being charitable, amounts to an unquestioned assumption, or, if less so, to nothing but an unsupportable myth—that the technologies we work so hard to create are always positive forces for change in the world.

Yet in this case, as the authors of this article so meticulously document, the mobile phone itself can in fact serve as a massive amplifier of injustice, and impoverishment, and other social inequalities that are prevalent in many (and especially in the more rural) corners of the globe.

Aspects of this perspective will perhaps come as no surprise to those working in the Information and Communication Technologies for Development (ICTD) sub-discipline of our field. Such insights are presaged by some of Toyama’s work, for example; he pointedly noted that technology “tends to amplify existing social inequalities”—a law of amplification driven by the unequal motivations and capabilities (which form the focus of this particular article) of rural Kenyan women relative to the powerful corporations that control the mobile networks in the country and design the services (often in ways that glaringly elevate their own interests above those of their impoverished customers).

And it is in the unpacking, and illustration, and spelling-out of the insidious technological challenge of addressing these differential motivations and capabilities that this TOCHI paper shines.

The authors report in considerable depth on a series of field studies which were undertaken in rural Kenya—challenging studies which, by their very nature, are not ‘controlled’ or ‘repeatable’—yet are rich with ethnographic detail and design insights nonetheless.

This, in my view, is a must-read TOCHI article that can, and should, give us all pause as to the advisability of some (or perhaps many, or even most) of the interventions that our technological fancies would lead us to undertake.

What exactly to do about this is a very difficult problem, but without first surfacing such challenges and making them apparent, we cannot even take the first steps towards designing a better world for all persons—and particularly for the under-represented and the marginalized among us, as opposed to the highly profitable (and at times seemingly unscrupulous) corporations that would so readily take advantage of people through carrier lock-in and other such questionable practices.

 

Susan Wyche, Nightingale Simiyu, and Martha Othieno. 2016. Mobile Phones as Amplifiers of Social Inequality among Rural Kenyan Women. ACM Trans. Comput.-Hum. Interact. 23, 3, Article 14 (June 2016), 19 pages.

DOI= http://dx.doi.org/10.1145/2911982

 

The Editor’s Spotlight, Part 1 — TOCHI Issue 23:3 — Predicting Team Performance from Thin Slices of Conflict

TOCHI Issue 23:3 is now fully available on the ACM Digital Library.

And in my editorial remarks for this month, I felt compelled, once again, to Spotlight two key contributions in this latest and greatest issue, the first of which is as follows:

 

IN THE SPOTLIGHT, Part 1:

Predicting Team Performance from Thin Slices of Conflict

The balance of positive to negative affect during episodes of marital conflict has been found to be highly indicative—even years in advance—of functional marriages (as opposed to dysfunctional ones). This is a well-established result.

Indeed, the finding has been extended to dyads engaged in negotiation, or in pair programming, for example.

But it has remained unclear if the significance of affect applies to groups more generally.

And even under the reasonable presumption that it probably does, the tricky question has remained of how to study it, and how to elicit ‘thin slices’ of conflict (e.g., a frank, 15-minute discussion of difficulties plaguing a team project) in a practical manner that is amenable to further analysis, scientific and otherwise.

Thanks to the pioneering efforts of this TOCHI article—including a novel methodology for eliciting conflict from small groups—an overabundance of negative affect (contempt, criticism, defensiveness, etc.) relative to positive expressions (interest, humor, validation, and so forth) has been convincingly demonstrated, for the first time, to be highly predictive of the long-term success of teams (of up to four individuals) engaged in design activities.

While the slices of conflict are thin, the analysis (and the insights thus derived) runs deep, and indeed proved highly predictive of the teams’ success up to six months in advance.

The author presents two in-depth studies, the first of which had the participants self-assess their affect by watching a recording of their own conflict session and setting a dial to indicate their real-time feelings (a continuous value from very negative, to neutral, to very positive).

The second study followed up the first with an objective measure of affect, derived from extremely thorough video analysis of each individual’s affect (including detailed capture of all utterances, and facial expressions, and body language). While the second study involves a smaller sample, taken together with the first the general pattern of findings is convincing.

The potential applications of this work, its methodology, and its findings are many.

To cite just one example, the author notes that, broadly speaking, the design of groupware and CSCW applications has tended to focus on the support of task-oriented processes—as opposed to the socio-emotional processes of the team.

This may be a critical mistake.

While some baseline of support for the group’s actual tasks and work is (of course) necessary (as articulated by the coordination theory of Malone & Crowston, for example), the findings of this new TOCHI study argue strongly that it is the coordination of affect, as opposed to that of the tasks, that is the key defining characteristic of success in team endeavors.

 

Malte Jung. 2016. Coupling Interactions and Performance: Predicting Team Performance from Thin Slices of Conflict. ACM Trans. Comput.-Hum. Interact. 23, 3, Article 18 (June 2016), 36 pages.

DOI= http://dx.doi.org/10.1145/2753767

The Editor’s Spotlight, Part 2 — TOCHI Issue 23:2 — Accessible Play in Everyday Spaces: Mixed Reality Gaming for Adult Powered Chair Users

Without further ado, here is the second of the two articles in Issue 23:2 of TOCHI that delve into the issues and challenges raised by mixed-reality spaces — again, from a unique perspective.

 

IN THE SPOTLIGHT, Part 2:

Accessible Play in Everyday Spaces: Mixed Reality Gaming for Adult Powered Chair Users

One of the things that’s all too easy to do in the excitement about location sensing and ubiquitous computing is to take the mobility of the user for granted.

But for many individuals, simply getting around can be a huge challenge, and the continual status of diverse end-users as an afterthought in design is an unpleasant truth that requires all of us would-be interaction designers to take a very hard look in the mirror indeed.

Something most people don’t know about me is that my first wife died at the age of 29. For about the last six months of her life, she was largely confined to a wheelchair and needed oxygen everywhere she went. Yet she was vivacious and extremely bright, and had just finished her master’s degree. While I was on travel she went on a job interview. She arrived only to discover that from the lobby, a grand staircase led to her interviews on the second floor. The building was in an office park with no elevators.

I still remember vividly how she described that staircase, looming before her like an immense cliff.

Thus I was very happy to see this article run through the gauntlet of the rigorous TOCHI peer-review process and come out the other end as a wonderful contribution that is the first to address the social entertainment needs of adult powered chair users in a social and mobile game setting, namely a mixed reality implementation of capture-the-flag.

The article contains a number of perspectives and insights that really make one stop and take notice. For example, a strong theme that emerged was the desire not only for accessible entertainment, but also for inclusive play with non-powered chair users, such as friends and family. The power of the activity to arouse the curiosity of bystanders and make them want to join in was also noted.

The purposeful moving-about engendered by the game was very freeing for the participants, but what perhaps most struck me in the entire article was a comment from the mother of one participant. While thrilled to see her daughter enjoying herself and engaging with others on this occasion, the mother reported that otherwise her daughter “mostly stays at home by herself.”

Perhaps this article can be the first small step towards righting this injustice.

The article concludes with an informative set of theoretically- and empirically-informed guidelines for includifying (or making inclusive) games originally designed for people without disabilities, through the use of technological augmentations such as mixed reality. And although there is obviously still a very long way to go in these directions, it was heartening to see some concrete progress in the form of this TOCHI contribution.

 

Katie Seaborn, Jamal Edey, Gregory Dolinar, Margot Whitfield, Paula Gardner, Carmen Branje, and Deborah Fels. 2016. Accessible Play in Everyday Spaces: Mixed Reality Gaming for Adult Powered Chair Users. ACM Trans. Comput.-Hum. Interact. 23, 2, Article 6 (April 2016), 28 pages.
DOI= http://dx.doi.org/10.1145/2893182

 

The Editor’s Spotlight, Part 1 — TOCHI Issue 23:2 — Lions, Impala, and Bigraphs—A Unique Perspective on Modeling Physical/Virtual Spaces

It was difficult to decide which article to spotlight this month, so I chose to feature two articles most prominently in my editorial for Issue 23:2, the first of which forms the subject matter of this post.

But as it so happens, both of the articles I spotlighted this month consider the issues and challenges raised by mixed-reality spaces from unique perspectives.

And both take an inter-disciplinary tack on the difficult problems raised by this intriguing class of ubiquitous computing systems.

Nonetheless, as GPS and other sensors indeed live up to the name of this sub-field and become truly ubiquitous, the work speaks to experiences (and usability problems) that we have all likely encountered, in one form or another, as we fumble about an unfamiliar city with our smartphones in hand.

Or that we will all likely encounter in the future, as our physical abilities inevitably change or diminish with age.

 

IN THE SPOTLIGHT, Part 1:

Lions, Impala, and Bigraphs—A Unique Perspective on Modeling Physical/Virtual Spaces

They say the three most important things in real estate are location, location, and location—and so it seems with ubiquitous computing systems, and contextual interactions in general.

Yet what is less often recognized is that ‘location’ is, in fact, a social construct every bit as much as it is a physical property of the world—one that, furthermore, can only be sensed through particular technologies that have their own quirks. As the authors of this article make apparent, the result is a many-faceted terrain that offers shifting perspectives as one considers it from the point of view of the human, the technology, the computational representation, and the physical landscape itself.

This article focuses on a particular mixed-reality game in which schoolchildren use handheld computers to join together into small prides of lions and launch attacks on (purely virtual) impala that must be discovered by exploring the physical environment. Although the game is extremely simple in conception, the difficulties encountered in realizing it highlight the devious problems and complexities that arise in many classes of ubiquitous computing systems.

For example, due to noise and the limits of precision, a sensing technology (such as GPS) may interpret a small huddle of schoolchildren as occupying distinct physical areas, even though from the human perspective they are all clearly co-located, and engaged in a common activity, as dictated by the social grammar of proxemics and f-formations (to borrow two constructs from sociology that characterize how people tend to share physical space).

Such problems are well-known in sensing systems, and a great deal of debate has gone back and forth about how to anticipate and design interactive experiences around these foibles of the technologies at our disposal.

But this paper takes the unique step of recognizing that these perspectives can be codified through the mathematical formalism of bigraphs. A series of simple production rules—which furthermore afford an intuitive diagrammatic representation—can then model the comings and goings of people, devices, and computational representations on the physical landscape. The result is a set of rules that allows one to model and formally reason about subtle mismatches between the human and technological perspectives.

While the article does not claim to offer a formal grammar of proxemics, the work certainly hints that such a direction may be possible. With the “Internet of Things” colliding at an ever-accelerating pace with the long-established “Social Expectations of Humans,” the tools and insights offered by this ambitious article may comprise a critical lens through which to reason about (if not reconcile) the critical design mismatches that inevitably arise between them.

 

Steve Benford, Muffy Calder, Tom Rodden, and Michele Sevegnani. 2016. On Lions, Impala, and Bigraphs: Modelling Interactions in Physical/Virtual Spaces. ACM Trans. Comput.-Hum. Interact. 23, 2, Article 3 (April 2016), 57 pages.
DOI= http://dx.doi.org/10.1145/2882784

 

The Editor’s Spotlight: Navigating Giga-pixel Images in Digital Pathology

For the first article to highlight in the freshly conceived Editor’s Spotlight, I selected, from TOCHI Issue 23:1, a piece of work that strongly reminded me of the context of some of my own graduate research, which took place embedded in a neurosurgery department. In my case, our research team (consisting of both physicians and computer scientists) sought to improve the care of patients who were often referred to the university hospital with debilitating neurological conditions and extremely grave diagnoses.

When really strong human-computer interaction research collides with real-world problems like this, in my experience, compelling clinical impact and rigorous research results are always hard-won, but in the end they are well worth the above-and-beyond efforts required to make such interdisciplinary collaborations fly.

And the following TOCHI Editor’s Spotlight paper, in my opinion, is an outstanding example of such a contribution.

IN THE SPOTLIGHT:

Navigating Giga-pixel Images in Digital Pathology

The diagnosis of cancer is serious business, yet in routine clinical practice pathologists still work at microscopes, with physical slides, because digital pathology runs up against many barriers—not the least of which are the navigational challenges raised by panning and zooming through huge (and I mean huge) image datasets on the order of multiple gigapixels. And that’s just for a single slide.

Few illustrations grace the article, but those that do—

They stop the reader cold.

[Figure 3: Extract from a GI biopsy, showing malignant tissue at 400x magnification.]

The ruddy and well-formed cells of healthy tissue from a GI biopsy slowly give way to an ill-defined frontier of pathology, an ever-expanding redoubt for the malignant tissue lurking deep within. One cannot help but be struck by the subtext that these images represent the lives of patients who face a dire health crisis.

Only by finding, comparing, and contrasting this tissue with other cross-sections and slides—scanned at 400x magnification and a startling 100,000 dots per inch—can the pathologist arrive at an accurate diagnosis of the type and extent of the malignancy.

This article stands out because it puts into practice—and challenges—accepted design principles for the navigation of such gigapixel images, against the backdrop of real work by medical experts.

These are not laboratory studies that strive for some artificial measure of “ecological validity”—no, here the analyses take place in the context of the real work of pathologists (using archival cases), and yet the experimental evaluations are still rigorous and insightful. The validity is beyond question, and the stakes are clearly very high.

While the article focuses on digital pathology, the insights and perspectives it raises (not to mention the interesting image navigation and comparison tasks motivated by clinical needs) should inform, direct, and inspire many other efforts to improve interfaces for navigation through large visualizations and scientific data-sets.

 


Roy Ruddle, Rhys Thomas, Rebecca Randell, Phil Quirke, and Darren Treanor. 2016. The Design and Evaluation of Interfaces for Navigating Gigapixel Images in Digital Pathology. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 5 (February 2016), 29 pages.

DOI= http://dx.doi.org/10.1145/2834117

 

Introducing “The Editor’s Spotlight”

In a new feature, as Editor-in-Chief I will offer up some thoughts on select papers as they appear in the pages of TOCHI (or, to be more precise, as they grace the ACM Digital Library, given our desire to turn around accepted manuscripts for the research community as quickly as possible—not to mention the electronic-first nature of publishing these days). In addition, I will always strive to give an overview of all the content in each issue, to the extent possible.

But before I unshutter the brilliant beacon for the first time, with Issue 23:1 as its deserving focus, let me briefly set the context:

The purpose of these spotlight editorials is to help frame the contributions of the research that we publish in the wider context of the field.

As well as to direct attention to articles that may be of especial interest.

That, of course, serves not only our readers but also our authors—all of them—because by implication, bringing attention to our great content raises the profile of the entire journal.

By highlighting certain articles my intent is not to suggest that others are not worthy of your attention. Far from it. Every article we publish has received exquisite attention from our Editorial Board, so the TOCHI brand in and of itself tells you that the content is always absolutely sterling.

Hence these are not critical reviews or critiques. These articles have already run the gauntlet of rigorous peer review, and so my purpose here is to help guide our readers as to the nature and importance of the contributions we publish.

As such, my hope is that both newcomers to the field of human-computer interaction (who may be missing some of the implicit framing and motivation that underlies many papers) as well as seasoned practitioners and students of HCI (who may be quickly scanning the journal’s contents to see what catches their eye) can benefit from these remarks and reflections.

As well, astute authors-to-be can perhaps gain a few insights as to what level of contribution is necessary to pass muster at the journal—not to mention the ways of conveying one’s results that tend to best resonate with TOCHI’s reviewers and our Editorial Board.

To fully absorb and appreciate both the strengths and limitations of each article’s scientific contributions, one must of course read it in detail—as I hope you will be moved to do when one of these catches your eye, as they originally did my own.

Just follow the “DOI” link immediately after each paper to view it directly in the ACM Digital Library.

You can be the first to see these commentaries on the TOCHI News page (http://tochi.acm.org/news), which I urge you to follow. Please do help spread the word for those TOCHI articles that pique your interest.

And of course, all of your individual downloads, subscriptions, and citations are the loose change in the treasury of the journal’s impact.

But they compound over time and slowly accumulate great intellectual riches.

 


The first Editor’s Spotlight will follow this post shortly. Stay tuned. We will also issue Article Alerts for all of our other great content.