NYU Tandon School of Engineering

Thesis for the Master of Science Degree in Integrated Digital Media

Towards the Human-Centered Design of Mixed Reality Environments

The line is blurring between our physical and digital worlds. A new hybrid existence is being created by the proliferation of displays in our real environment and the integration of the real world into virtual environments. Mixed reality (MR) is a continuum of environments spanning completely real environments and completely virtual ones [1]. MR poses new challenges for interactions in both our physical and virtual worlds. In MR environments, humans interact with real and virtual objects together in context. However, the technologies that enable MR are designed for direct user interaction rather than for the larger context within which a human interacts with a system. This creates a fragmented user experience and forces users’ attention to be split between real and virtual worlds. A design process focused on human needs in context contributes to more positive user experience outcomes and user acceptance of MR technologies.

How might we promote the human-centered design of mixed reality environments?

Background

Failures in software projects cost the US economy an estimated $25 to $75 billion annually, and many of the reasons for failure are preventable [2]. Delivery without adequate requirements from customers and users is a problem found in 73% of failed software projects [3] and can be avoided with UX research. UX research is necessary to understand users, whose needs change little over time while features and technologies change rapidly [4]. UX is a major factor in whether or not people use a system, as illustrated in Davis’ technology acceptance model [5].

Today’s agile development methods prioritize shipping product over proper UCD process [6]. Lightweight tools for the UCD process, such as scenarios and personas, are effective ways to make it more agile and communicate a strategic UX vision with team members and stakeholders [7].

MR poses additional challenges: it requires advanced programming skills [8], and high-fidelity prototypes don’t always elicit helpful feedback [9]. There is currently a lack of structured approaches to MR design, leaving trial and error as the default approach [10].

Comparative Analysis

I researched the available tools for understanding user context, as this is critical for MR environments.

Personas outline user needs, goals and tasks but do not provide details on context of use.

The Empathy Map [11] has been adapted for use in the design process [12] but is better suited for market research rather than user research.

The Activity Checklist [13] is a comprehensive list of user context considerations but is difficult to use and does not map to the design process.

The MUSiC method [14] provides a valuable framework for capturing the context of use but does not offer insight from the perspective of the user to inform UX.

Rapid Prototyping

I decided to try rapid prototyping in the context of use to explore new methods for user research in mixed reality environments. These low-fidelity prototypes focused on navigation as this continues to be a critical usability issue with web design and with wayfinding systems in physical spaces.

However, I found that each prototype followed the assumptions of the specific technology delivering the experience, and it was difficult to generalize my findings across them.

Clockwise from top left: Neon duct tape, digital mockup, “Protorama” physical interface, mobile AR navigational aid.

Activity Theory in Practice

The existing method that is closest to capturing user context in a way that supports the design process for user experiences is activity theory. Activity theory posits that human activity is motivated by transforming an object into an outcome, which is mediated by tools and embedded in its context [15].

I designed a large-format Activity System printed as an 11x17 poster so it was large enough to use with familiar tools in the research and design process: post-its and sharpies. I decided the only way to see if it was effective was to test it in a case study with a real user in an MR environment.

Case Study 1: Smart Home Environment

I interviewed a smart home user who lives in a one-bedroom apartment with his cat. His motivation to use smart home technologies was to automate processes to adjust his environment to his preferences while he was at home and save energy while he was away. I created a scenario around his arrival at home, when he would adjust the lighting and temperature through voice input and locomotion.
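
The mechanics of this arrival routine can be sketched as a simple automation rule. The following is a minimal sketch only; the Light and Thermostat classes and the preference values are illustrative stand-ins, not the user’s actual smart home setup or any real device API.

    # Minimal sketch of the arrival scenario as an automation rule.
    class Light:
        def set(self, brightness: int) -> None:
            print(f"Lights set to {brightness}%")

        def off(self) -> None:
            print("Lights off")

    class Thermostat:
        def set_target(self, temp_f: int) -> None:
            print(f"Thermostat target set to {temp_f} F")

    # The user's stored preferences for "home" and "away" states (illustrative values).
    PREFERENCES = {
        "home": {"brightness": 60, "temp_f": 72},
        "away": {"brightness": 0, "temp_f": 62},
    }

    def on_presence_change(light: Light, thermostat: Thermostat, user_is_home: bool) -> None:
        """Adjust lighting and temperature when the user arrives or leaves."""
        state = "home" if user_is_home else "away"
        prefs = PREFERENCES[state]
        if user_is_home:
            light.set(prefs["brightness"])       # restore preferred lighting on arrival
        else:
            light.off()                          # save energy while away
        thermostat.set_target(prefs["temp_f"])   # adjust temperature in both cases

    # Example: presence detected as the user walks in the door.
    on_presence_change(Light(), Thermostat(), user_is_home=True)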

Analyzing this scenario in the Activity System gave me the perspective I was searching for. It provided a clear definition of the activity and the contextual factors involved. The Rules section provided rich UX insights into the scenario. As the head of the household, the user felt undermined when he had to repeat himself to the device and said, “I need it to respect my authority.” Such rules have direct implications for possible design enhancements to the Amazon Echo. Some design solutions could be to increase the radius and volume within which it responds to voice input, accommodate a wider vocal range and various pronunciations, and have it respond in a way that is more deferential to the user’s authority.

I found there to be some redundancy in the findings under “Division of Labor” and “Community” as the unit of analysis in this framework is much larger than the scenario I was using it for. I also found that contextual factors in the physical space were overlooked. I tested out replacing “Division of Labor” with “Environment” and found this change enabled me to include findings that were more relevant to the UX research and design process.

Testing 1: UX Researchers

I had the opportunity to test my v2 prototype with UX researchers at the NYC UX/User Researcher Meetup. First I gave a talk on mixed reality environments and introduced my Activity System as a tool I have found useful in my research. Afterward, 35 UX researchers split into groups for an interactive workshop using MR scenarios. Printed copies of my Activity System were on the tables for the groups’ use. Many groups ignored it while they discussed technical constraints and research methods they were more familiar with. Finally, each team shared their research approaches, and there was consensus that MR environments were very different from their current work.

Of the 35 participants, the majority did not actively engage with the system during their teams’ research planning. However, after the workshop, five people I will call “super users” took copies of the system with them; it clearly resonated with them. What these users had in common was the background needed to understand the system’s value: either they were familiar with the theoretical side or they had experience in the MR space.

The Activity System v2

User testing provided some key findings to consider in improving my prototype. First, the design of the document was not approachable or easy to understand. Second, the system requires more thorough explanation/onboarding before use. Third, it appeared that the system was more suitable for the synthesis phase than for the outset of a project. The scenarios had sparse information on the user, and researchers would need more information in order to find the Activity System useful.

The Context Map v1

Activity theory provided a good foundation for analyzing context. However, I needed to adapt it to understand a user’s context in their interaction with a system incorporating digital and physical components. I designed the Context Map as a worksheet that makes it easy to fill in research findings. I represented the user with a full body to reinforce that consciousness is intertwined with human activity in context. Interaction with these systems is not an isolated cerebral process but can happen through inputs such as gaze, voice, gesture, and locomotion.

Case Study 2: Mixed Reality Head-Mounted Display

For my MR HMD case study, my user was a graduate student at NYU Tandon School of Engineering using the Microsoft HoloLens in a classroom at MAGNET. In the interview she described her research on virtual avatars and the importance of being able to see and interact with them in 3D space. I created a scenario around her workflow in which she would record a POV video of her interaction with a 3D hologram in space to provide feedback to collaborators or design managers. She selected a hologram, placed it on a desk, resized it, and was then able to inspect it from various angles.

As an observer I could not see the holograms she was interacting with. It was critical to get her POV video not only for the scenario but also for documentation. While the task of interacting with a hologram was straightforward, recording a POV video proved to be the most challenging part of the scenario. The user had to already be in the Holograms app and start recording with the voice command, “Hey Cortana, record a video.” Cortana did not always recognize the voice command and would instead pull up search results in a web browser. The system provided little feedback on its status. This case study reflected some of the key concerns with sensing systems raised by Bellotti [16]: we were unsure whether it had recognized the command, was processing the request, or was recording the video.
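
One way to frame the missing status feedback is as a set of explicit states in the voice command’s lifecycle, echoing Bellotti’s questions about whether the system has heard, understood, and acted on a command. The sketch below is a conceptual illustration only; the state names and feedback messages are assumptions, not the HoloLens or Cortana implementation.

    # Conceptual state machine for voice-command feedback (illustrative only).
    from enum import Enum, auto

    class VoiceCommandState(Enum):
        LISTENING = auto()       # waiting for "Hey Cortana, record a video"
        RECOGNIZED = auto()      # the command was heard and matched
        PROCESSING = auto()      # the system is preparing to record
        RECORDING = auto()       # video capture is in progress
        NOT_UNDERSTOOD = auto()  # e.g. the fallback to web search seen in the case study

    # Feedback the system could surface at each state so the user is never guessing.
    FEEDBACK = {
        VoiceCommandState.LISTENING: "Listening...",
        VoiceCommandState.RECOGNIZED: "Heard: 'record a video'",
        VoiceCommandState.PROCESSING: "Starting the recording...",
        VoiceCommandState.RECORDING: "Recording in progress",
        VoiceCommandState.NOT_UNDERSTOOD: "Sorry, I didn't catch that. Please try again.",
    }

    def show_feedback(state: VoiceCommandState) -> None:
        # Surface system status: did it hear me, is it working on it, is it recording?
        print(FEEDBACK[state])

    # Walk through the states the user could not observe in the case study.
    for state in (VoiceCommandState.LISTENING, VoiceCommandState.RECOGNIZED,
                  VoiceCommandState.PROCESSING, VoiceCommandState.RECORDING):
        show_feedback(state)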

Analyzing the MR HMD case study with the Context Map was easier with the updated language. Again, the Rules section provided valuable UX insights that could inform future design enhancements. These implicit rules included pronouncing English a certain way to be recognized by the system, staying within the spatially mapped area, and having consent from others in the room to record video. Improvements to the software and hardware could include support for a wider range of English pronunciations and improved feedback when recording video.

Testing 2: Mixed Reality UX Designers

I tested the Context Map with 19 students in the graduate-level UX course at NYU Tandon School of Engineering. The class was midway through an MR design challenge with Microsoft HoloLens acting as their client, so they were familiar with the unique challenges of designing MR user experiences. I asked them to try using the Context Map with a HoloLens scenario they were already familiar with from class. After the exercise we had a group discussion on their experience using the Context Map.

The majority of users said they could see themselves using the Context Map for future projects. Some students said they wished their team had had this tool at the beginning of their client project in order to make sure they considered all the contextual factors in the MR experience they designed. They had encountered problems with how their design worked within the user’s physical space and believed the Context Map would have guided them to design around this earlier in the process. They wanted to see more case studies using the Context Map to see it in action and better understand how to use it.

User testing provided some key findings for improvements to the Context Map. First, the Context Map was intimidating to new users. It was not clear how to translate the information provided in the scenario to the Context Map format. Second, the Context Map required more thorough onboarding/education before use. Many suggested providing more examples of the Context Map in action to see what types of information fit where. Third, the Context Map proved to be useful for the research and planning phases of a project. The students’ experience with designing MR experiences enabled them to recognize the value of this tool in the design process.

My iteration on the Context Map reconceptualized the original triangular structure inherited from the activity system. I restructured the hierarchy of contextual factors relative to their importance for the user experience.

At the base of the pyramid is Tools: the system components involved in the activity. Tools represent the minimum system requirements for the interaction to take place, e.g. hardware, software, and wi-fi.

The next level is Physical Space: the objects in the space as well as the natural and built environment. This includes factors required for interaction to take place, such as sufficient visible light for holograms to display and defined spaces for spatial tracking, but it is also an area to take note of surfaces and furniture, the relationships of connected devices to each other, and other affordances and constraints in the user’s physical environment.

The next level of the hierarchy is People, which includes stakeholders and community members. Identification of the relevant network of stakeholders and their relationships to the system and each other is an important step in requirements gathering [17]. Included are parties involved in the background, such as hardware and software providers, third-party data buyers, and advertisers.

The apex of the hierarchy is User Needs, such as preferences and social norms. The previous title, Rules, caused confusion for users, and the findings usually pointed to implicit user needs and pain points.

Once I established the relationships of the contextual factors to each other, I sketched out ways to visualize them while focusing on the user journey. I looked at diagrams of ecosystems for inspiration on how to map out relationships in complex interdependent systems. The resulting diagram of user interaction in the context of use provides a framework for describing the context in which a human interacts with a system. I separated User Input from the Tools category and placed it next to the figure of the user, interfacing with the Touchpoint of the system and pointing toward the user’s Goal. Emanating from the user are rings of contextual factors in hierarchical order of their impact on the user’s experience.
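
To make the resulting framework concrete, the structure of the diagram can be sketched as a simple data type that a researcher might fill in for each touchpoint in a user journey. This encoding is illustrative only; the field names mirror the framework described above, and the example values are drawn loosely from the MR HMD case study rather than from a finished worksheet.

    # Illustrative encoding of the Context Map as a fillable data structure.
    from dataclasses import dataclass, field

    @dataclass
    class ContextMap:
        # The user journey at the center of the map.
        user_input: str   # e.g. gaze, voice, gesture, locomotion
        touchpoint: str   # the system surface the user interacts with
        goal: str         # the outcome the user is trying to reach

        # Rings of contextual factors, from the apex (User Needs) to the base (Tools).
        user_needs: list = field(default_factory=list)      # preferences, social norms, pain points
        people: list = field(default_factory=list)          # stakeholders and community members
        physical_space: list = field(default_factory=list)  # objects, surfaces, natural and built environment
        tools: list = field(default_factory=list)           # minimum system components for the interaction

    # Example drawn loosely from the MR HMD case study.
    hmd_recording = ContextMap(
        user_input="voice command",
        touchpoint="Microsoft HoloLens (Holograms app)",
        goal="record a POV video of a hologram interaction to share with collaborators",
        user_needs=["clear feedback on recording status", "recognition of the user's pronunciation"],
        people=["collaborators receiving the video", "others in the room whose consent is needed"],
        physical_space=["spatially mapped classroom", "sufficient visible light for holograms"],
        tools=["HoloLens hardware", "Holograms app", "wi-fi"],
    )

    print(hmd_recording.user_needs)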

The Context Map v2

Next, I created a new version of the Context Map. The 8.5x11 worksheet format is more approachable and makes it easier to fill out multiple worksheets for various contexts throughout a user journey. The focus is on the user journey, with the User Needs, Touchpoint, and Goal featured at the top. The hierarchy of contextual factors and their descriptions are in the bottom-right corner of the worksheet to guide the user.

I tested it with my MR HMD case study. The 8.5x11 format made the tool more approachable and I could easily complete another one if it needed revision.

Testing 3: MR UX Designers

I returned to the graduate-level UX course at NYU Tandon School of Engineering to test v2 of the Context Map. I asked the students to analyze the same motorcycle designer scenario from the last round of testing. Once students finished the exercise, they completed a 9-question survey designed to evaluate the Context Map and solicit student feedback. Questions 1-5 measured student comprehension and perceived usefulness of the Context Map. Questions 6-8 were open-response items about strengths and weaknesses of the Context Map.

The survey results show that students had a positive experience using the Context Map. While some students had found it challenging to translate the information in the scenario to the v1 worksheet, v2 was easier to understand: 75% of respondents agreed or strongly agreed with the statement “I understood how to use the Context Map worksheet for the scenario.” The Context Map was deemed useful in the design process by 69% of students, and 62% of students reported that they would use the Context Map in their own projects.

Answers to the open-response items suggested that changes to the terminology and examples for each category would improve comprehension. Many students wrote that they did not understand the involvement of other people at all. One student wrote, “I don't understand how/why the stakeholders and community fit into what the user is doing. The stakeholders are not the ones using the app, the user is. So why are they included in the functionality?” These statements shed light on students’ mental model of direct, one-to-one interaction between a user and a system. Further research is needed to demonstrate that the new range of input modalities in MR environments, regardless of the technologies enabling them, impacts the people and environment in the context of use.

The survey results on the students’ backgrounds were illuminating. Most students identified their background as Design or as both Design and Engineering, and positive responses correlated with these respondents. Negative responses correlated with those who identified their background as Engineering. This finding points to a gap in reaching this particular audience. Further effort is needed to understand the engineering mental model and the reasons these respondents disagreed with the terminology, logic, or visual representation of the Context Map.

Conclusion

In this thesis I researched and designed a method for HCD in MR environments. In MR environments, humans interact with real and virtual objects together in context. However, my research findings demonstrated that no existing methodologies effectively capture the user needs, people, space, and tools in a user’s context to inform the design of MR environments. I developed a framework for describing the context within which a user interacts with a system. This framework proposes a hierarchy of contextual factors as they relate to the user experience.

My research informed the development of the Context Map, a lightweight tool for use in the design process. As demonstrated, the Context Map is an effective tool for understanding user context in the design of MR environments. This approach has been validated in two case studies in different MR environments, and the tool can be applied to various MR projects regardless of the enabling technologies. The Context Map has proven value in the research synthesis, planning, and design phases of a project. It is hoped that use of this tool will improve UX outcomes, which in turn affect user acceptance of MR technologies and the likelihood of software project failure. Further, its use can aid in the design of more positive and holistic human interactions in MR environments. Arguably, these everyday activities embedded in context are the site of our consciousness.

Further validation is needed for my framework and hierarchy of contextual factors. Input from both research and practice would identify contextual factors that are missing or need reinterpretation, and their hierarchical order as they relate to the user experience. Further research is needed on users of the Context Map. As the design of MR environments is an interdisciplinary endeavor, it is hoped that this tool is useful and accessible to people from a variety of disciplines and industries. The intended users of the Context Map are people involved in the design process including researchers, designers, engineers, and stakeholders. The tool has potential value in the design of software, hardware, services, immersive experiences, and architecture. Additional applications of the framework and the Context Map are needed to demonstrate their value in various project types and phases of the design process.

Further Reading

Further information is available on my Thesis Process Site and Video Documentation.

References

    [1]. Milgram, Paul, and Fumio Kishino. "A taxonomy of mixed reality visual displays." IEICE Transactions on Information and Systems 77.12 (1994): 1321-1329.
    [2]. Charette, Robert N. "Why Software Fails." IEEE Spectrum 42.9 (2005): 36-43.
    [3]. Cerpa, Narciso, and June M. Verner. "Why did your project fail?" Communications of the ACM 52.12 (2009): 130-134.
    [4]. Beyer, Hugh, Karen Holtzblatt, and Lisa Baker. "An agile customer-centered method: rapid contextual design." Conference on Extreme Programming and Agile Methods. Springer Berlin Heidelberg, 2004.
    [5]. Davis, Fred D. "User acceptance of information technology: system characteristics, user perceptions and behavioral impacts." International Journal of Man-Machine Studies 38 (1993): 475-487.
    [6]. McInerney, Paul, and Frank Maurer. "UCD in agile projects: dream team or odd couple?” Interactions 12.6 (2005): 19-23.
    [7]. Kollmann, Johanna, Helen Sharp, and Ann Blandford. "The importance of identity and vision to user experience designers on agile projects." Agile Conference AGILE’09, IEEE, 2009.
    [8]. Abawi, Daniel F., et al. "Efficient mixed reality application development." 1st European Conference on Visual Media Production (CVMP). 2004.
    [9]. Rettig, Marc. "Prototyping for tiny fingers." Communications of the ACM 37.4 (1994): 21-27.
    [10]. Geiger, Christian, et al. "Rapid prototyping of mixed reality applications that entertain and inform." Entertainment Computing. Springer US, 2003. 479-486.
    [11]. Osterwalder, Alexander, and Yves Pigneur. Business Model Generation: a Handbook for Visionaries, Game Changers, and Challengers. John Wiley & Sons, 2010.
    [12]. Ferreira, Bruna, et al. "Designing Personas with Empathy Map." SEKE. 2015.
    [13]. Kaptelinin, Victor, Bonnie A. Nardi, and Catriona Macaulay. "The Activity Checklist: A Tool for Representing the “Space” of Context." Interactions 6.4 (1999): 27-39.
    [14]. Bevan, Nigel, and Miles Macleod. "Usability measurement in context." Behaviour & Information Technology 13.1-2 (1994): 132-145.
    [15]. Kuutti, Kari. "Activity Theory as a Potential Framework for Human-Computer Interaction Research." Context and Consciousness: Activity Theory and Human-Computer Interaction, edited by Bonnie A. Nardi, MIT Press, 1996, pp. 17-44.
    [16]. Bellotti, Victoria, et al. "Making sense of sensing systems: five questions for designers and researchers." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2002.
    [17]. Sharp, Helen, Anthony Finkelstein, and Galal Galal. "Stakeholder identification in the requirements engineering process." Proceedings of the Tenth International Workshop on Database and Expert Systems Applications. IEEE, 1999.