3rd and 4th iteration of prototype, HoloLens case study, user testing.
This week I…
After struggling with the theoretical implications of my changes to the Activity System in v2, I realized that my adaptation serves a specific purpose, and I took ownership of it. Activity theory provided a good foundation for analyzing context, but I needed to adapt it to understand a user’s context when interacting with a system that incorporates digital and physical components. I redesigned it to give designers more space to fill in sections and placed a stick-figure icon to represent the subject. I deliberately chose a full-body representation to reinforce the point that consciousness is intertwined with human activity in context: interaction with these systems is not an isolated cerebral process but can happen through inputs such as gaze, voice, gesture, and locomotion.
While v3 took full ownership of the Context Map separate from activity theory, v4 established it as a working tool for designers. I updated the branding, changed the colors to black and white so it can be easily printed, and added fields ‘Designed for’ and ‘Designed by’ to make it a worksheet ready to be used on projects.
For my HoloLens case study I chose a scenario that seemed to be a relatively simple task that collaborators or design managers might do in reviewing 3D models for a project. Tatiana was my lovely volunteer for the scenario where the user goal was to record a video of a 3D hologram in space with her feedback. From the Holograms app she selects a hologram, places it on a desk, resizes it and is then able to inspect it from various angles.
We discovered that while the task of placing and interacting with a hologram was straightforward, recording a POV video (and later trying to download it from the HoloLens) was another story. The Bloom gesture, which opens the Start menu, automatically stops video recording, so she had to already be in the Holograms app and start recording with the voice command, “Hey Cortana, record a video.” Cortana did not always recognize the voice command and would instead pull up search results in a web browser. Tatiana had to repeat herself several times, trying to soften her Brazilian accent so the system would recognize her command. Even after the request was processed successfully, feedback on whether recording was in progress was confusing: a small white light on the front of the device apparently indicates to others that it is recording, but the UI confirming ‘Recording’ inside the HoloLens was at first difficult to find.
After recording the case study, downloading the POV video from the HoloLens proved to be an even more frustrating experience. From the Start menu I navigated to the Photos app, selected the videos, and chose ‘Share’, but there are currently only two options: Facebook or YouTube. I only wanted the files intact so I could edit them into shorter segments for my research presentation. The workaround required connecting the HoloLens to another Windows 10 computer; many thanks to Elton for his help and his commentary on the UX of little things such as typing his long, complicated password in the HoloLens. After selecting files to download, there was no feedback on whether the download was in progress. In fact, the file previews disappeared for a few minutes, leaving us to wonder whether they had been accidentally deleted; fortunately, a few minutes later the video files were indeed downloaded to the other Windows 10 computer. Luckily the footage was worth it!
Completing the Context Map for the HoloLens case study felt more natural with the v4 prototype. Even though I designed it myself, the document felt more like a worksheet than a finished design deliverable while I was using it, and the updated language made it clearer which pieces go where and how they relate to each other. I believe organizing all these findings from the case study in the Context Map creates a clear picture of the context within which Tatiana was operating and articulates everything required to enable this activity.
Thanks to Dana I had the opportunity to test my v4 prototype with 19 students in the graduate-level UX course at NYU Tandon School of Engineering. First I gave a brief overview of the virtuality continuum and the broad range of mixed reality environments, the need to consider a user’s context across digital and physical components, and my proposed solution: the Context Map. Since the class has a client project for Microsoft HoloLens, I walked them through my HoloLens case study and showed how the Context Map structures my findings in a way that can inform future designs.
After the talk I handed out 11x17 prints of the Context Map to each student and asked them to try using it with a HoloLens scenario they were familiar with: a designer working at a desktop computer with a 3D CAD model of a motorcycle, then transferring it to a hologram and interacting with it in physical space (from the 8ninths HoloLens Design Patterns).
Of the 19 users, the majority said they could see themselves using the Context Map on future projects. Those who did not agree said they were confused about what to put where and needed to see more examples of the Context Map in action to really understand how to use it.
I met with Professor Anne-Laure Fayard to discuss my thesis project. Her perspective was really valuable given her cognitive science background and familiarity with activity theory, as well as her experience with the human-centered design process and design thinking tools. She highlighted the importance of making the Context Map accessible for designers and suggested I change the language to be less theoretical and more specific to the design process.
This week’s “Thesis Hack Day” with my Thesis Accountability Partners was spent mostly filming the HoloLens case study.