The NYU Ability Project partnered with Charter Spectrum and the NYC Media Lab to research universal design in home entertainment. As Research Lead, I led an interdisciplinary team in conducting user research, rapid prototyping, and testing with individuals with disabilities to inform Charter’s accessibility strategy.
Charter/Spectrum provided our team with the following goals:
From its inception, the project prioritized participation and feedback from community members with disabilities. We partnered with several New York-based organizations to recruit participants with first-hand experience of access issues in home entertainment.
We researched media distributors, in particular their strategic partnerships with smart home technologies and acquisitions of artificial intelligence solutions to streamline metadata. We also researched applications for emerging technologies in assistive tech and overall trends in media consumption to understand the direction of accessibility and home entertainment.
Usability testing of Spectrum’s products was conducted on the TV set-top box, Roku, Xbox, iPad, and website. Stations for each device were set up around the lab, each staffed by at least one member of the Ability Team. After conducting product-specific task analyses, we identified user goals around searching for live TV and on-demand programming, watching video, and adjusting accessibility settings. These goals informed testing tasks and scripts that were consistent across devices.
The first Spectrum testing session had a total of 9 participants. Before testing, participants were briefly interviewed about demographic and ability-related information. Participants in this first round were between the ages of 28 and 64 and reported watching TV an average of 5 times a week, using a variety of methods (streaming online, mobile apps, set-top boxes, and digital media players). Participants spent approximately 10 minutes at each station and were asked to ‘think aloud’ as they interacted with each system.
Based on these insights, the team built experimental prototypes that ranged from accessibility modifications to new concepts.
Our second testing session had a total of 11 participants, ranging in age from 30 to 66. Participants reported a range of disabilities, including blindness, low vision, deafness, and cognitive and mobility issues. They reported watching television an average of 6 times a week using set-top boxes, Rokus, mobile apps, and other streaming sources.
The graphical user interface (GUI) prototype was built with accessible design enhancements for the iPad and laptop. The design incorporated the WCAG 2.0 large-text standard of 18pt text, or 14pt when bold. The prototype featured a splash page on launch to orient users, a global navigation with a prominent Search feature, and a universal access icon for quick entry to the accessibility settings. Users completed all tasks during testing without difficulty.
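The text sizes above follow WCAG 2.0's definition of "large-scale" text. A minimal stylesheet sketch (the class name is illustrative, not the prototype's actual markup):

```html
<!-- Illustrative only: WCAG 2.0 treats text as large-scale at 18pt,
     or at 14pt when bold — the sizes adopted in the GUI prototype. -->
<style>
  body        { font-size: 18pt; }                     /* large text */
  .label-bold { font-size: 14pt; font-weight: bold; }  /* large when bold */
</style>
```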
A screen-reader-compatible prototype was developed in HTML for the iPad and laptop. The design addressed the screen reader pain points revealed in usability testing: difficulty finding the search feature, improper labelling of page elements, and inconsistent information architecture that made navigation difficult or at times impossible. The prototype’s prominent Search feature loaded the page with the search bar already focused, so any typing was captured immediately. Only one global navigation was included, placed in the header of the HTML rather than the main body of the page, to avoid repeating the menu when navigating between pages. Title text was included for every navigable item on the page, including images, enabling screen reader users to search for specific items rather than browsing.
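The structure described above can be sketched in HTML as follows. This is an illustrative reconstruction, not the prototype's actual markup; the link targets and labels are invented:

```html
<body>
  <header>
    <!-- The single global navigation lives in the header,
         so it is announced once rather than repeated per page. -->
    <nav aria-label="Global navigation">
      <a href="/guide" title="TV Guide">Guide</a>
      <a href="/ondemand" title="On Demand">On Demand</a>
      <a href="/settings" title="Accessibility Settings">Settings</a>
    </nav>
    <form role="search" action="/search">
      <!-- autofocus places the cursor in the search bar on load,
           so typing is captured immediately -->
      <input type="search" name="q" title="Search programs" autofocus>
    </form>
  </header>
  <main>
    <!-- Every navigable item, including images, carries title text
         so screen reader users can search for it directly. -->
    <a href="/show/123" title="Law and Order, Season 1">
      <img src="poster.jpg" alt="Law and Order poster">
    </a>
  </main>
</body>
```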
The HTML prototype optimized for screen reader compatibility was tested with three screen reader users on the iPad and laptop. This text-only prototype had a clearly structured information architecture that enabled users to complete all the tasks without difficulty during testing.
Participatory prototyping techniques were employed in reimagining the remote control at the “Build Your Own Remote” station. Users talked through their process while prototyping to facilitate co-creation between designers and users. Strong existing expectations for the layout of the remote guided each participant’s design and perceived ease of use. Users tended to create two separate sections: a control panel and a direct-input area.
Based on the findings from the participatory prototyping session, the team proposed short- and long-term concepts. In the near term, a remote control app featuring control panel and direct-input areas could be implemented. A longer-term concept, a remote made of malleable smart material, would mold to whatever surface the user places it on to stay put during use, provide haptic feedback when input is received, be rechargeable, and be easy to find when lost.
At the onboarding station, participants were asked for their feedback on the onboarding experience prototype, chatbot feature, and friendly error messaging. Users expressed that they like onboarding that is quick, fun, easy, and informative. They preferred the onboarding experience to focus on elements in the UI/UX that are not necessarily the most obvious features. All users wanted the option to dismiss onboarding—no matter where in the UI/UX it is presented.
When asked which features were important to include in onboarding, responses included: all accessibility features, screen reader features most of all; the position of buttons on the remote control (which can “eliminate the need for additional onboarding in many cases”); general settings; and privacy settings. Importantly, the accessibility settings page should be organized by feature rather than by disability, since users with multiple disabilities, or none, adjust these settings.
Finding content with a second audio program (SAP) and video descriptions was very difficult for users. Our prototype proposed crowdsourcing video descriptions and using IBM Watson to filter them and detect bias. Video playback delivered an extended video description (EVD) when complex visual information was presented in a scene.
Screen reader users were asked to watch an original Law and Order video clip by itself and then watch it with the EVD prototype. The action in the scene focuses on a dog running under police tape and cannot be understood from the dialogue alone. After watching the first time, users said they felt “frustrated” and “that I’m missing a lot, I wish I knew what was going on”. With the EVD prototype, users had overwhelmingly positive responses to the quality of the descriptions, making comments like “that was so awesome. I wish I could take this home” and “I would have liked to watch the whole thing”. A sample description from the clip: “Agent picks up dog behind crime scene tape.”
Gracenote metadata is provided to Charter as a package of XML files. The schema informs the structure of the product navigation UI across Charter’s product line, and the content populates TV schedules and program summaries, enables searching by, e.g., primary cast members, and supports many other features.
Our analysis revealed that accessibility features (Closed Captioning and Descriptive Video Service, tagged as 'CC' and 'DVS', respectively) are identified within the broadcast schedule metadata ('schedules.xml') in a single field, 'quals'. By comparison, the ratings/advisory metadata Gracenote provides on a per-program basis is quite comprehensive, listing content descriptors such as ‘Graphic Language’ under a well-defined 'ratings' heading in the 'programs' file. While a remedy lies upstream from Charter, we recommended that Charter, as Gracenote's customer, request that the metadata provider reorganize and prioritize how it delivers accessibility information.
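To illustrate the analysis, the sketch below pulls accessibility flags out of a schedule file. The XML fragment is a hypothetical, simplified stand-in for entries in 'schedules.xml'; apart from 'quals', 'CC', and 'DVS', the field names are illustrative assumptions, not Gracenote's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified schedule entries; not real Gracenote data.
SAMPLE = """
<schedules>
  <schedule prog_id="EP000001" station_id="10001" time="2019-05-01T20:00Z">
    <quals>CC,DVS</quals>
  </schedule>
  <schedule prog_id="EP000002" station_id="10001" time="2019-05-01T21:00Z">
    <quals>CC</quals>
  </schedule>
</schedules>
"""

def accessibility_flags(xml_text):
    """Map each program id to the accessibility tags found in 'quals'."""
    root = ET.fromstring(xml_text)
    flags = {}
    for sched in root.findall("schedule"):
        quals = (sched.findtext("quals") or "").split(",")
        flags[sched.get("prog_id")] = {q for q in ("CC", "DVS") if q in quals}
    return flags

for prog, tags in accessibility_flags(SAMPLE).items():
    print(prog, sorted(tags))
# EP000001 ['CC', 'DVS']
# EP000002 ['CC']
```

Because the flags sit in one free-form 'quals' field rather than under a dedicated heading like 'ratings', a consumer has to know the tag strings in advance, which is the structural weakness the recommendation to Gracenote addresses.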