Call Sample Studies

A random sample of 50-100 live interactions with an existing IVR (typically a call-center ASR and/or DTMF application) is digitally recorded. Each session is then manually explored using sophisticated audio editing and measurement tools. As many as 35 behaviorally operationalized variables (e.g., time to task completion, prompt lengths, response latencies, errors, timeouts or confusion states, hang-ups, pound-outs, percent task completion, emotional responses) are mined from the sessions and recorded in a spreadsheet for statistical analysis. Each call is also assigned a Customer Experience Index, a number between -5 and +5 that reflects the quality of the user's interactive experience. The deliverable is a report that includes central tendency statistics on how users are actually using the system (which features are used, and with what relative frequency), where users have problems with the system, and suggestions on how the system might be improved.
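
Once mined, the variables lend themselves to straightforward descriptive statistics. A minimal sketch of that last step, assuming a hypothetical CSV layout (the column names task_time_s, error_count, and cx_index are illustrative placeholders, not our actual coding scheme):

    # Minimal sketch: summarize behaviorally mined call-sample variables.
    # Column names are hypothetical placeholders, not the actual scheme.
    import csv
    import statistics

    def summarize(path, columns):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        report = {}
        for col in columns:
            values = [float(r[col]) for r in rows if r[col] != ""]
            report[col] = {
                "mean": statistics.mean(values),
                "median": statistics.median(values),
                "stdev": statistics.pstdev(values),
            }
        return report

    print(summarize("call_sample.csv", ["task_time_s", "error_count", "cx_index"]))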

Expert Reviews

Access numbers, PINs, etc. are obtained for an existing application or IVR, and the system is repeatedly called and interactively explored. Sessions are recorded for subsequent re-analysis. Every path that is available (given the account information provided by the company) is exhaustively explored. Prompts are evaluated on an individual basis, as are error and no-response messages. The deliverable is a two-section report that covers all relevant findings from both a tactical and a strategic perspective. Specific tactical suggestions (anywhere from 40-150, depending on system size) are made regarding individual prompts, timers, etc. A strategic section follows that discusses the merits of a more substantial intervention, such as a complete redesign or a migration from DTMF to speech.
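
Conceptually, exhaustive exploration amounts to walking every reachable path through the call-flow graph. A toy sketch of that idea (the menu structure shown is hypothetical; in practice the live IVR is walked by hand and recorded):

    # Toy sketch: enumerate every navigable path through a call-flow graph.
    # The menu structure is hypothetical, for illustration only.
    def all_paths(flow, node, path=()):
        path = path + (node,)
        children = flow.get(node, [])
        if not children:              # terminal state: one complete path
            yield path
            return
        for child in children:
            yield from all_paths(flow, child, path)

    flow = {
        "main_menu": ["balance", "transfer", "agent"],
        "transfer": ["confirm", "cancel"],
    }
    for p in all_paths(flow, "main_menu"):
        print(" -> ".join(p))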

Application Optimization and Tuning

A detailed investigation of user tendencies designed to discover opportunities to behaviorally tune an existing application. Findings typically yield numerous strategies to improve usability, containment, and task completion.
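
Two of the headline metrics here, containment and task completion, reduce to simple ratios over call logs. A minimal sketch, assuming hypothetical per-call records (the field names are illustrative):

    # Minimal sketch: containment and task-completion rates from call logs.
    # The record fields ("transferred", "task_completed") are hypothetical.
    def rates(calls):
        n = len(calls)
        contained = sum(1 for c in calls if not c["transferred"])
        completed = sum(1 for c in calls if c["task_completed"])
        return {"containment": contained / n, "task_completion": completed / n}

    calls = [
        {"transferred": False, "task_completed": True},
        {"transferred": True,  "task_completed": False},
        {"transferred": False, "task_completed": True},
    ]
    print(rates(calls))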

Dialog Design

Complete specification, from scratch, of a voice user interface. The project begins with a kickoff meeting, followed by extensive requirements gathering and documentation; a Requirements Document is drafted. Working directly with the customer team, the project is defined and a basic call flow is developed. Testing follows (see below); errors are corrected and retested as necessary. When the call flow is stable, we move on to the Detailed Dialog Design (DDD), in which every task, navigational flow, prompt, timer, error action, etc. in the system is specified. When complete, and after customer review, the DDD is tested using a Wizard of Oz procedure. Problem areas are identified and addressed accordingly. The final deliverable is a fully tested DDD, ready for implementation.
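
To give a sense of the level of detail involved, the sketch below models what a single DDD state might specify. The field names are illustrative only; the actual DDD is a reviewed design document, not code:

    # Hypothetical sketch of one state in a Detailed Dialog Design (DDD).
    # Field names are illustrative, not a standard schema.
    from dataclasses import dataclass, field

    @dataclass
    class DialogState:
        name: str
        prompt: str
        no_input_timeout_s: float      # wait this long before reprompting
        error_prompt: str              # played on a recognition failure
        max_errors: int                # errors allowed before fallback
        fallback: str                  # e.g., transfer to an agent
        transitions: dict = field(default_factory=dict)

    get_account = DialogState(
        name="get_account",
        prompt="Please say or enter your account number.",
        no_input_timeout_s=5.0,
        error_prompt="Sorry, I didn't get that. Please try again.",
        max_errors=2,
        fallback="transfer_to_agent",
        transitions={"valid_account": "main_menu"},
    )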

Dialog Assessment

10-12 subjects are asked to navigate a simulated, skeletal version of the proposed call flow to determine whether it makes sense to users. Sessions are digitally recorded and analyzed. Inadequacies are identified, addressed, and retested as needed. This step is recommended prior to WOZ testing of the entire DDD.

Voice User Interface Usability Testing

1) Wizard of Oz Testing: Experimental test subjects are recruited and randomly assigned a subset of all the tasks that a system (as specified in a Detailed Dialog Design) supports, in order to control for order effects (a minimal randomization sketch follows this list). User-system interactive sessions are digitally recorded and analyzed. User problems and design inadequacies are identified and addressed. The design can be retested as needed.

2) Usability Studies: Similar to WOZ testing, but conducted on existing IVRs. Experimental test subjects are recruited and assigned tasks at random. Their interaction sessions are digitally recorded and analyzed. User problems and design inadequacies are identified and addressed. The design is retested as needed.
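
The randomization in both procedures can be as simple as drawing, for each subject, a random subset of tasks in a random order. A minimal sketch (the task names are hypothetical):

    # Minimal sketch: give each subject a random subset of tasks in a
    # random order, so task order is not confounded across subjects.
    import random

    TASKS = ["check_balance", "pay_bill", "update_address", "reset_pin"]

    def assign(subject_id, k=3, seed_base=42):
        rng = random.Random(seed_base + subject_id)  # reproducible per subject
        return rng.sample(TASKS, k)                  # random subset and order

    for s in range(1, 4):
        print(f"subject {s}: {assign(s)}")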