
How Can AI Transform Voice Language Analysis?

MGXD students collaborate with the Laboratory for Analytic Sciences (LAS) to prototype the future of interface design for AI-assisted search, intelligence analysis, and LLMs.

[Image: three different interface designs from the scenario videos]

Master of Graphic & Experience Design (MGXD) students collaborated with the Laboratory for Analytic Sciences (LAS) here at NC State to explore the potential of machine learning to assist in voice language analysis tasks.

[Image: graduate students working together in a studio space]

As AI/ML technologies become more prevalent in modern workflows, new conveniences emerge even as some cognitive burdens increase. The advent of automatic speech recognition and other AI/ML technologies, for example, has not only added more tasks to the language analyst's workflow but also unleashed new possibilities for information retrieval and exploration. In this project, MGXD students used human-centered research methods to prototype a unified environment that enables analysts to take full advantage of novel technologies while seamlessly pivoting between the various stages of their workflow.

Although the research team focused specifically on voice language analysis in the intelligence community for this project, there are many applications for this kind of interface across other domains: helping hearing-impaired individuals understand the nuance of a conversation; facilitating better communication across cultures in business or government policy meetings; helping academic researchers gather and analyze materials from a vast array of lectures and conversations; and capturing and preserving the nuances of oral languages. Machine learning is opening up powerful capabilities for engaging with human language in new ways to provide critical insight and enable action.

The core research question of this project:

How might the design of an interface use the affordances of ML to enable voice language analysts to quickly produce reliable and robust intelligence that accurately conveys content, intent, and context?

Students were asked to:

  • Envision a unified interface environment
  • Prototype new AI-powered ways of searching & finding
  • Visualize patterns in the data
  • Focus on language analyst needs/desires to create an enjoyable user experience

This material is based upon work done, in whole or in part, in coordination with the Department of Defense (DoD). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the DoD and/or any agency or entity of the United States Government.

Final Prototypes

Scanner Persona. Designers: Diksha Bahirwani, Isha Parate, Kayla Rondinelli
Translator Persona. Designers: Ned Babbott, Kevin Ward
Quality Control Persona. Designers: Sasa Crkvenjas, Adam Noel

Human-Centered Design Process

[Image: three students and a mentor from LAS looking at a laptop as they work through a language analysis exercise]
Personas:

  • Scanner/Sloane: identifies information or activity that answers all or part of an intelligence requirement and meets reporting thresholds.
  • Translator/Cameron: translates foreign language material into coherent English.
  • QCer/Ferris: conducts quality control review on summaries and translations, and the reports that reference them.
  • Personas & Scenarios: images of personas and scenarios written out and diagrammed
  • As-Is User Journey Maps: image of a diagrammed user journey map
  • Ideation/"What If": students sketching around a table using "What If" questions as prompts
  • Benchmarking: images of many different types of data visualizations
  • Sketches: many early sketches for the project
  • Feature Sets: image of Post-it notes in Miro clustering interface features into logical groups
  • Critiques: images of early work with students and analysts in discussion
Hi-fi Prototypes & Scenario Videos

This story was originally published on the website of Professor of Graphic & Experience Design Helen Armstrong.