The televised demonstration of Sketchpad, the “Man-Machine Graphical Communication System” Ivan Sutherland developed, is very revealing of how the term “conversation” is misconstrued and used today. The way we use the word “conversation” when talking about AI and chatbots somehow feels wrong. Watching the Sketchpad demo made me question when and why we began to frame our interactions with AI as conversational.
In his thesis on Sketchpad, Ivan Sutherland writes, “The Sketchpad system makes it possible for a man and a computer to converse rapidly through the medium of line drawings.” Yes, we are communicating with the computer, but are we conversing? As I see it, we are issuing commands rather than having a conversation, which to me implies dialogue. Even though huge strides have been made in this field, the conversational capabilities of AI remain limited. These limitations push us to bark orders and give one-word responses so as not to confuse chat agents and to receive instant replies. Could one consequence of AI’s limitations be the spread of a common language of commands and directives?
“9 Design Ideas That Shaped the Web” mentions Professor Scott Fahlman, who was involved in the first uses of emoticons. Fahlman said, “I had no idea that I was starting something that would soon pollute all the world’s communication channels.” Similar to the way emoticons substitute for language, could our conversations with AI develop new metaphors and change the way we communicate? Even while watching the Sketchpad demo, I noticed that metaphors that were new at the time, such as “several pieces of paper” or “calling up copies of master pictures,” relate to the metaphors and language we use today in platforms like Adobe InDesign.
I also feel inclined to bring up Sherry Turkle’s thoughts from “Reclaiming Conversation.” She asks, “What do we become when we talk to machines? What do we forget when we talk to machines?” To add to her questions: what happens when even language becomes extremely convenient to access? Last year Google released Pixel Buds, earbuds that translate between languages. The concept behind Pixel Buds is to allow smooth translation from one language to another and to expand communication among many different groups of people. Something like Pixel Buds can be extremely helpful, but if we begin depending on it (as we rely on GPS and other systems), we lose the experience of learning and knowing. When you don’t learn a language by actually speaking and understanding it, everything exists as surface-level utterances without historical or cultural context. Could AI become a repository for surface-level understandings of our languages? Will we end up forgetting less globally used languages once there is no longer a necessity to learn them? What happens to people’s roots, which are closely tied to the nuances of language?
- To my point on commands and directives: in the USA Today article on Pixel Buds, the author describes “summoning a Google Assistant.” What would happen if we were to work with AI rather than just command it?
- What if we begin to view AI as creative partners or collaborators rather than solo intelligence machines?
- What then happens to human-centered design? Will we have a new process in design where we begin designing for the co-partnership of human and AI?
- How would AI fit into the world of co-creation, participation, and community building? If we begin to use AI as independent machines rather than co-creatives, could we potentially stunt the strides design has made in involving community input?
- When Timothy Johnson walks the audience through how to use Sketchpad, he says at one point, “He’s sloppy while drawing,” referring to the computer. In this moment, Johnson shows how the penchant to anthropomorphize is not limited to the fantastical worlds of children.