Certainly Uncertain - June 1, 2018

Protect the User: Designing for Security

By Jessye Holmgren-Sidell

As designers, we have, and should embrace, the powerful opportunity to construct customizable interfaces that help restrict government access and restore user autonomy.

“We are all activists now,” says cybersecurity counsel Jennifer Granick in her 2017 TED Talk. “And that means we all have something to worry about from surveillance.” She goes on to explain, in detail, how the American government collects our online data “easily, cheaply, and without warrant” (Granick, 2017). In 2013, Edward Snowden exposed thousands of classified NSA documents detailing the surveillance measures used on United States citizens. As designers, we have, and should embrace, the powerful opportunity to construct customizable interfaces that help restrict government access and restore user autonomy. And yet, there is still little protection in place to stop data collection from happening through digital platforms. To incorporate surveillance protection into the current User Experience (UX) design process, we must design for user safety rather than just efficiency, change the frequently hostile language and imagery we use to represent security, and communicate directly with security experts.

The US government acquires our data through online services and mobile applications like Facebook, Amazon, LinkedIn, and Google. What is not so apparent is that users often willingly provide access to that information. Ame Elliott, Design Director for the nonprofit security organization Simply Secure, explains that UX designers create interfaces that follow “the path of least resistance” (Elliott, 2017). LinkedIn, for example, asks new members to share their address book with just a simple click; it is far easier to hit the large “share” button than to find the (much less apparent) “X” to skip that part of the registration. “The truth is people have no interest in using applications or websites,” says UX expert Paul Boag. “They are tools for a goal. [Users] want to use your website or application for the smallest amount of time” (Boag, 2017). In many cases, the path of least resistance nudges users into revealing personal information. And the consequences can be disastrous.

There is no guarantee how companies will use or protect collected data, and that uncertainty threatens user safety. In some cases, companies pair shared information with machine learning to tailor experiences: LinkedIn generates specific job postings and suggested connections for members; Facebook’s algorithm caters ads and news to users based on recorded interests; Amazon uses search history to recommend products to its customers. In all of these cases, machine learning improves, or at least streamlines, the user experience.

In March 2018, however, The New York Times and The Guardian revealed that Cambridge Analytica “accessed data of about 50 million Facebook users” (McKinnon, 2018). Researcher Aleksandr Kogan designed a personality-quiz app for the social media platform that asked users for access to their profile pages. He then sent that recorded data to Cambridge Analytica to build 30 million voter-targeting profiles. Facebook maintains that the quiz breached none of its systems, but journalist Robinson Meyer explains, “It’s almost like Facebook was a local public library lending out massive hard drives of music, but warned people not to copy any of it to their home computer” (Meyer, 2018). A warning is not encrypted protection. Social media asks users to share parts of their lives online, but with the understanding that users control who views those shared moments. By following the path of least resistance and allowing a supposedly harmless quiz to access their profiles, millions of people involuntarily compromised their data. Facebook allowed security to be outweighed by a streamlined user experience.

Before designing for the path of least resistance, we should understand that path’s real purpose. LinkedIn asks users to share their address books to help connect them with employers and opportunities, but LinkedIn is also a service that needs members. By sending out invitations to everyone in a user’s address book, it reaches potential new clients who will have to register on the platform to accept the invitation. We also need to recognize what the path is bypassing. Users share their entire address books to avoid individually selecting who can view their profiles; that would be tedious and time-consuming. In doing so, however, they give up control and lose autonomy over the process. Finally, we must consider the path’s consequences. These can range from users inundating everyone they know with LinkedIn invitations to handing a “voter-profiling company” the data needed to target them during an upcoming election. With these considerations in mind, we can reconfigure the path of least resistance to incorporate user safety, even if that just means making the “X” out option as prominent as the “share” button.
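For teams that actually build these flows, the change can be very small. Below is a minimal sketch, in TypeScript, of what an address-book prompt might look like when declining carries the same weight as consenting. It is not LinkedIn’s actual code; the element names, copy, and structure are assumptions made purely for illustration.

```typescript
// A minimal, hypothetical sketch (not any real product's code) of a
// contact-sharing prompt in which consenting and declining carry equal
// visual and interactive weight.

type ContactConsent = "shared" | "skipped";

function buildContactPrompt(onDecision: (choice: ContactConsent) => void): HTMLElement {
  const dialog = document.createElement("div");
  dialog.setAttribute("role", "dialog");
  dialog.setAttribute("aria-labelledby", "contact-prompt-title");

  const title = document.createElement("h2");
  title.id = "contact-prompt-title";
  title.textContent = "Find people you already know?";

  // Plain-language disclosure: say exactly what will be read and why,
  // before asking for anything.
  const explanation = document.createElement("p");
  explanation.textContent =
    "If you share your address book, we will read the names and email " +
    "addresses in it to suggest connections. You can skip this and add " +
    "people individually at any time.";

  // Both buttons share the same class, size, and place in the tab order,
  // so declining is as easy as consenting — no hunting for a tiny “X”.
  const share = document.createElement("button");
  share.className = "choice-button";
  share.textContent = "Share my address book";
  share.addEventListener("click", () => onDecision("shared"));

  const skip = document.createElement("button");
  skip.className = "choice-button";
  skip.textContent = "Skip for now";
  skip.addEventListener("click", () => onDecision("skipped"));

  dialog.append(title, explanation, share, skip);
  return dialog;
}

// Usage: nothing is uploaded unless the user explicitly chooses "shared".
document.body.appendChild(
  buildContactPrompt((choice) => {
    if (choice === "shared") {
      // Only here would an upload begin; skipping triggers no data collection.
      console.log("User consented to share contacts.");
    } else {
      console.log("User skipped; continue registration without contacts.");
    }
  })
);
```

The point is not the markup but the symmetry: the person who skips spends the same effort, and loses nothing, compared with the person who shares.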

Online security iconography and verbiage focus so much on keeping threats out that they forget to let users in.

Security services frequently use negative language and imagery to represent their products. Ame Elliott calls this practice “the language of no” and maintains that it deters potential users from installing protective software (Elliott, 2017). Cybersecurity company Symantec, for example, offers defense methods that “Protect against tomorrow’s attack” and “Sharpen your responses after an attack and prevent the next one.” The website’s aggressive tone implies that users are responsible for security attacks because they were not “sharp” enough to recognize obvious threats to the system. Proficio, another cybersecurity service, represents incident response with a crosshairs icon; the US Department of Homeland Security uses a picture of a lock to link to its cybersecurity overview page. These graphics attempt to scare clients into secure behavior: do not open that link, do not download that file, or attack is imminent. Online security iconography and verbiage focus so much on keeping threats out that they forget to let users in.

We can change the “language of no” to the “language of yes.” “You don’t need to be a cryptographer to work in security…You don’t need to be technical,” says Elliott (2017). Indeed, designers are integral to the cybersecurity field precisely because it seems so technical and unapproachable. And just because security involves technology does not mean our designs have to be technical or cryptic. We have the opportunity to help create products and services that encourage users to secure their data without resorting to scare tactics. TunnelBear, for instance, is a virtual private network that uses fun and friendly imagery to explain its functionality. Images show the mascot, a cartoon bear, physically blocking users’ faces to protect them from online surveillance. “Browse privately with a bear,” the website reads. “It’s easy to enjoy a more open Internet.” The language is humorous, with no mention of “attacks” or “threats” to the system. TunnelBear makes security approachable and inviting, a practice we can and should adopt more widely.

Our expertise and research can help prevent security experts from making assumptions about users’ behaviors.

In order to make cybersecurity understandable, however, we must first communicate directly with security experts. According to Sara “Scout” Sinclair Brody, the Executive Director of Simply Secure, “Neither security nor usability are binary properties. There’s a lot of grey area when it comes to whether something is secure or insecure” (Sinclair Brody, 2016). She explains that security experts ask, “Is this the most secure solution possible?” while designers ask, “Is this secure enough for my user, while not being restrictive?” (Sinclair Brody, 2016). It is, therefore, critical that we know how security experts are incorporating users and their needs into product development. As designers, we conduct interviews to understand how users want to move through an interface and then create personas. Our expertise and research can help prevent security experts from making assumptions about users’ behaviors.

Likewise, security experts can help make our solutions “as secure as possible” to ensure that we protect the users for whom we are designing. We need to understand security jargon to properly translate that information for non-expert consumers. We should be asking security experts questions to facilitate collaboration between our two fields. Sinclair Brody (2016) explains that designers must know to which security threats our shared project is most vulnerable and how its software will protect against those threats. We can then consider how users put themselves at risk and design personas that reflect those specific actions. This has already been put into practice by security and usability designer Gus Andrews, who created personas with a range of privacy concerns and potentially risky behaviors. He intended for them to “communicate user needs” to security experts in the terms those experts provided (Andrews, 2015). By using personas like Andrews’s and continuing the conversation with security experts, we can create designs that keep users secure without restricting their experience or defaulting to the path of least resistance.
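To make that collaboration concrete, a persona can be captured as a small, structured artifact that designers and security experts can read and amend together. Here is a hypothetical sketch in TypeScript; the fields and the example persona are illustrative assumptions, not Andrews’s actual personas or any published format.

```typescript
// A hypothetical sketch of a privacy-focused persona as a structured,
// shareable record. Field names and values are illustrative only.

interface PrivacyPersona {
  name: string;
  goal: string;                 // what the person is trying to accomplish
  privacyConcerns: string[];    // what they worry about, in their own words
  riskyBehaviors: string[];     // habits a security expert would flag
  threatsToDiscuss: string[];   // threats to raise with the security team
}

const exampleOrganizer: PrivacyPersona = {
  name: "Community organizer",
  goal: "Coordinate volunteers without exposing their contact information",
  privacyConcerns: [
    "Who can see my messages and member lists?",
    "Could this data be sold or handed over?",
  ],
  riskyBehaviors: [
    "Reuses one password across services",
    "Shares the full address book whenever an app asks",
  ],
  threatsToDiscuss: [
    "Third-party apps requesting profile access",
    "Unencrypted storage of member data",
  ],
};

// In a workshop, each persona becomes a concrete prompt:
// "Is this secure enough for this person, while not being restrictive?"
console.log(`${exampleOrganizer.name}: ${exampleOrganizer.goal}`);
```

A shared record like this lets the security team answer, threat by threat, whether a design is secure enough for that specific person without being restrictive.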

We can avoid creating for the path of least resistance by determining the path’s real purpose, what it is bypassing, and the full extent of its consequences.

As the US government continues to collect citizens’ data without warrant—as LinkedIn uses the path of least resistance to remove autonomy, as Facebook “quizzes” convey data to voter-profiling companies without permission—we must integrate surveillance protection into our user experience designs. We can avoid creating for the path of least resistance by determining the path’s real purpose, what it is bypassing, and the full extent of its consequences. Additionally, we can improve users’ relationships with cybersecurity services by incorporating positive language and imagery, while also communicating directly with security experts on joint projects. In implementing these changes, we will create a safer, more pleasant online environment and, in doing so, optimize the user experience. Designers not only have the opportunity to alter the way people perceive cybersecurity, but also the responsibility to invest our user-centric methods in protecting the public from surveillance.

Jessye Holmgren-Sidell is a Master of Graphic Design Candidate at North Carolina State University. She’s interested in inclusive design and its impact on design research methods. She also enjoys bookmaking.

References

Andrews, G. (2015, April 14). User Personas for Privacy and Security. Retrieved from https://medium.com/@gusandrews/user-personas-for-privacy-and-security-a8b35ae5a63b

Boag, P. (2017, July 19). Users always choose the path of least resistance. Retrieved from https://boagworld.com/marketing/users-will-always-choose-the-easiest-option-so-if-we-want-a-competitive-advantage-we-must-focus-on-simplicity/

Elliott, A. (2017, February 28). Pre-Work Talk Berlin 02/2017 – Designing for Trust. Retrieved from https://www.youtube.com/watch?v=lOt_mc9FRDg&list=PLgKQebNo0trgNxpfvAF2u6KkybOeJju8l&index=5

Granick, J. (2017, April). How the US Government Spies on People Who Protest – Including You. TED. Retrieved from https://www.ted.com/talks/jennifer_granick_how_the_us_government_spies_on_people_who_protest_including_you

Meyer, R. (2018, March 20). The Cambridge Analytica Scandal, in 3 Paragraphs. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2018/03/the-cambridge-analytica-scandal-in-three-paragraphs/556046/

McKinnon, J. D. (2018, March 20). FTC Probing Facebook Over Data Use by Cambridge Analytica. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/ftc-probing-facebook-over-data-use-by-cambridge-analytica-1521561803

Sinclair Brody, S. (2016, July 5). Talking Across The Divide: Designing For More Than “It’s Secure”. Retrieved from https://simplysecure.org/blog/talking-across-divide
