Understanding how designers and design play a role in complex technology is closely tied to how we think about the future. Many biases and problems surface if the implications are not thought through upfront. Algorithms are fallible, yet people relate to tech as an infallible being, so even when these biases creep in, it gets harder to make people understand that it was the tech that failed them.
The quote I resonated with the most from the workshop is:
“Design can be an equalising action that distills code and policy into understandable interfaces.”
This makes sense because, as designers, we should answer some questions before we begin designing for AI/ML technology:
How to design for Transparency?
How to design for Auditability?
How to design while upholding ethics?
The questions we must ask as designers are: What data are we gathering? Who is it for? What will it be used for? Does it raise privacy concerns? Do the people whose data is being gathered know about it? Should we let people choose to be manipulated? (Koerth-Baker (2013) argues we should, so long as the outcome is positive, but it is not clear what constitutes a positive outcome.)
I liked Caroline Sinders' data-finding strategies, where she explained the importance of looking at the bigger picture through the example of New York's one-million-tree planting initiative. Unless you look at all the surrounding data, you can only understand one part of the picture. I think this came up very well in a recent Wired article I read: https://www.wired.com/story/artificial-intelligence-makes-bad-medicine-even-worse/. When looking at data sets, you must include the political context and the cultural context, and map out the big trends the data sits within, since these shape it in important ways. On a related note, she talked about addressing machine learning problems by having a diverse team and considering all the edge cases. Edge cases are found by testing the system with all the kinds of people the algorithm is meant for.
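One way to picture "testing it with all kinds of people" is to evaluate a model's performance separately for each group it serves instead of as one aggregate number. This is a minimal sketch of that idea; the group names and data are purely illustrative, not from the workshop:

```python
# Disaggregated evaluation: compute accuracy per group rather than overall,
# so that a subgroup the model fails (an "edge case") is not hidden by the
# aggregate score. Data here is made up for illustration.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

results = accuracy_by_group([
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0),
])
# Aggregate accuracy is 4/5 = 0.8, which hides that group_b sits at 0.5.
print(results)  # {'group_a': 1.0, 'group_b': 0.5}
```

Only when the score is broken out per group does the weak spot become visible, which is the point of recruiting diverse testers in the first place.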
The second part of the workshop got really interesting when she discussed Mechanical Turk, starting with the gig economy and ethics. I enjoyed how Caroline and her team worked through the process of calculating how much a person should be paid! I think it all boils down to simple questions:
DO WE CARE ENOUGH TO MAKE MODELS LIKE MECHANICAL TURK MORE ETHICAL? HOW DO WE BRING GOVERNANCE INTO ML/AI SYSTEMS?
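On the pay side, the core arithmetic is simple even if the ethics are not. Caroline's actual calculation wasn't detailed in my notes, so this is a hedged sketch with illustrative numbers: convert a per-task reward into an effective hourly wage, and work backwards from a target wage to a fair per-task reward.

```python
# Illustrative fair-pay arithmetic for crowdwork platforms like Mechanical
# Turk. The reward amounts and the $15/hour target below are assumptions
# for the example, not figures from the workshop.

def effective_hourly_wage(reward_per_task, minutes_per_task):
    """Hourly wage implied by a per-task reward and average task time."""
    return reward_per_task * (60 / minutes_per_task)

def fair_reward(target_hourly_wage, minutes_per_task):
    """Per-task reward needed to hit a target hourly wage."""
    return target_hourly_wage * minutes_per_task / 60

# $0.10 for a task averaging 5 minutes:
print(effective_hourly_wage(0.10, 5))  # 1.2 ($1.20/hour)

# Reward needed for a 5-minute task to pay $15/hour:
print(fair_reward(15.0, 5))  # 1.25 ($1.25 per task)
```

Even this toy version shows why timing tasks matters: without a measured average task time, a requester cannot know whether a reward is fair.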
This got me thinking, and I am really interested in trying to reimagine some technology (probably Spotify) with all the questions above answered. Caroline talked about how Spotify protects its data, and we don't get to know what goes into the algorithm! Although it doesn't seem that the data Spotify collects from us poses a privacy threat, I want to take it as a challenge to think about how Spotify would look if it were made more transparent, ethical, and auditable. Maybe look for it in a new post by me 🙂