May 9, 2022
Blog: AI, Product Strategy
AI Ethics on Privacy Issues and Bias

Artificial intelligence (AI) is the ability of machines and robots to perform tasks the way humans do. It is artificial because it is man-made, as opposed to human intelligence, which is natural. AI-powered products can work autonomously without human intervention. That can be a good thing: humans can rely on these machines for mundane tasks and spend their time on other things, like being with their families. However, with great power comes great responsibility, and machines with artificial intelligence have their ups and downs.

AI presents two major areas of ethical concern for society: privacy and surveillance, and bias and discrimination.

Privacy and Surveillance

When you use AI-powered technology, you are often unknowingly or unwillingly revealing private data such as your age, gender, location, and preferences. Companies collect this data, analyze it, and use it to give you a better user experience.

AI may use personal data to create new content without the individual's permission. Some applications of AI, like deepfakes, where AI creates a picture or video using a person's likeness, are on the rise and are concerning. For example, I saw a video of former President Obama saying things he never actually said. It was entirely AI generated. It was all fake. Although it was interesting to watch, it was an invasion of his privacy. A person's private data needs to stay private and should not be used for any purpose without the individual's consent. Similarly, facial recognition tools also invade our privacy. Law enforcement can use them to catch criminals, but some people argue against them because they do not want the government conducting mass surveillance on its own people. There have to be limits on what data governments and companies can collect, how they collect it, and what they do with it. Product managers need to study carefully the uses of AI and their repercussions.

Bias and Discrimination

Companies nowadays use AI to filter job candidates. They automate hiring because they believe AI will simplify finding the best candidates for the positions they want to fill, but how can we ensure these digital systems are not biased against groups of people because of race, gender, or other identities? AI can be biased if the humans who design it build their own biases into the design. Articles from The Verge and USA Today show that companies are more likely to hire white men than Black men, and more likely to hire men than women. For example, Amazon had a failed experiment with a hiring algorithm that replicated the company's existing, disproportionately male workforce. AI uses historical data to make predictions, so if an industry has been predominantly white and male for decades, the historical data the AI learns from inherently produces biased results. Black people and women have historically been less likely to be hired, so without diverse data sets to train on, AI will likely continue to be biased. While AI provides many benefits, product managers should be aware of its shortcomings so that it serves its purpose well.
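The way biased historical data produces biased predictions can be sketched with a deliberately simplified toy example. This is not Amazon's algorithm or any real system; the data, group names, and threshold are all hypothetical, and real hiring models are far more complex. The point is only that a model fit to skewed records reproduces the skew:

```python
# Toy illustration with made-up data: a "model" trained on skewed
# historical hiring records simply learns and repeats that skew.
from collections import defaultdict

# Hypothetical historical records: (group, was_hired)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn each group's historical hire rate from the records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Recommend a candidate only if their group's historical rate clears the bar."""
    return model.get(group, 0.0) >= threshold

model = train(history)
# Two equally qualified candidates get different outcomes based only on group:
print(predict(model, "group_a"))  # True  (75% historical hire rate)
print(predict(model, "group_b"))  # False (25% historical hire rate)
```

Nothing in the code looks at a candidate's qualifications; the historical imbalance alone drives the recommendation. Collecting more of the same data would not fix this, which is why diverse and representative training data matters.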

Artificial intelligence helps us accomplish tasks that are complex or that humans do not want to do, but it needs to be handled with care. With enormous amounts of data being produced, there are countless uses for it. To build better products, product managers need to adopt these guiding principles: transparency, accountability, honesty, and fairness.

Transparency, Accountability, Honesty and Fairness

Transparency - Product managers need to be transparent about what data they collect, how they collect it, and how they use it.

Accountability - Product managers should ensure the AI is trained properly and take responsibility for its outcomes.

Honesty - Product managers need to acknowledge their own biases and let the team know how those biases can affect the AI's statistical models and algorithms.

Fairness - Product managers need to make sure that people from different backgrounds are well represented in the AI models they build, since AI is trained on historical data and can favor one group over another.

Product managers need to avoid the unwanted and negative consequences of AI by respecting people's privacy and acknowledging their own biases. They should consider carefully how the products they build impact people's lives and adopt the four guiding principles above to build better AI.


John Santos is a volunteer with the BPMA blog team. He is a PM who has nonprofit experience and is passionate about making an impact in people's lives.