Moral Implications

Machine Learning and Artificial Intelligence

The moral implications surrounding Google Clips revolve around the two technologies at the core of its software: machine learning and artificial intelligence. In essence, machine learning and artificial intelligence create two problems. First, their purpose is to imitate people by performing functions humans can, whether that is “recognizing faces, or translating from one language to another” (Corder, 2018). Second, machine learning generates algorithms far too complicated for humans to understand (Corder, 2018).

The implications of these two problems for Google Clips are serious. First, does it capture people of different skin colors equally (Fowler, 2018)? More broadly, will Google Clips equally include all types of individuals in the moments it considers “important”? Past research and incidents have produced unfortunate results: in 2015, Google Photos algorithms identified black faces as gorillas (Fowler, 2018). These algorithms are built from millions of pieces of data about people, and “come to represent all of these people and, often, even their most subtle or unconscious, bigoted beliefs” (Corder, 2018). Furthermore, because of the breadth and extent of the algorithms, it is nearly impossible to spot potential immoral consequences (Corder, 2018). Although Google has responded to the 2015 fiasco and claims that more inclusive AI is being built, a true solution might rest in developing algorithms that are trained for morality (Corder, 2018). This will be incredibly difficult, “because humans can’t objectively convey morality in measurable metrics that make it easy for a computer to process” (Polonski, 2017). If morality algorithms are in fact created by humans, whose morality will be used as the foundation for such a system?
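To make the difficulty of “measurable metrics” concrete, consider a purely illustrative sketch (not drawn from any of the cited sources, and not Google’s method): one common proxy for fairness is to compare how often an automated system selects people from different demographic groups, then report the ratio between the lowest and highest rates. All group labels, logged outcomes, and thresholds below are hypothetical assumptions for the sketch.

    # Illustrative audit of per-group "capture" rates for a hypothetical clip-selection model.
    # Group names, events, and the 80% heuristic are assumptions, not anything from Google Clips.
    from collections import defaultdict

    def capture_rates(events):
        """events: iterable of (group_label, was_captured) pairs."""
        counts = defaultdict(lambda: [0, 0])  # group -> [captured, total]
        for group, captured in events:
            counts[group][0] += int(captured)
            counts[group][1] += 1
        return {g: captured / total for g, (captured, total) in counts.items()}

    def disparate_impact(rates):
        """Ratio of the lowest group rate to the highest (1.0 = perfect parity)."""
        lo, hi = min(rates.values()), max(rates.values())
        return lo / hi if hi else 1.0

    # Hypothetical log: did the camera deem a moment featuring this person "important"?
    events = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = capture_rates(events)
    print(rates)                    # e.g. {'group_a': 0.67, 'group_b': 0.33}
    print(disparate_impact(rates))  # 0.5, well below the common "80% rule" heuristic

Even this toy audit illustrates Polonski’s (2017) point: deciding which groups to compare, which outcome counts as benefit, and what ratio counts as “fair” are moral judgments that the algorithm cannot make for itself.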

References

Corder, D. (2018). Ethical algorithms: How to make moral machine learning. Medium. Retrieved from https://medium.com/qdivision/ethical-algorithms-how-to-make-moral-machine-learning-e686a8ad5793

Fowler, G. A. (2018). Google’s first camera isn’t an evil all-seeing eye. Yet. The Washington Post. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2018/02/27/googles-first-camera-isnt-an-evil-all-seeing-eye-yet/?noredirect=on&utm_term=.11e0aa8943e6

Polonski, V. (2017). Can we teach morality to machines? Three perspectives on ethics for artificial intelligence. Medium. Retrieved from https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3