We are steadily moving towards AI that thinks like a human. We are seeing some of it already: when you go online after binge-watching a Netflix series, you will see ads similar to the program you watched. It may feel intrusive, even "Big Brother"-like, but you are much more likely to be interested in those targeted ads than in random promotions for diapers. AI is also used by email providers to filter spam.
AI is significantly shaped by exposure to biased information, which caused the debacle of Microsoft's Tay Twitter chatbot in 2016. Designed to interact with and learn from other users, Tay was withdrawn within 24 hours because it quickly learned, and repeated, racist and other inappropriate comments. Additionally, AI technology evaluates massive amounts of historical information, which can itself produce biased results. For example, suppose you want to source your next great executive and you use the traits of other successful executives in your company or field as your criteria. The problem is that the results will likely point to a white male because, statistically, that is the demographic that has held most executive roles, regardless of ability.
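The executive-sourcing problem above can be sketched in a few lines. This is a deliberately simplified illustration, not any real hiring system: the data and the `naive_recommendation` function are hypothetical, standing in for any model trained only on historical outcomes.

```python
from collections import Counter

# Hypothetical historical data: demographic profiles of past executives.
# The skew toward one group reflects hiring history, not ability.
past_executives = (
    ["white male"] * 85
    + ["white female"] * 8
    + ["black male"] * 4
    + ["black female"] * 3
)

def naive_recommendation(history):
    """Recommend the profile most common among past hires --
    a stand-in for a model that learns purely from historical outcomes."""
    profile, count = Counter(history).most_common(1)[0]
    return profile

print(naive_recommendation(past_executives))  # -> white male
```

Because the model's only signal is who succeeded before, it faithfully reproduces the historical imbalance rather than measuring who could succeed next.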