Tay, the neo-Nazi millennial chatbot, gets autopsied - Ars Technica
Children understand early on that words are spoken in a context, and that word and context interact to create meaning. If a child points to a fruit bowl and says ‘apple,’ the child is praised and maybe rewarded with a snack. If the child points to a man walking his dog and says ‘apple,’ the child sees its grown-ups chuckle bashfully. Each invocation is the child sampling a space of contexts and testing an intervention in each; the more dissimilar the contexts, the more information is contained in the grown-ups’ responses. The resulting repetition is unremarkable when the word in question is innocuous. All this changes, of course, when the word is a ‘swear,’ and every query into the context space is consistently jarring to grown-ups. Children may exploit the jarring nature of a swear in all contexts, or simply be discouraged from testing any further.
But what if, when sampling contexts, the child determines that ‘swears’ are as unremarkable as the names of fruit? That despising people based on their race is what ‘we’ do? That we secure ourselves primarily by diminishing others?
Intelligence isn’t some magic pixie dust that intuits ethical and moral norms, and to have that expectation is to apply a standard that we, all of us, would consistently fail to meet. Among ‘intelligent’ humans, these norms took thousands of years to develop, and even today they are unevenly spread. We all carry the gifts and scars of those who labored, reflected, and exploited before us and on our behalf. It should be obvious that a human child would assume a frightful character if consistently exposed to racism and misogyny. We can be all the more certain of this as we watch some of those children, now grown (barely), getting ready to vote.