A recent article on The Verge describes how a college student used an autoregressive language model and deep learning to produce texts that appear to have been composed by a human. Out of 26,000 viewers of the blog, only one was able to identify it as the product of a mechanical algorithm, based on a perceived lack of substance.
One Senate committee defines Artificial Intelligence as “the ability to perform tasks that normally require humanlike thinking,” including the learning of new tasks. While this is a good start, it does not account for the careful observer’s critique of the blog post: “Zero substantive content, pure regurgitation.” But how did that human observer recognize substance? What was different about the composition, syntactically, lexically, or otherwise? Somehow human intelligence remains distinct from its machine-generated counterpart.
To have the human sensory experience of a tree, and from it to derive the idea of a tree, is a phenomenon of our non-material faculties. And while words on the screen may have been produced by a machine, their lack of mortal substance (and there’s the rub!) ultimately revealed something amiss, something less than human intelligence.
AI is surely an excellent tool for analyzing content, tracking the movements of users, and adapting and filtering what they see. But writing intelligent, creative content is another matter: that can only be done by something intelligent, something human.