The Cambridge Union, in Cambridge, UK, is a 200-year-old debating society, one that has played host to some of the world’s biggest leaders and some of the most famous political and cultural debates. But it has just taken a huge step – allowing a non-human participant in a debate.
Project Debater, IBM’s AI software, was used to help two debate teams as they faced each other, debating the merits of AI and whether it will do more harm than good. IBM’s software can extract arguments from text or audio, categorize them, and summarize them through a synthesized voice.
This isn’t the first time the software has been used; in February it faced a world champion debater, Harish Natarajan, on the question of whether preschool education should be subsidized by the government. Although the human debater narrowly won, Project Debater drew its arguments from more than 400 million articles published on the internet.
This time was different; this time, Project Debater showed off something IBM is hoping to sell to businesses. Taking in over 1,100 arguments, both for and against AI, submitted to IBM several days before the event, it categorized them all, discarded some, and sorted the rest into five themes.
For each of those themes, the AI software presented supporting arguments, leaving the human teams to slog it out afterward.
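IBM has not published Project Debater’s internals, but the pipeline described above – take crowd-submitted arguments, discard weak ones, group the rest into themes – can be illustrated with a deliberately simple sketch. Everything here (the keyword-overlap scoring, the seed themes, the minimum-length filter) is a hypothetical stand-in for the real system’s far more sophisticated NLP:

```python
def tokens(text):
    """Crude tokenizer: lowercase words longer than three characters."""
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 3}

def cluster_arguments(arguments, seeds, min_words=4):
    """Discard very short arguments, then assign each remaining one
    to the seed theme whose keywords it overlaps most."""
    themes = {label: [] for label in seeds}
    discarded = []
    for arg in arguments:
        if len(arg.split()) < min_words:        # crude quality filter
            discarded.append(arg)
            continue
        best = max(seeds, key=lambda s: len(tokens(arg) & tokens(seeds[s])))
        themes[best].append(arg)
    return themes, discarded

# Hypothetical seed themes and submissions, for illustration only.
seeds = {
    "jobs": "AI will destroy or create jobs and employment",
    "bias": "AI systems can encode bias and discrimination",
}
submissions = [
    "Automation powered by AI threatens millions of jobs",
    "Training data bias makes AI systems treat people unfairly",
    "AI bad",                                   # too short: discarded
]
themes, dropped = cluster_arguments(submissions, seeds)
```

A production system would replace the keyword overlap with learned sentence embeddings and a proper clustering algorithm, but the shape of the task – filter, score, group into a handful of themes – is the same.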
According to Noam Slonim, the lead engineer on Project Debater at IBM, the technology will soon be made available to certain cloud computing customers. IBM’s vision is to help businesses better understand their customers and employees, and governments their citizens – whether those organizations will take any notice is a different matter altogether!
IBM says that this is an example of the way humans and AI can work together, rather than against each other, in the future. It also shows off just how quickly natural language processing is progressing.
Another computer scientist at IBM, Dan Lahav, says the machine was deliberately fed racist and obscene comments to see if it would repeat them – it didn’t. The NLP was advanced enough to weed out any comments that were not relevant to the debate or not persuasive enough to be included.
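The simplest version of the safeguard Lahav describes is a blocklist check applied before a submission can ever be voiced. This is only a minimal, hypothetical sketch – the real system reportedly filters on relevance and persuasiveness as well, using methods IBM has not disclosed:

```python
# Placeholder terms standing in for a real moderation blocklist.
BLOCKED = {"blockedterm1", "blockedterm2"}

def is_clean(comment):
    """Reject a submission if it contains any blocked term."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not (words & BLOCKED)

submissions = [
    "AI will transform medicine for the better",
    "here is an offensive blockedterm1 remark",
]
accepted = [c for c in submissions if is_clean(c)]
```

Word-list filtering alone is famously easy to evade, which is why a debate system would also need the relevance and persuasiveness scoring mentioned above.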
This has failed in the past. Many of us remember Tay, the Microsoft Twitter chatbot that had to be withdrawn after it learned to tweet abusive and racist comments by imitating similar posts from other users. Not Microsoft’s finest moment!
Project Debater is by no means perfect but then, neither are humans. As to whether AI will do more harm than good, Neil Lawrence, a professor of machine learning at Cambridge University, believes it will do more good – but says we should never ignore the potential for harm.
“You are better off assuming it is going to do more harm over the next 10 years because then you watch out for the pitfalls.”