
Nonprofits are in trouble. Could more sensitive chatbots be the answer?


In today's attention economy, impact-driven organizations are arguably at a disadvantage. Since they have no tangible product to sell, the core of their appeal is emotional rather than practical: the "warm glow" of contributing to a cause you care about. But emotional appeals call for more delicacy and precision than standardized marketing tools, such as mass email campaigns, can sustain. Emotional states vary from person to person, even from moment to moment within the same person.


Pallab Sanyal and Siddarth Bhattacharya, professors of information systems and operations management at George Mason University, believe that artificial intelligence (AI) can help solve this problem. A well-designed chatbot could be programmed to calibrate persuasive appeals in real time, delivering messaging more likely to motivate someone to take a desired next step, whether that's donating money, volunteering time or simply pledging support. Automated solutions such as chatbots can be especially rewarding for nonprofits, which tend to be cash-conscious and resource-constrained.

"We completed a project in Minneapolis and are working with other organizations, in Boston, New Jersey and elsewhere, but the focus is always the same," Sanyal says. "How can we leverage AI to enhance efficiency, reduce costs, and improve service quality in nonprofit organizations?"

Siddarth Bhattacharya. Photo provided

Sanyal and Bhattacharya's paper (coauthored by Scott Schanke of the University of Wisconsin-Milwaukee) describes their recent randomized field experiment with a Minneapolis-based women's health organization. The researchers designed a custom chatbot to interact with prospective patrons through the organization's Facebook Messenger app. The bot was programmed to adjust, at random, its responses to be more or less emotional, as well as more or less anthropomorphic (human-like).
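The two-factor random assignment described here can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code; the condition labels and the choice of three tone levels are assumptions for demonstration.

```python
import random

# Hypothetical sketch: each new conversation is independently assigned an
# emotional-tone level and an anthropomorphism setting, giving the 2-factor
# randomized design the article describes. Labels are illustrative.
EMOTION_LEVELS = ["informational", "moderately_emotional", "highly_emotional"]
ANTHRO_LEVELS = ["plain", "human_like"]

def assign_condition(seed=None):
    """Randomly assign one conversation to an experimental cell."""
    rng = random.Random(seed)
    return {
        "emotion": rng.choice(EMOTION_LEVELS),
        "anthropomorphism": rng.choice(ANTHRO_LEVELS),
    }

condition = assign_condition()
```

Assigning conditions per conversation (rather than per user session or per message) keeps each completed-or-abandoned outcome tied to exactly one experimental cell.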

"For the anthropomorphic condition, we introduced visual cues such as typing bubbles and slightly delayed responses to mimic the experience of messaging with another human," Sanyal says.
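A minimal sketch of the delayed-response cue Sanyal mentions: before sending each reply, the bot waits roughly as long as a person would take to type it. The typing-speed constant and cap are assumptions, not figures from the study.

```python
# Assumed typing rate; real deployments would tune this empirically.
TYPING_SPEED_CPS = 20  # characters per second

def humanlike_delay(message: str, max_delay: float = 3.0) -> float:
    """Return a plausible typing delay (in seconds) for a reply,
    proportional to its length but capped so long replies don't stall."""
    return min(len(message) / TYPING_SPEED_CPS, max_delay)

delay = humanlike_delay("Thanks for reaching out! Here's what we do...")
# A Messenger bot would display the platform's typing indicator for
# `delay` seconds before actually sending the reply.
```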

The chatbot's "emotional" mode featured more subjective, generalizing statements with liberal use of provocative words such as "unfair," "discrimination" and "unjust." The "informational" mode leaned more heavily on facts and statistics.
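The two modes amount to maintaining parallel reply variants and picking the one matching the assigned condition. The example texts and the statistic below are invented for illustration; the study's actual scripts are not reproduced here.

```python
# Invented reply variants keyed by tone mode. Selecting by mode keeps the
# conversation flow identical across conditions; only the wording changes.
MESSAGES = {
    "emotional": (
        "It's deeply unfair that so many women can't get basic care. "
        "Together we can fight this injustice."
    ),
    "informational": (
        "Last year our clinics served thousands of patients, "  # invented
        "and every donation funds additional visits."
    ),
}

def compose_reply(mode: str) -> str:
    """Pick the reply variant matching the assigned experimental mode."""
    return MESSAGES[mode]
```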

Over the course of hundreds of real Facebook interactions, the moderately emotional chatbot achieved the deepest user engagement, defined as a completed conversation. (Completion rate was critical because after the last interaction, users were redirected to a contact/donation form.) But when the emotional level went from moderate to extreme, more users abandoned the interaction.
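The outcome measure here, the share of conversations that reach the final redirect, can be computed per condition as below. The sample records are made up purely to show the calculation.

```python
from collections import defaultdict

# Invented conversation logs: one record per conversation, tagged with its
# tone condition and whether the user reached the final contact/donation
# redirect ("completed").
conversations = [
    {"tone": "informational", "completed": True},
    {"tone": "moderately_emotional", "completed": True},
    {"tone": "moderately_emotional", "completed": True},
    {"tone": "highly_emotional", "completed": False},
]

def completion_rates(records):
    """Fraction of completed conversations, grouped by tone condition."""
    totals, done = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["tone"]] += 1
        done[r["tone"]] += r["completed"]
    return {tone: done[tone] / totals[tone] for tone in totals}

rates = completion_rates(conversations)
```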

The takeaway may be that "there is a sweet spot where some emotion is important, but beyond that emotions can be bad," as Bhattacharya explains.

Pallab Sanyal. Photo provided

When human-like features were layered on top of emotionalism, that sweet spot got even smaller. Anthropomorphism lowered completion rates and reduced the organization's ability to use emotional engagement as a motivational tool.

"In the retail space, studies have shown anthropomorphism to be useful," Bhattacharya says. "But in a nonprofit context, it's totally empathy-driven and less transactional. If that is the case, maybe these human cues coming from a bot make people feel creepy, and they back off."

Sanyal and Bhattacharya say that more customized-chatbot experiments with other nonprofits are in the works, carefully tailored to the success metrics and unique needs of each partner organization.

"Most of the time, we researchers sit in our offices and work on these problems," Sanyal says. "But one aspect of these projects that I really like is that we are learning so much from talking to these people."

In collaboration with the organizations concerned, they are designing chatbots that tailor their persuasive appeals more closely to each context and individual interlocutor. If successful, this approach would show that chatbots can be more than a second-best substitute for a salaried human being. They could serve as interactive workshops for crafting and refining an organization's messaging at a much more granular level than previously possible.

And this would improve the effectiveness of organizational outreach across the board: a consummate example of AI enhancing, rather than displacing, human labor. "This AI is augmenting human functions," says Sanyal. "It's not replacing. Sometimes it's complementing, sometimes it's supplementing. But at the end of the day, it is just augmenting."