In May 2021, Twitter, a platform notorious for abuse and hot-headedness, rolled out a “prompts” feature that suggests users think twice before sending a tweet. The following month, Facebook announced AI “conflict alerts” for groups, so that admins can take action where there may be “contentious or unhealthy conversations taking place.” Email and messaging smart-replies finish billions of sentences for us every day. Amazon’s Halo, launched in 2020, is a fitness band that monitors the tone of your voice. Wellness is no longer just about tracking a heartbeat or counting steps, but about how we come across to those around us. Algorithmic therapeutic tools are being developed to predict and prevent negative behavior.
Jeff Hancock, a professor of communication at Stanford University, defines AI-mediated communication as when “an intelligent agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication goals.” This technology, he says, is already deployed at scale.
Beneath it all is a burgeoning belief that our relationships are just a nudge away from perfection. Since the start of the pandemic, more of our relationships have come to depend on computer-mediated channels. Amid a churning ocean of online spats, toxic Slack messages, and endless Zoom calls, could algorithms help us be nicer to each other? Can an app read our feelings better than we can? Or does outsourcing our communications to AI chip away at what makes a human relationship human?
You could say that Jai Kissoon grew up in the family court system. Or, at least, around it. His mother, Kathleen Kissoon, was a family law attorney, and when he was a teenager he’d hang out at her office in Minneapolis, Minnesota, and help collate documents. This was a time before “fancy copy machines,” and while Kissoon shuffled through the endless stacks of paper that flutter through the corridors of a law firm, he’d overhear stories about the many ways families could fall apart.
In that sense, not much has changed for Kissoon, who is cofounder of OurFamilyWizard, a scheduling and communication tool for divorced and co-parenting couples that launched in 2001. The concept was Kathleen’s; Jai developed the business plan, and OurFamilyWizard initially launched as a website. It soon caught the attention of those working in the legal system, including Judge James Swenson, who ran a pilot program with the platform at the family court in Hennepin County, Minnesota, in 2003. The project took 40 of what Kissoon says were the “most hardcore families,” set them up on the platform, and “they disappeared from the court system.” When someone eventually did end up in court, two years later, it was after a parent had stopped using the platform.
Two decades on, OurFamilyWizard has been used by around a million people and has gained court approval across the US. It launched in the UK in 2015 and in Australia a year later, and it’s now used in 75 countries; similar products include coParenter, Cozi, Amicable, and TalkingParents. Brian Karpf, secretary of the American Bar Association’s Family Law Section, says that many lawyers now recommend co-parenting apps as standard practice, especially when they want to have a “chilling effect” on how a couple communicates. The apps can act as a deterrent to harassment, and their use for communication can be court-ordered.
In a bid to encourage civility, AI has become an increasingly prominent feature. OurFamilyWizard has a “ToneMeter” function that uses sentiment analysis to monitor messages sent on the app, “something to give a yield sign,” says Kissoon. Sentiment analysis is a subset of natural language processing, the computational analysis of human language. Trained on vast text corpora, these algorithms break a message down and score it for sentiment and emotion based on the words and phrases it contains. In the ToneMeter’s case, if an emotionally charged phrase is detected in a message, a set of signal-strength bars goes red and the problem words are flagged. “It’s your fault that we were late,” for example, could be flagged as “aggressive.” Other phrases could be flagged as “humiliating” or “upsetting.” It’s still up to the user whether to hit send.
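OurFamilyWizard hasn’t published how the ToneMeter works under the hood, and production sentiment tools are typically trained classifiers rather than word lists. Still, the core gesture of flagging charged phrases can be sketched in a few lines. The Python below is a toy illustration only; the mini-lexicon, the labels, and the function name are invented for this example, not taken from the product.

```python
# Toy sketch of phrase-level sentiment flagging. NOT ToneMeter's actual
# implementation, which is proprietary; the lexicon and labels below are
# invented for illustration.

# Hypothetical mini-lexicon mapping charged phrases to an emotion label.
CHARGED_PHRASES = {
    "your fault": "aggressive",
    "you always": "aggressive",
    "you never": "aggressive",
    "ridiculous": "humiliating",
    "pathetic": "humiliating",
    "i give up": "upsetting",
}


def flag_message(text: str) -> list[tuple[str, str]]:
    """Return (phrase, label) pairs for each charged phrase found in a draft."""
    lowered = text.lower()
    return [
        (phrase, label)
        for phrase, label in CHARGED_PHRASES.items()
        if phrase in lowered
    ]


if __name__ == "__main__":
    draft = "It's your fault that we were late."
    for phrase, label in flag_message(draft):
        print(f"flagged {phrase!r} as {label}")
    # The tool only warns; the user still decides whether to hit send.
```

A real system would replace the lookup table with a model that scores the whole message in context, but the user-facing contract is the same: warn, label the problem words, and leave the send button alone.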