A.I. scans clients' faces for fraud
In a since-deleted tweet posted Monday from its corporate Twitter account, Lemonade described using A.I. to “pick up non-verbal cues” in videos that customers are required to provide, which the company said help it detect signs of insurance fraud. As part of Lemonade’s insurance claims process, users send the company videos of themselves “explaining what happened,” the company said in its original tweet.
The tweet was part of a larger thread in which Lemonade discussed using data science to lower its loss ratio, referring to how much money it pays out in claims versus how much it brings in.
A number of Twitter users criticized the company’s post, saying it appeared as if Lemonade uses A.I. to determine a person’s emotional state or to read their facial movements to judge whether they're more likely to commit fraud. Some users likened this technique to phrenology, a long-dismissed theory that measuring bumps on a person’s head can shed light on their overall personality and behavior.
Amid the uproar, Lemonade published a blog post on Wednesday walking back its earlier statements about A.I., saying it “does not use, and we’re not trying to build, AI that uses physical or personal features to deny claims.”
The company went on to deny using emotion recognition technologies or using A.I. to “automatically decline claims.” It added that “harmful concepts like phrenology and physiognomy has never, and will never, be used at Lemonade.”
Lemonade said that the “term non-verbal cues was a bad choice of words to describe the facial recognition technology” it uses to “flag claims submitted by the same person under different identities.” “These flagged claims then get reviewed by our human investigators,” the company added.
"It was wrong of us to write that in the first place," a Lemonade spokesperson told Fortune. The spokesperson said that its use of facial recognition was "described accurately" in an older blog post.
Jonathan Vanian