Mark Zuckerberg and Facebook are in quite a pickle.
The Cambridge Analytica scandal that has roiled the country this week has exposed a basic truth: privacy and Facebook’s business model are at odds. Amp up the privacy and data protections, and the social network that fuels Facebook’s growth is stanched. Pump up the social network, and privacy and data security are compromised. In 2014, with its Facebook Connect initiative, Facebook clearly decided to pump up the social networking on the platform. And that’s how it got to Cambridge Analytica.
Which brings us to auto finance and fintech broadly.
Auto finance and fintech have pursued artificial intelligence for years now. The promise of AI is that it serves customers and lenders in an automated fashion and at a lower cost.
But what is AI, exactly? Simply put, it is the use of data en masse to extract meaning. On the surface, that meaning should be of value to consumers.
There are examples of auto finance companies putting themselves into position to extract voluminous data on a wider range of behaviors. Both Bank of America and JPMorgan Chase & Co. are not just lending money for the purchase of cars, but literally facilitating their purchase (see here and here). Such endeavors potentially expose these banks to a far wider range of data that goes to motive, behavior, and preference within the car purchase, not just the borrowing for that car.
But AI can go too far, extracting too much meaning from data. In the case of Cambridge Analytica, the firm employed during the election what it calls “behavioral microtargeting,” which analyzed data to predict the behavior, interests, and opinions held by specific groups of people and then served them messages they were most likely to respond to. But all this inference can get too personal and too uncomfortable, as the public has discovered over the last few days. And both the efficacy and the ethics of this type of microtargeting are disputed by the wider data science community. In auto finance, for example, will consumers be OK if a lender deduces that a borrower who takes an Uber more than 15 miles to a pawnshop is likely to default on his auto loan?
And that is really the message from Zuckerberg’s squirming this week. At the heart of this Cambridge Analytica matter is the uncomfortable, ethically confusing reality of too much data and a lack of privacy protections surrounding it. In financial services, the drive over the last couple of years has been to unlock more and more data. A sister website to the Center, Bank Innovation, recently held its annual fintech conference in San Francisco, and much of the discussion there centered on how financial institutions can unlock and harness more data. The who, what, where, why, how, with whom, and for how much of not just every transaction by every consumer, but of every enterprise a consumer engages with, is becoming increasingly available to financial institutions for analysis, cross-selling, risk management, and customer service. Is there good to come out of these massive volumes of data? For sure. But as Zuckerberg has discovered, there is also downright evil.
Even in auto finance, a quiet corner of the global economy to be sure, that evil may still come. It is advisable for the auto finance community to be aware of the potential downsides of massive data. Before it gets messy.
Join us to learn more about operational excellence and compliance at the Auto Finance Performance & Compliance Summit, May 9-10 in Dallas.