<ul class="font_8"> <li> <div class="font_8">Analyzing consumers' digital footprints to determine creditworthiness</div></li> <li> <div class="font_8">New modeling techniques for fraud prevention</div></li> <li> <div class="font_8">Chatbots, CRMs and other AI-powered tools to improve the customer experience</div></li> </ul> [toggle title="TRANSCRIPT"] <div class="transcript-scroll-box"> 00:00 Next we'll be hearing from Daniel. Daniel is the founder and CEO at Tricolor Auto Group, which is a niche subprime lender that serves no-file Hispanic consumers. Artificial intelligence and machine learning play an important role in Tricolor's operations, because it doesn't rely on the credit bureau data that is traditionally used to measure the riskiness of a given borrower. During today's presentation, Daniel will explore the different ways lenders can leverage AI. Please remember to send any questions you might have through the mobile app. And with that, I will hand it over to Daniel. Please join me in giving him a warm welcome. 00:42 Thank you. Thank you. So as part of our model, we operate in a retail business, and so usually I find myself presenting at auto conferences, and this is a little bit of a unique opportunity. Usually at auto conferences the general reaction from the room is, "It's the used car business, am I really going to come out of here with any takeaways that are really meaningful?" But on a technology topic, I can tell you that my experience is actually a little bit more limited. We've been implementing artificial intelligence and machine learning in our business model, beginning with a testing phase, for about two years now. But we're still just at the early stages of recognizing, across all the processes in our business model, how powerful it can be. So I'd like to share with you a little bit about what those actual practical applications are. 
And I'll really cover the approach, how we got to the point where we could implement artificial intelligence and machine learning, and then a little bit of a view on the early results that we've been able to see. In order to do that, I'd like to give you a little background on our business model. We operate what we would call an integrated model, auto retail plus consumer financing, like DriveTime or Car-Mart in the public sector. And while we fall into the category of what is typically referred to as a buy here, pay here model, we have developed an approach that we feel is very differentiated. And really the core differentiator is the fact that we are financing a consumer with no credit, versus impaired credit. So just a little snapshot to give you some background on our business: we've been around for going on 12 years now. Over that period of time we've sold around 50,000 vehicles and originated about a billion dollars in loans, so not the largest-volume originator, but definitely growing and definitely expanding in our region. We now operate about 36 dealerships across 14 markets. We're in Texas and Southern California, pretty saturated in Texas, probably in about a dozen markets there. We're newer to California, predominantly Southern California. But really, what has distinguished our business among auto lenders and auto retailers engaged in this integrated model is the fact that we've been able to execute and complete five term securitizations. So that capital markets access gives us a platform to talk about, first, how we've been able to segment a target consumer that does not exist in the bureau, and then to follow along with that and explore how we can apply artificial intelligence and machine learning to better segment, or at least provide a boost to our ability to segment, that consumer. 
And so when we describe our customer, we would say that if our customer goes anywhere else in the US, they would be subject to financing terms that would, in our opinion, feel unreasonable given their lack of a credit file. However, if that customer comes to us, we have developed a segmentation model that allows us to segment that customer, who does not exist in the bureau, into six very distinct grades; we go from A+ through E. And we often say we have the ability to score the unscorable. Most importantly, what we can do, within this universe of borrowers that all really look the same to any other lender, is determine, among that universe of unattractive potential consumers, which ones would be low-risk borrowers versus which ones would be higher risk. And so for the borrowers that demonstrate to us the attributes that correlate with lower risk, we can in turn offer those consumers very, very attractive financing, financing that's attractive enough that our company has been positioned to some degree as an inclusion model. And so when you look at our business and how we have applied artificial intelligence and machine learning, we're not a technology company; rather, we really have relied on technology to enable our ability to deliver a value proposition to our consumer, in order to execute this integrated retail finance model in the most efficient and sound manner possible. So this ability to segment this consumer gives us a tremendous tool that not only drives our underwriting processes, but drives our marketing, drives our pricing; it really has permeated all the major processes in our business. And we would point to the fact that we've been able to execute five securitizations in the capital markets as some validation. 
So while DriveTime and our company are really the only two companies in this buy here, pay here segment, which is, you know, arguably serving the bottom of the food chain and the bottom credit tier in consumer finance as it relates to auto, DriveTime and Tricolor are the only two that have consistently accessed the capital markets over the last five years. We have become the first, and really the only, lender in all of subprime auto finance to successfully issue a bond that's investment grade rated where the underlying borrowers are no-file, effectively undocumented Hispanic immigrants who do not exist in the bureau. So we take that validation and we say, how can we expand, or how can we deepen, our ability to actually offer this customer real value? And the application of artificial intelligence and machine learning in our business, first as it relates to underwriting, is really where we've seen the most promise. So why does artificial intelligence work for us? Well, first of all, obviously, most of our borrowers have no credit score. As a result, there really is no sufficient third-party data available for our customer. We can get a name and address match in LexisNexis. There are some other alternative bureaus that track utility payments and other services, which we do access and which have some overlap with our customer base. But generally what we're relying on is collecting data in the application process directly with the consumer, face to face, in the retail auto dealerships that we operate. So we're currently collecting over 90 of what we call application attributes. These are attributes that we gather from the customer and then, to the extent we need to, verify in our back-office operations, which are managed out of Guadalajara, Mexico. 
And so what we have learned as we've tested machine learning is that it's powerful because of the number of non-traditional attributes that we're trying to capture and integrate into that decision-making process. It's complicated enough that what I've also learned is that traditional regression software does not segment as effectively as it potentially could in terms of grading this consumer. So for us it has particular value, because we don't have the benefit of relying on a credit bureau. About two years ago, we had some talented data scientists, who have continued to evolve and refine our model over time, and those data scientists had some exposure to what machine learning can potentially do. So we began a process a couple of years ago of testing some different machine learning algorithms against our performance data, not only our data on borrowers generally, but on borrowers who had been in our portfolio for a minimum of 12 months, so that we actually had some performance data to tie to. And so we felt like two years ago we were kind of on the bleeding edge of the application of this technology. What we've really learned is that the use of artificial intelligence, across financial services and across all industries, has exploded. So if you look at everything from VC funding, to the number of startups in artificial intelligence and machine learning, to the number of times it's simply mentioned by companies today who want to position themselves as businesses utilizing leading-edge technologies, to just the pure growth in the number of patents connected to some AI technology, you can see that the application of this technology is not only real, but it is exploding across all industries in the US and globally. 
And so, getting into our application: again, we started by exploring the value of machine learning and artificial intelligence in our risk processes. So, just to take you a little bit through that, what we really believe now is that machine learning clearly does offer tremendous predictive power in our ability to segment this consumer. We look at it in three components, and we call the first one structure. Unlike a typical regression model, which is linear in nature, these artificial intelligence and machine learning models are neural networks, so they are at least loosely replicating the processing of a human brain. What we have been able to understand is that by applying these modeling techniques, these neural networks are able to take these 90 non-traditional loan attributes that we are collecting, gathering, and in most cases verifying for every potential borrower, and they are able to identify patterns that we were unable to identify using traditional regression software. So, for sure, doing simple regression analysis got us to a certain point: a segmentation model, and ultimately a validation from capital markets investors. But what we're seeing now is the ability to really capitalize on that early development, on the idea that we could segment this consumer, and with some degree of granularity continue to improve. And so, on the whole idea of using more data in our modeling: at the end of the day, our regression model takes those 90 variables and boils them down to about 10 attributes that the traditional regression algorithm weighs in our segmentation model. The machine learning model that we use today actually uses all 90 of those attributes. And so people always ask us, okay, how does that fit in with the model that you already have in place? 
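To make the contrast concrete, here is a minimal sketch, in plain Python, of the difference described above: a traditional score as a weighted sum over roughly ten hand-picked attributes, versus a small neural network that consumes all 90. This is not Tricolor's actual model; every weight, shape, and name below is invented for illustration.

```python
import math
import random

random.seed(0)

def linear_score(attrs10, weights, bias=0.0):
    """Traditional regression-style score: a weighted sum over the
    ~10 attributes the legacy model keeps (a purely linear relationship)."""
    return sum(a * w for a, w in zip(attrs10, weights)) + bias

def neural_net_score(attrs90, w1, b1, w2, b2):
    """Minimal feed-forward network over all 90 application attributes;
    the non-linear hidden layer can pick up interaction patterns that a
    linear model cannot."""
    hidden = [max(0.0, sum(a * w for a, w in zip(attrs90, col)) + b)
              for col, b in zip(w1, b1)]              # ReLU hidden layer
    logit = sum(h * w for h, w in zip(hidden, w2)) + b2
    return 1.0 / (1.0 + math.exp(-logit))             # probability of "bad"

# Hypothetical shapes: 90 inputs -> 8 hidden units -> 1 output.
w1 = [[random.gauss(0, 0.1) for _ in range(90)] for _ in range(8)]
b1 = [0.0] * 8
w2 = [random.gauss(0, 0.1) for _ in range(8)]
b2 = 0.0

applicant = [random.gauss(0, 1) for _ in range(90)]   # one applicant's attributes
p_bad = neural_net_score(applicant, w1, b1, w2, b2)
```

In a real implementation the weights would of course be learned from labeled loan outcomes rather than drawn at random; the sketch only shows why the network can "see" all 90 attributes while the linear model weighs about ten.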
So we are collecting data from two sources. We are going to the credit bureau to see if there is a hit: if our customer gives us a Social Security number, and it's less than half, but a portion of our borrowers do have a Social Security number, or someone in their household does, then we can check credit. So we typically start with two primary sources of data: the third-party data, to the extent the customer is a hit in the bureau, and then all of that application data that we're collecting on every potential borrower. Those two sets of data are fed both into our traditional regression model and into our AI risk model. So we are pushing that data, from one or, in the case of a hit, two sources, into both models in parallel, and then we have developed a technology to take those two independent scores and, almost in a matrix-type format, develop a score that merges both our regression model and our AI risk model. So many people think, okay, the machine learning model really sits on top of the regression model to simply validate it, or to provide some gradient boost within a given segment. What we're actually doing is independently scoring that data in parallel through two different models and taking those outcomes and creating one score. And so if you were to scatterplot those results, with our internal score, the output generated from the merging of the AI risk model and our traditional regression model, on the x-axis, and default, going from low to high, on the y-axis, you could see roughly how those six different grades would appear across that graph. What machine learning was able to tell us is that there were many, many outliers in what we believed was a fairly clean segmentation model. 
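The parallel-score-then-merge flow described above might be sketched like this. The two model stubs and the merge-matrix values are assumptions invented for the sketch; the only points taken from the talk are that seven regression grades and ten AI grades are combined via a matrix-style lookup, rather than one model simply boosting or re-ranking the other.

```python
# Hypothetical stand-ins for the two production models: each consumes the
# same application (plus bureau data when there is a hit) independently.
def regression_grade(application):
    ...  # would return one of "A+", "A", "B", "C+", "C", "D", "E"

def ai_grade(application):
    ...  # would return an integer bucket 0 (best) through 9 (worst)

REG_GRADES = ["A+", "A", "B", "C+", "C", "D", "E"]

def merged_score(reg_grade, ai_bucket, matrix):
    """Matrix-style merge: the final internal score is looked up from the
    cell at (regression-grade row, AI-grade column)."""
    return matrix[REG_GRADES.index(reg_grade)][ai_bucket]

# Illustrative merge matrix: cells hold a final score where lower means
# lower expected default (all values invented for the sketch).
matrix = [[row * 10 + col for col in range(10)] for row in range(7)]

# An "A+" applicant in AI bucket 0 lands in the best cell ...
best = merged_score("A+", 0, matrix)
# ... while the same regression grade in AI bucket 6 lands much deeper:
# the kind of outlier the speaker says the AI model surfaced.
outlier = merged_score("A+", 6, matrix)
```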
Why did we believe that it was clean? Because when we graphed the cumulative net loss curves of each grade, we were always very satisfied, and to a large degree encouraged, by the fact that those cumulative net loss curves were distinct and certainly didn't overlap; there was nice separation between each curve by grade. What we learned when we took a more granular look, having the benefit of the AI model, was that there were a lot of outliers really falling into grades where they either seemed unattractive or actually really strong, but had attributes that we now group into what we call kind of a bad actor category. And so, backing up, our approach to the implementation of that machine learning model really followed three steps. The first step was taking all of that application data and developing a number of different what we call features; features are really patterns, or scenarios, in the data within those 90 attributes. Then we take that and build a model, using the fact that we have the outcomes of seasoned borrowers in our portfolio who have had at least, say, 12 months of performance. So now we can take all of those patterns and analyze them against an outcome, a binary outcome of good or bad in terms of their performance as a borrower. And then the hybrid model, again, is how the two come together. I'm not able to get into a technical discussion, unfortunately. But I will tell you that what we do with our data scientists is access a lot of code that is readily available. Some of it is open source; I think TensorFlow is the Google library that's open source. We're able to access code that already gives us a head start on being able to apply these AI algorithms. 
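The second step described above, labeling seasoned borrowers with a binary good/bad outcome, could look roughly like the following sketch. The 90-days-past-due cutoff is the definition the speaker gives later in the Q&A; the data layout, field names, and helper functions are invented for illustration.

```python
from datetime import date

SEASONING_MONTHS = 12  # only loans with >= 12 months of history get a label
BAD_DPD = 90           # per the talk: ever 90+ days past due means "bad"

def months_on_book(originated, as_of):
    """Whole months a loan has been in the portfolio."""
    return (as_of.year - originated.year) * 12 + (as_of.month - originated.month)

def label_outcomes(loans, as_of):
    """Binary good/bad labels for model training: keep only seasoned
    loans, and mark a loan bad (1) if it ever reached 90+ days past due."""
    labels = {}
    for loan in loans:
        if months_on_book(loan["originated"], as_of) >= SEASONING_MONTHS:
            labels[loan["id"]] = 1 if loan["max_dpd"] >= BAD_DPD else 0
    return labels

# Invented example portfolio.
loans = [
    {"id": "L1", "originated": date(2017, 1, 15), "max_dpd": 120},
    {"id": "L2", "originated": date(2017, 3, 1),  "max_dpd": 30},
    {"id": "L3", "originated": date(2018, 6, 1),  "max_dpd": 95},  # too new to label
]
labels = label_outcomes(loans, as_of=date(2018, 9, 1))
```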
And then the data scientist that we've engaged to actually build these models writes his own code, which augments the code that we were able to access through TensorFlow, and that provides the code that drives the algorithm and the decisioning process. So how does that look in terms of results for us? The bottom line is, in the early results, call it the first nine months, we've seen losses reduced by close to 3%. We've seen conversion increase by about 13 or 14%. If we were only to pursue optimization of one of those two metrics, we would actually see better results. In other words, if we were satisfied with losses at the previous levels, we actually think we could see a conversion lift of about 20%. Conversely, if we were satisfied with our current conversion rate of applicants who close, we actually would see a net loss reduction that was even more attractive. So because we are trying to optimize around both cumulative net loss and conversion, we end up with a less material impact on each, but still, for us, a pretty attractive result for this kind of dispersed rollout test. The chart that you see on the right addresses both of those metrics that we're really tracking. Just to explain it: on the left vertical, you have the six grades, actually seven here, because we've taken C and broken it into C+ and C, so the seven grades that are generated from our traditional regression model. On the top, running left to right, you have the 10 different grades that are generated from the AI model. And so here's where that matrix effect takes place. What you can see is that the top left is the most desirable borrower and the bottom right, obviously, the least desirable. On our A+ loans we have a gross lifetime default rate of 24%. 
But within that 24%, you can see on the far left that, for the category combining a zero score on the AI model with our A+ grade, it goes as low as 11%, and it goes as high, under grade six, as 42%. So there is a lot of variance that the AI model has been able to identify just within that A+ grade with which we were previously extremely satisfied. And if you look all the way at the bottom at the averages, the fact that on the loss rates you go from 11% on the left side all the way to a number that's over 70% on the right shows the predictive power of these machine learning algorithms. To the extent that range was very tight, the model would not actually be very predictive; to the extent the range is wide, that speaks to the predictive power. Similarly with conversion rate. And so our data scientists, and again, I'm presenting this to show you the application and then ultimately how a data scientist would measure it: when data scientists talk about the segmentation power of a model, the predictive power of a model, you often hear them talk about a KS factor or a Gini coefficient. I think they're really effectively the same calculation, with the same inputs. But a Gini coefficient is a number between zero and one, expressed as a percentage. Our traditional regression model had a Gini coefficient of somewhere in the low 60s; I think we put 65% here, but I think it was hovering in the low 60s. And the way they calculate the Gini coefficient: that straight line that you see on the graph would just be a random model. In other words, for every percentage of borrowers there'd be a corresponding equal percentage of defaults. You want the curve to create as much area as you can; in other words, you want to maximize that area, A. 
And I think when they measure the Gini coefficient, it's actually that area A times two that is the Gini coefficient number. So, just to give you a comparative basis: our prior, simpler model was at 65. It's now at almost 80%, which, with the use of machine learning, would be considered almost best in class in subprime auto. And so, again, going back to why this works for our business in theory: no credit bureau; 90 application data points, or attributes, that we collect; applying a neural network to identify the patterns in those 90 attributes; validated now by the fact that the Gini coefficient is what at least our data scientists would say is best in class in terms of what we've been able to achieve. So that would be, in short form, our application of machine learning, machine learning being, you know, a subset of artificial intelligence, as it relates to underwriting. What we started to do, beginning about seven or eight months ago, is examine every process in what we would call our vertically integrated business. Our business model starts with the sourcing and purchasing of a used motor vehicle, for us a pretty late-model vehicle, so an episode begins with our supply chain and continues through what we would say is our retail process: the marketing, the lead generation, ultimately selling, underwriting, originating the loan, and then servicing the loan. So if you look at these core processes in our business, what we did about six or seven months ago was try to identify what makes sense as a natural AI initiative that we could apply to increase operating efficiency and leverage across our entire model. And so, just to give you an idea, I'll show you a matrix of all the ideas we came up with. The ones shaded in green are the ones we currently have in progress. 
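The Gini arithmetic described above, twice the area A between the model's curve and the random-model diagonal, which works out to 2 × AUC − 1, can be checked with a small sketch. The risk scores below are made up purely to exercise the formula and are not Tricolor's figures.

```python
def auc(scores_bad, scores_good):
    """Probability that a randomly chosen bad loan scores higher
    (riskier) than a randomly chosen good one, counting ties as half."""
    wins = sum((b > g) + 0.5 * (b == g)
               for b in scores_bad for g in scores_good)
    return wins / (len(scores_bad) * len(scores_good))

def gini(scores_bad, scores_good):
    """Gini coefficient as the speaker describes it: twice the area A
    between the capture curve and the diagonal, i.e. 2 * AUC - 1."""
    return 2.0 * auc(scores_bad, scores_good) - 1.0

# Toy risk scores (higher = predicted riskier); invented for the sketch.
bad  = [0.90, 0.80, 0.70, 0.65]
good = [0.60, 0.40, 0.30, 0.20, 0.10]

g = gini(bad, good)   # every bad outranks every good: perfect separation
```

A model that cannot separate the classes at all gives an AUC of 0.5 and therefore a Gini of 0, matching the "random model" diagonal in the talk; a Gini in the high 70s means the curve bows far above that diagonal.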
And so in supply chain, again, we're using the fact that these neural networks can process more data and identify more patterns. A quick anecdote: for our consumer in Texas, call it 70% of our sales in Texas are large trucks and SUVs, because of our focus on that Hispanic consumer, and we have to buy those vehicles literally one at a time all across the country. We have a test in place that takes an enormous amount of data from the Manheim and ADESA auctions, as well as Hertz, Enterprise, some major rental car companies, and regional sources, sorts it, and matches those patterns against the different patterns that our retail stores have demonstrated in terms of inventory turn across these different models and body types, all the way down to kind of a unit level. And we've been able to develop the ability to add some predictability to our inventory planning, and also a little bit more efficiency in our supply chain purchasing, so that process has, to a large degree, already been validated; it works. We haven't really done much yet with appointment setting, although we do use AI in our servicing. We have a lot of data in our loan servicing platform related to when we reached customers, when we didn't reach customers, and when we left messages. We now deploy a messaging platform, I think it's called Live Assistance, that allows us to communicate with our customer through one single platform, whether they prefer Facebook Messenger, WhatsApp, iMessage, SMS, etc. 
So we're using AI to actually optimize our call center and identify the best times to reach the consumer, when they're active on social media, so that we can reach them through Facebook Messenger, for example, if that's their preferred channel. So we do have an initiative in our call center: over the next three years, to double our portfolio size without increasing headcount in our call center, and I think we've had encouraging results that the application of machine learning will create value for us there. Then natural language processing: there are actually a lot of applications for that, but the ones that we have been using are really around problem and complaint resolution, customer care, and customer advocacy: using machine learning to identify when a customer call is going to escalate into a complaint, being proactive in addressing customer issues and complaints, reading language patterns and actual keywords, distinguishing between the call agent and the customer, etc. Some pretty interesting stuff there. And Don, who came into our business as our CTO and now serves as our president, is actually here, and he has a lot of experience in that area, among others. We already talked, as it relates to risk and loan originations, about the predictive power that is gained through the application. We are experimenting and beginning to understand how we might apply it in our sales process, where we are currently doing a lot of things around customer care and customer advocacy in our call center, kind of under the topic of what we would call collaborative systems. Again, it's just a recognition of patterns across large data sets, optimizing some aspects of our sales process. And then we currently do some things in partnership with our GPS provider. 
There are obviously privacy laws that create some limitations around that data. For example, we now use a GPS technology that alerts us when there is an accident. And we're actually taking some of that data, going back, and seeing what patterns exist among customers who end up having chronic problems with accidents, impacts, maintenance issues, and so forth. So there's some opportunity there. And then lastly, this whole idea of crowdsourcing: we do have an initiative right now. We have a good partnership with AutoZone; we have AutoZone parts centers in all our reconditioning centers that serve the greater chain of retail dealerships. But inside those dealerships, we're sourcing all new parts. So we're testing this crowdsourcing technology that can source parts for our body shop from recyclers versus factory parts versus parts through AutoZone versus other vendors and providers. And through machine learning, we've already achieved some cost efficiencies in those processes. Of the, call it, $1,800 that we spend reconditioning every vehicle that we purchase before we sell it, the parts component is about half of that, or $900, and we think we can make a meaningful dent. One thing that we have done, just as an aside, as we try to capture a lot of these cost efficiencies: if you look at the overall value we provide the customer, we believe we provide tremendous value, because the margin we're selling at in our retail business is about half of our competitors', and we're financing at an interest rate that's less than half of our competitors'. We've been able to do that because applying technology to our underwriting and risk process has allowed us to access capital markets financing, drive down our cost of funds, and pass that along to the borrower. 
So we're financing no-file consumers at rates as low as 9 to 12%, interest rates that are really unheard of in our part of the industry, call it deep subprime. Similarly, as we see results from a lot of these initiatives, we believe we will take a lot of those cost efficiencies and in turn pass them along to deliver more value for our customers. So, beginning with risk, and then trying to give you a little bit of a sense of where we see opportunities for the technology across all our processes, this would be at least a good general view of what we see as really the power of that technology. I think that's what we're talking about right there. 35:19 Thank you so much for the presentation. I just wanted to start off the Q&A session by asking a couple of follow-up questions. The first one: referring to the table you were just showing, where you lay out some potential areas and applications for AI in your business's operations, what might be the next area or application of the business where you intend to integrate AI? 35:46 So our biggest focus, now that we feel like we have a direction in terms of our risk processes, would be our call center. Our call centers accomplish multiple purposes: initial customer care, customer service, and customer advocacy, going into and then within loan servicing, and then also marketing, both inbound and outbound. There are a lot of opportunities, and there are actually a lot of machine learning technologies currently available, to really improve efficiency and cost savings there. I would say that's the biggest opportunity for us. 36:28 And what about some of those white boxes that you haven't started to implement or integrate yet? 
36:34 So we are continuing to try to prioritize; we're trying to make some decisions internally on really how much investment we're willing to make in some of those technologies. We obviously have some limitations in just being able to really implement those. And so as we prioritize, I think we'll end up both identifying some options, or some new boxes, but also probably deferring some. 37:05 And so what is the newest application of AI in your operations? 37:11 The application we talked about last, where we apply this crowdsourcing concept, is really powerful, and we liked the early results that we've seen there, because for us as a lender the underlying collateral is just so vitally important. That's really the strength of what we think our business model capitalizes on: the fact that we operate this direct-to-consumer brand. So as a direct-to-consumer brand, a financing company that actually has a brand with the consumer, it's really important for us that the underlying collateral performs. We really think that machine learning technology will enable us to make tremendous improvements in the overall quality of the product that we offer. 38:07 We have some questions from the audience here. So, what is your definition of default when building out your models? 38:16 So default, in our case: we actually use a little bit of a conservative definition, but it is binary, it's either good or bad. And we consider bad any loan that goes more than 90 days past due. So as soon as it hits 90 days past due, it's considered a bad for the purposes of default. 38:43 And in the part of the presentation where you compared the traditional credit scores and AI model scores, what drives their deviating results? Can you isolate the data points within the AI risk scores that drove that difference? 
38:58 So there are some data points; a little bit of that is our secret sauce. But, for example, the drivers among what we would call our non-traditional attributes that we're capturing all ultimately relate to the stability of the borrower. So, for example, to the extent they've changed jobs, we're going to measure the amount of time between when they left one job and when they started a new job. Most of our customers, to the extent they were ever, say, terminated in the morning, have a job in the afternoon, as opposed to some very specialized trade where, if they get laid off and the economy's just okay, it takes them a long time to get a new job. So there are subtle data points like that that we capture, but ultimately they all relate to stability. 40:04 And how has AI helped with Tricolor's cost of operations? Does more robots equal less people? 40:11 Sure. So for sure, all of our initiatives have some target outcome in terms of either headcount reduction or growth with minimal increase in headcount. We wouldn't say robots, we would just say the technology and the efficiencies it provides. But if there was any takeaway that I would want to impart, you know, to this group, it would be that we should all be investing in and exploring ways to gain efficiencies in our business processes through machine learning, and I'd point to the fact that our business, which is not a technology business, just one that has observed the upside of being tech enabled, has already seen tremendous cost savings; a loan loss reduction of 3% is a material outcome for us. 41:18 What feedback have you received from the CFPB or other regulators in regard to the AI/ML model you have developed? 41:27 Right. So, most people in finance, and especially consumer finance, are familiar with Hudson Cook. 
They seem to serve almost every lender in our space. We went through a pretty extensive process with them prior to really launching any testing or development of an AI model. And I think one of the important guidelines that they gave us was that, to the extent that we knew that these 90 attributes we were collecting were all compliant and permitted under the various statutes, we should not go outside of that universe of data in terms of what we are allowing this neural network to use in identifying these patterns. So the idea of using machine learning and then opening up a lot of alternative databases, without really vetting some of the specific data fields those databases might include, was flagged to us as an area of risk. But the fact that we had already vetted those attributes with Hudson Cook, and concluded that collecting that data was compliant, meant all we were doing was applying technology to that same data set to identify the patterns, and I think you can then have a satisfying answer for the CFPB around that. 43:25 Right. How long does that process take, some of your integrations? 43:28 So I think a little over four months, but they did some other work, split off with our data scientist who's working on it, just to understand the process that would ultimately lead to generating the model. 43:52 Are there applications for your AI-enhanced model to serve other underserved demographic segments? 44:00 So, we do hope so, and we're trying to be somewhat proactive in being involved with other organizations that serve the low-income consumer, the underserved consumer, not just Hispanic but an even broader category. And we actually have had some conversations about some collaboration. 
But I think, again, the message is that if there's an opportunity for companies to leverage technology to really remove what we think is a very low ceiling on access to financial services for that borrower, there is an opportunity with machine learning to really lift that ceiling for a lot of the borrowers who currently can't access traditional credit. Because with a segmentation model, with the identification of low-risk, call it no-file or thin-file consumers, I think there's... 45:02 How do you separate economic impact from model impact when evaluating loan loss reduction? 45:11 So, 45:13 when we look at loan loss reduction, we are trying to achieve loan loss reduction, as I mentioned, 45:24 in parallel with 45:27 conversion. So it's a good question; for us, they really go hand in hand. 45:35 We have discussed actually just trying to use machine learning as a tool only to increase conversion, and we continue to try to explore and discuss that, but I think they really go hand in hand for us. 45:54 And among your AI ideas, why did you not pursue the driving pattern idea? What benefit would you have expected from that feature? 46:04 So we didn't pursue driving patterns simply because 46:08 we were advised that they potentially would raise some concerns around privacy, consumer 46:19 interest. An integral part of our model is a concept we call constant contact: we are always, through different messaging platforms, in contact with our consumer. But we 46:32 did not want to 46:35 incur the risk of opening up the data related to, for example, that GPS device to any 46:45 populations that would potentially, you know, violate privacy concerns. 46:52 Right. There's another question here in regard to that GPS device. Are you pulling in GPS data? 47:03 We currently don't. 47:06 We did some research on that, 47:12 and it looked like it would work.
But again, because of the privacy concerns, we didn't pursue it. Obviously, you now have that ability through GPS technology; it used to be that GPS devices would ping a vehicle maybe once every 24 hours, and now some of them can give you constant tracking on the vehicle. So obviously, if there's a car moving around late at night, you would be less likely to extend credit to that individual than one whose car was parked at their house from, you know, say, 8 p.m. And so, you know, using it for behavioral scores, for future financings, for offering other products would work, but 48:02 it just seems a little bit 48:04 like a danger zone. 48:08 And so how did the company score risk before you integrated machine learning into your current process? 48:15 So 48:16 it was an algorithm that was based on a set of linear relationships between 48:25 data points. And as a result of that, because it was just the additive effect of all these linear relationships, it did create those fairly dramatic outliers. But it really was a traditional risk model, still relying on attributes that we would say are unique to an auto finance underwriting process. So, you know, obviously the key data points tied to the risk: type of employment, and so on. But again, we've identified other attributes that we know correspond with a borrower's stability. At the end of the day, that's really what our model is measuring. It's just a stability measure. 49:15 And the last question here: we've spoken a lot about the different opportunity areas where we can apply AI in auto finance today. Is there any area of the business that you would say is not ready for AI, that you've seen people or companies in auto finance attempt? 49:34 So we have passed on some initiatives, 49:41 because it's so 49:44 enticing to be able to leverage a tool like a neural network to process an enormous amount of data.
We have seen some 49:57 proposals and 49:58 some ideas from companies that, 50:01 again, I think 50:03 I would characterize as reckless, potentially pulling in data that would create some issues from a regulatory standpoint. So 50:16 I guess that would be an example. 50:19 Since those pain points kind of came from a regulatory standpoint, do you think the industry will ever be ready for those types of applications, or not so much? I think 50:30 so. 50:32 With the application of machine learning, I think we have a fairly good sense of where it is in the industry right now in its lifecycle. I think it's so early that, yes, I think those possibilities would definitely exist, but 50:49 there seems to be a lot of 50:55 firms like Hudson Cook being proactive in terms of 51:01 advising companies to tread slowly in terms of the implementation. I think it's good advice. But I think, over time, you know, companies will carve out a compliant path to using more of the technology. 51:18 Okay, great. 51:18 Well, we're definitely out of time here, so I just want to thank Daniel again for the wonderful presentation. </div> [/toggle]
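The employment-gap stability attribute described in the talk (measuring the time between leaving one job and starting the next) can be sketched roughly as follows. The function name, data layout, and example history are illustrative assumptions, not Tricolor's actual model inputs:

```python
from datetime import date

def employment_gap_days(jobs):
    """Given employment spells as (start_date, end_date) tuples sorted by
    start date, return the gap in days between each job's end and the next
    job's start. Near-zero gaps suggest the 'terminated in the morning,
    working in the afternoon' stability pattern from the talk."""
    gaps = []
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        # Clamp overlapping spells to a zero-day gap.
        gaps.append(max((next_start - prev_end).days, 0))
    return gaps

# Hypothetical borrower: left one job June 30, started the next July 1.
history = [(date(2018, 1, 1), date(2019, 6, 30)),
           (date(2019, 7, 1), date(2021, 3, 15))]
print(employment_gap_days(history))  # → [1]
```

A feature like this would feed the model alongside the other non-traditional attributes, all of which, per the talk, reduce to a stability measure.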
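The pre-ML risk model described in the Q&A, the additive effect of a set of linear relationships between data points, can be illustrated with a toy score. The attributes and weights below are invented for illustration; the point is that with no interaction terms, one extreme input can dominate a purely additive sum and produce the "fairly dramatic outliers" mentioned:

```python
# Illustrative weights only; not the actual underwriting model.
WEIGHTS = {
    "months_at_job": -0.5,     # longer tenure lowers the risk score
    "job_gap_days": 0.2,       # longer gaps between jobs raise it
    "down_payment_pct": -1.0,  # larger down payment lowers it
}

def linear_risk_score(attrs):
    """Sum of independent weighted linear terms: each attribute
    contributes on its own, with no interactions between them."""
    return sum(WEIGHTS[k] * v for k, v in attrs.items())

typical = {"months_at_job": 24, "job_gap_days": 10, "down_payment_pct": 10}
outlier = {"months_at_job": 24, "job_gap_days": 400, "down_payment_pct": 10}
print(linear_risk_score(typical))  # → -20.0
print(linear_risk_score(outlier))  # → 58.0  (one extreme field dominates)
```

A neural network trained on the same vetted attribute set can instead learn non-linear interactions, which is consistent with the guidance in the talk about staying inside the already-compliant universe of data.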