How AI (Like ChatGPT) Is Disrupting Finance and Reshaping Investment Decisions with Joyce Li
In this episode of Future Finance, hosts Paul Barnhurst (aka The FP&A Guy) and Glenn Hopper welcome Joyce Li, a seasoned investor, AI strategist, and entrepreneur. Joyce shares her fascinating journey from being a CFA and managing multibillion-dollar investments to transitioning into AI and tech strategy. With nearly two decades of experience, Joyce now helps board directors and business leaders develop AI strategies and governance frameworks.
Joyce Li is a seasoned investor, entrepreneur, and AI strategist. With an MBA from Wharton and a master’s in Computer Science from the University of Virginia, Joyce has spent almost 20 years managing investments and guiding businesses in technology adoption. She now advises leaders on developing AI strategies and governance frameworks. Joyce also co-founded an AI-powered financial planning platform, where she learned the challenges and opportunities AI can bring to business.
In this episode, you will discover:
How Joyce’s background in finance led her to focus on AI
The real challenges businesses face when adopting AI
Why trust is crucial when introducing AI into decision-making
The speed at which AI is changing business models
How to build a successful AI strategy without getting lost in the tech
Why businesses need to focus on their goals before diving into AI tools
Joyce Li shared valuable insights on how AI is transforming business strategy and decision-making. Her experience in both finance and technology has given her a unique perspective on the challenges and opportunities businesses face when adopting AI. Trust, transparency, and a clear understanding of business goals are essential when integrating AI into decision-making.
Follow Joyce:
LinkedIn - https://www.linkedin.com/in/sjoyceli/
Website - https://averanda-ai.com/
Join hosts Glenn and Paul as they unravel the complexities of AI in finance:
Follow Glenn:
LinkedIn - https://www.linkedin.com/in/gbhopperiii
Follow Paul:
LinkedIn - https://www.linkedin.com/in/thefpandaguy
Follow QFlow.AI:
Website - https://bit.ly/4fYK9vY
Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.
Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.
In Today’s Episode:
[01:46] - Welcome to the Episode
[04:02] - Shifting Focus to AI Strategy
[10:52] - Lessons from Co-Founding an AI Platform
[12:42] - Building AI with Limited Resources
[16:12] - Generative AI and the Future of Robo-Advisors
[20:44] - AI’s Role in Startup Growth
[23:23] - Why Startups Are Winning Big Clients
[25:03] - Redesigning Workflows for the Future
[32:26] - Why Big Companies Hesitate
[38:24] - Describe AI’s Personality
[42:58] - Wrapping Up the Conversation
Full Show Transcript:
[00:01:46] Host 1: Paul Barnhurst: Welcome to this week's episode of Future Finance. I'm again joined by Glenn Hopper and we also have Joyce Li with us. Joyce, welcome to the show.
[00:01:54] Guest: Joyce Li: Nice to be here.
[00:01:56] Host 1: Paul Barnhurst: Yeah, really excited to have you. So let me give a little bit of background about Joyce. Joyce is a CFA and a seasoned investor, board director, entrepreneur, and AI strategist with nearly two decades of experience managing multibillion-dollar investments and serving on boards. Joyce now advises board directors and business leaders on the development of AI strategies and governance frameworks. Recognized as a top artificial intelligence voice on LinkedIn, Joyce holds an MBA from Wharton and a master's degree in Computer Science from the University of Virginia. Impressive background. We have two Wharton people here, I think, right? You did Wharton as well, didn't you, Glenn?
[00:02:43] Host 2: Glenn Hopper: So I was going to say, Paul knows I'm normally just arrogant and cocky and think I'm the smartest guy in the room everywhere I go, you know? But then, Joyce, you come in here, and now I'm like, suddenly imposter syndrome is kicked up for me. So I'm really looking forward to our conversation here, because your background, there are not a lot of people, I think, that really exist at that kind of perfect juncture between technology and finance, where you can talk as a domain expert in both and with your background and your experience. I mean, I think you're the pinnacle of those two right there. So really excited to have you on and talk to you today.
[00:03:19] Guest: Joyce Li: Thank you. Thank you for the kind words. And now I don't know what to say to match that description.
[00:03:25] Host 1: Paul Barnhurst: I have a feeling you'll do great.
[00:03:27] Host 2: Glenn Hopper: Yeah. So, looking at your background, and when Paul and I were talking before the show, I could picture how you got here. But, you know, the first part of your career, as a CFA, you were focused on investments, and now, as technology has moved along, you've switched and you're really focusing on AI. And because you have the master's in computer science, I imagine you look at everything with that sort of logical CS, engineer-developer's approach. But can you walk us through the transition, and when you realized it was time to make this switch from being primarily focused on investing to really leaning into the AI part of it?
[00:04:12] Guest: Joyce Li: Yeah, absolutely. I know it's always a little bit surprising when people see my LinkedIn and say, what's going on? For me, it's not really a switch. It's a natural evolution. I started coding at the age of nine, and then, as you mentioned, I had a computer science degree, so I always have that foundational knowledge inside me. And in the period when I was an investor, for almost ten years, I actually spent a lot of time researching AI usage in business models. It was all about, you know, big data, machine learning, e-commerce, social media, and all that really high-end, almost exclusive club of AI usage. And I came to understand the economics of that model really, really well: how much you need to spend, what time lag you have to wait until you see the return on investment, all that stuff. So you can imagine my shock when I came across ChatGPT at the end of 2022. Actually, that was at a cocktail hour, a networking hour, where I ran into people from OpenAI, at that time really a small startup not many people knew about, and I was completely blown away. I know that feeling came to many, many people in the world later, but because I understood the economics so well for the last generation of AI usage, I could completely see the potential of this cheaper and more accessible type of AI. So I decided at that time that this is what I was going to do next. Of course, it still took me a couple of twists and turns to get where I am now, but the direction was very clear for me.
[00:06:02] Host 2: Glenn Hopper: Yeah. And similarly, you know, I kind of liked that barrier to entry before generative AI came along. There are several other people out there, and, you know, Paul, Christian Martinez is who I'm thinking of here, those people who can write Python and work in finance, and it gives you this sort of extra advantage. It was more of an exclusive club. I say that kind of half jokingly, because now, with generative AI, natural language becomes the programming language of the future, and the fact that we're opening it up, I think, makes the industry better. I mean, when you give more people access to the tools, even though they don't understand programming, one of the things that I focus on a lot is, if you haven't studied it before, you need to lean into data science and understand what we're doing. You don't have to write the Python, but you have to know this is regression, this is clustering, this is classification, and all the components of that, so that you know the right questions to ask. And when you're talking to companies now as an advisor, are you seeing that people understand kind of the potential here? I mean, I know everybody's talking about it, and they get it sort of in this vague way. But when you get in and start talking specifics, what's the response been from the people you're talking to?
[00:07:15] Guest: Joyce Li: So if you're talking about what needs to happen, or what they need to know, before the level of questions they ask, I would say for the audience I face, usually people start very, sort of, boxed in: What can this do? Can this make my workflow easier? That's okay. I mean, everyone starts somewhere. I would say it's very natural for people to think about low-hanging fruit, workflow, productivity enhancements, but I always want to encourage them to think bigger. Really imagine: what's the end goal here? Like, if it's a CFO position, then it would be, what's the end business return that you're looking for? If it's a, you know, sales leader, then maybe it's, what's the sales target that you're trying to achieve? Think about that and then work backwards and ask what needs to happen to reach it. Usually that opens people up. So you mentioned data science. Really, we could even say it's a decision-making framework. And of course, Glenn, you're looking at it through a data science lens; for me, it's always from almost an investment analysis lens.
[00:08:34] Guest: Joyce Li: And for someone else it could very well be, what's my competitive edge? Like, how do I maximize that type of angle? I think the beauty of AI is you don't really need to have this one set of skills that fits all the great question-asking criteria. And then, going back to what really needs to happen: I encourage people to learn a little bit before they really think about the next level of AI fluency. How would you evaluate whether a result is good or bad? Think a little bit, again, about going to the end goal. How do you evaluate whether that end result is good or bad? And then come back to: oh, in order to evaluate this, maybe in certain areas I need to really understand a little more of the basic concepts, you know, decision-making, data science concepts, the models, at a very high level, because that's needed for me to decide whether this reasoning model generates better analysis than the other one. Right? So I hope that answers your question. I went a little bit long in terms of the framework I usually use.
[00:09:45] Host 2: Glenn Hopper: Yeah, no, and I think that's actually a great answer. And I did talk to him; we had him on this show before, I believe. But I was talking to someone several months ago about when you trust AI, when you trust the responses, and how AI can lose our trust by giving a wrong answer. And I thought he summed it up pretty well. He said, you know, if you hired ten employees and they mostly gave you the right answer, you wouldn't fire any of them because they gave you a wrong answer one time. But also, just like with your employees, you're not going to take whatever they give you at face value. You're going to use your own intuition and expertise and evaluate it. And, you know, as the answers come in, it's not like you're plugging numbers into a calculator and it's deterministic, and, you know, eight times five is 40, or I hope I did that math right; that'd be funny if I got that wrong. But because it's generating new content, I mean, it's just like with a person: sometimes it's going to be a wrong answer. But I thought it was a great point you made that your assessment of it, and kind of the human in the loop, remains very important.
[00:10:50] Host 1: Paul Barnhurst: Totally agree that human in the loop is huge. I want to ask you a question about how you made the transition. You know, you kind of learned a lot, and it's had its bumps. I think one of the things you did early on is you co-founded a company around AI, an AI-powered financial planning platform, and it looks like you pretty quickly pivoted away from that, after about seven months or so. So talk about that experience and what you learned from it.
[00:11:18] Guest: Joyce Li: Absolutely. First of all, I want to clarify, the financial planning here that we're talking about is financial planning for private wealth, not corporate finance.
[00:11:27] Host 1: Paul Barnhurst: That's what I thought. Yep. Yeah. We should make that clear. Thank you.
[00:11:30] Guest: Joyce Li: Because I know a lot of your audience are in the financial planning area anyway. So we had a vision, going and continuing that realization that AI is very different this time around. We felt like, for example, the private wealth advisory industry has been doing what they're doing in similar ways for a long, long time. Sure. And one reason some of these high-net-worth but low-liquid-asset professionals cannot get holistic financial planning is because, again, that business model doesn't work in the existing economics and setup. So what we were thinking is, with AI, by definition, we can have unlimited scenario planning tailored to your needs. It may not be monetary; it could be a completely different type of need that AI is able to make work, regardless of what you need or how special your need is, without worrying about the economics of the advisory firm, whether it makes money or not. So that's the whole landscape. And I would say one thing my co-founder and I realized is how different building in AI is versus building in traditional software, you know, before GenAI. The reason being, we used to think we had to hire a team of engineers before we launched our MVP, because we had all these great thoughts about different parts of the modules, and we also had this ambition of fine-tuning our AI model.
[00:13:07] Guest: Joyce Li: At that time it was Llama 2, plus using, of course, OpenAI APIs as well. What we ended up realizing is, with all these really interesting tools and open-source foundations, you know, some of the code base out there, we were able to handle all those demands, technically speaking, that used to take maybe three or four engineers, between the two of us, since we both code. And we got a lot of help from the open-source community. This was tremendously eye-opening for me, and I would shamelessly say that, although it lasted seven months, we felt we packed maybe two years of traditional startup experience into it. And what I learned, and this goes into why we ended up not continuing, is that technology is the easy part. Building trust, the channel, you know, the conversations, those are the tougher parts. And at that time, looking back now, it's hard to believe, but 18 months to two years ago, people were very skeptical about trusting what AI gave them. Even when we showed them side by side how the planning could save them, for example, tax in certain scenarios, they just didn't believe AI was that good. They needed another human to do the whole calculation again and then show them that that's actually a good answer.
[00:14:41] Guest: Joyce Li: So we felt like the timing was not great. But again, what I learned is that trust is so important. And also, how do you not deny the shortcomings of AI, which include making up stuff from time to time? You know, sometimes it doesn't give you a straight answer, just goes around and around. How do you make that an advantage rather than a disadvantage? How do you build that sort of brainstorming around it? And lastly, I would say, because it's almost like building on quicksand. One model, I think Glenn mentioned this before, one model you use, and three months later, when we were at the second generation of our MVP, the model had completely changed. So we realized you should not build a complicated workflow around an existing AI model's capacity; you should always assume that will change. That really forced us to learn to focus on the problem you want to solve, rather than falling in love with the solution you feel you are elegantly building, because that solution will be irrelevant two months from now. Yeah, so that's my learning. And all of that really fed into, by the way, what I'm doing right now, because I feel like, oh, I've been through that pain, and I know now how to tell signal from noise. I do feel I have a little bit more confidence in doing that.
[00:16:12] Host 2: Glenn Hopper: I've always wondered about that, because I think of it from a broker-dealer perspective, or like Wealthfront, you know, they've had their robo-advisors for years. And I think about using generative AI and the fact that it can hallucinate. And I think about so many things you read where it says, you know, I'm not licensed, obviously different for a broker-dealer, but so many things you read where it's like, this is not financial advice, I'm not licensed to give this, and all the caveats that go around it. But then at the same time, you know, the University of Chicago last year did a paper, and this was even before ChatGPT 4; this was using ChatGPT 3.5 without even using a code interpreter, just a straight LLM. The study found it was better at directionally predicting earnings per share for companies than human analysts were. There were some criteria in there, like human analysts were better for startups and small companies because, you know, the AI needed the data to go behind it, so for those, humans did better. But I think about that and the access that we have, and Wealthfront just as an example, because it's one everybody's heard of, but the whole idea of democratizing this financial advice: if you can have these robo-advisors so that people who don't have access to a personal financial advisor can get it, it seems like a great resource.
[00:17:32] Host 2: Glenn Hopper: But that's a real problem to solve. I mean, I know we're getting better on hallucinations, and there are guardrails you can put around it and everything, but it's an interesting spot, because, and I think maybe people who understand what's happening under the hood a little better might appreciate this, that's a lot of trust. I say all the time, yes, we trust but verify everything. But if you're talking about your investment strategy, that's a lot, to really lean into the robo-advisors, to hand over the keys to them. Maybe it's better if they give advice and suggestions, and you take that just as you would anyone else's, but don't let them trade on your behalf. I don't know.
[00:18:11] Guest: Joyce Li: Yeah, I do think trust building requires a lot of things, and in many fields we hold AI to a higher standard than humans. When it makes mistakes, we really want to put it in the penalty box and then fix it, which I think is fair. That's why we can leverage AI. But I do think that trust element has to be built around frameworks, around evaluation, around audit. I'm really happy to see that this is now, you know, being worked on by many, many players in the ecosystem. I use financial planning as a, you know, a teaser because I worked on it, but really it's everywhere, in FP&A too, right? How do you trust the conclusions and recommendations? How do you trust due diligence? All of that. And I see it everywhere now. Now that people feel AI is not going away, let's make sure that it works for us.
[00:19:10] Host 1: Paul Barnhurst: Yeah, I agree, there's definitely more of a push to be able to validate now, because it's not going anywhere. And to a certain extent there may always be a little bit of the black box concern, but there's so much more now, right? Give me your sources. Tell me the logic you used. You know, deep research comes back with 30 sources, and with ChatGPT at least, you can see on the side how it thought. So you're not left feeling like, okay, hopefully this answer is right, especially if you're asking about something you don't know; at least you can kind of go read an article and start to get an idea. So it's getting better. But yeah, it's a big concern, especially in FP&A. We like to see all the numbers, right? As we like to say, give me Excel, give me my spreadsheet, don't give me a black box. And, you know, AI to a certain extent is a black box. It's exciting times. I know I've been excited for it. And you and I have talked a little bit about how technology and AI is really changing the landscape, especially for small companies. We've all heard the term, it used to be said, nobody ever got fired for buying IBM, right? That old tagline of, hey, just be safe, I'm always going to go with the big player because I know at least I can hide behind that.
[00:20:25] Host 1: Paul Barnhurst: Well, I went with the industry leader. But we're starting to see some early-stage, Series A companies landing some public or pre-IPO clients. I've seen one that landed the largest retailer in the world. You talked about, you know, another one, I know you know of, where they landed a large company that's getting ready to IPO. How much of a role do you think AI and tech is playing in that change that we're seeing?
[00:20:51] Host 1: Paul Barnhurst: Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization. Under the hood, it automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create airtight alignment, improve decision latency, and ensure accountability across the team.
[00:21:58] Guest: Joyce Li: I've been thinking about this a lot, because for me, as an investor for so long, especially in public companies, it was so rare to see that sort of risk taking, well, risk taking in the traditional sense: moving to adopt a new vendor so early in their development. So it really piqued my interest, and I did a little more research and thinking around it. So here are my hypotheses, and I'm really eager to hear your feedback and thoughts, both of you. One is, again, everyone is building on quicksand, right? As I mentioned, the underlying AI models have become too good to ignore. First of all, the decision has been made: you have to consider AI in anything you do. You can choose not to end up using it, but you have to consider it, right? We're already at that stage. And the second thing is, AI-native startups are building very, very fast. And that matches: when the end users are also dealing with this quickly evolving landscape, if they really, truly focus on the problem, then they're not going to fall in love with their existing solution. So they want someone who can really help them build that solution and grow together.
[00:23:23] Guest: Joyce Li: And I think, when you and I talked about this early startup landing a deal with a pre-IPO company, that's the biggest, number-one reason: the client is growing so fast that waiting for existing vendors to work on it and come back with a, you know, a proposal or a solution three weeks later is just not going to work. Therefore they're willing to open up this window to test out new products. And the second thing is, I do think the entire ecosystem is moving to enable that, because if you think about the data layer, the cloud players like Amazon AWS and others are putting together privacy guardrails, data security guardrails. Everyone is taking that heavy lifting into their layer of the implementation. So when the application layer comes in, the startups are able to say, okay, we work with these vendors who are already solving the tricky security and privacy problems, and we got certified by them, so they trust us. Therefore, again, the trust building is accelerating. I do think everyone is working towards that whole goal. And lastly, this is a developing hypothesis, so I need to hear what you think. I do feel a lot of us in these leadership or management positions have that ambition to make things better, work more efficiently, and think about how things can be completely revamped to, you know, take a step further.
[00:25:03] Guest: Joyce Li: You said people don't get fired for, you know, recommending IBM. But these days a lot of people are doing what they do only because their bosses, or their bosses' bosses from 20 years ago, have been doing it this way. So they inherited it, but they hate it, right? It's almost like, at home you have all these interesting apps to play around with, and at work you're using software maybe built on concepts from the 90s. So there's always this ambition in some of the more forward-looking leaders to see this as an interesting window to assert that agency and think about, okay, how do we make this company more future-ready? Given my domain knowledge, if I assume I can make anything happen, how would I redesign my workflow? How would I redesign my staff and organizational structure? That's an opportunity. I do see some really ambitious leaders starting to step up and say this might be worth considering, and that creates the opportunity for new vendors, again, to work together. Now I want to hear what you think.
[00:26:20] Host 1: Paul Barnhurst: Yeah. So I'll go, and then I'm sure Glenn has some thoughts here too. You know, on the first one, I agree: the agility, the flexibility, right? These new companies that are AI-native from the beginning, and small, are much more agile and can address needs much quicker. Whereas in the old days, you know, a big tool had a lot more functionality, and you went with them because they had all that. And that's still generally true, right? If they've been around 15 years, they have more. But that small tool can develop what you need, those couple of things that are missing, much quicker than tools used to. The one you didn't mention that I think plays a role, and it's been there, but it's become much bigger, is that for these smaller companies, there's so much capital out there now. If they're really successful, there's not that concern that they may run out of money or not be able to raise capital, because there's always been that question: will they be around in a year? So I think that's helping some; I don't think it's a big part. I think it's a little bit security; I think you're right on there, because everybody wants you to have that SOC 2 and all those different security compliances, and that's a 6-to-12-month journey if you're doing it on your own. It's hard, it's a lot of work, and it takes time. So having the cloud player doing it for you, connecting to it, and then saying, yep, we trust you, that's an easier, streamlined process, and it'll get you in the door with bigger companies where before you didn't have the money or the time to do it like you can today. And on your last point, I would add one thing. I think the ambition is a big part of it. I was talking to somebody else, and they also mentioned that a lot of the leaders now are those that grew up in a digitally native environment.
[00:28:01] Guest: Joyce Li: That's right.
[00:28:02] Host 1: Paul Barnhurst: So the older solutions just aren't enough, right? They grew up with a phone in their hand and a tablet, seeing things that were possible, and they're just like, why are we doing it this way? So I think that's a little bit of it as well, as you mentioned, the ambition. There's much more willingness to be open, because they see the benefits. Even though it may be a much smaller player, they say, I trust them, I can see how it will help our workflow, help our employees; it's a risk I'm willing to take, because they grew up in an environment where they saw technology move so fast.
[00:28:37] Host 2: Glenn Hopper: I'm going to come at this from a slightly different angle, because I've spent the bulk of my career helping businesses in the SMB space, and let's go way down, businesses with less than 25 million a year in revenue. These businesses don't have a lot of resources. Some of them barely have FP&A. They're probably in QuickBooks or Xero. They, you know, get the canned reports; it maybe takes them 30 days to close their books. And if you can come in and shorten that and give them even the base level of FP&A, they're excited. But they're getting the same kind of pressure that mid-cap and larger companies are to implement AI. And, at least in my experience, they have an expectation that AI is going to be more of a magic wand, that you just come in and wave the AI magic wand and it fixes everything. But you've got to do so much work under the hood, and it's not sexy work, it's not fun work. With some of these businesses, it's like, well, first we have to get your chart of accounts straight. We're going to do that, and that has nothing to do with AI; that's just getting our data right. Then it's getting the data in order, and then coming up with the data dictionary and everything. But for these SMBs, I'm telling them, for the most part, look, let's get your data straight now, because the SaaS providers are eventually going to have generative AI rolled into their tools, and it's going to be available to you then. What they think they want is just too out of reach for them right now.
[00:30:09] Host 2: Glenn Hopper: It's very bespoke right now, because I give pitches to clients all the time, and I've kind of changed my approach: what I give to mid-cap companies versus what I give to an SMB are two very different things. For a mid-cap company, I love to go in and say, okay, look, we're going to go into Snowflake, we're going to use Cortex, let me show you how we can integrate the LLM: we're going to pull your GL data, analyze it, pass it through and do this. Or we're going to do something similar in Azure and break down and look at your GL. Or, you know, OpenAI just rolled out the new SDK for agent building and all that, and it's super cool. Basically, the Responses API is going to deprecate the Assistants API, and I've got so many things set up using those assistants. I know they're giving us until like mid-2026, but I'm a little worried about this. But, you know, for an SMB, if you're not a tech company and you don't have developers on staff, you might know you can build a GPT, but you can't build an assistant. So a lot of this stuff is out of reach for you. But there are tools out there, to the point you guys are making, that are coming online really quickly, and they can do this stuff.
[00:31:18] Host 2: Glenn Hopper: So if you've been a QuickBooks customer for years and some AI-powered new tool comes along, and you're not a public company, you don't have the audit requirements and all that, you might move pretty quickly. Because how long is it going to take Intuit? How long is it going to take Oracle? How long is it going to take these ERP providers to incorporate AI? If I really want to use it, and some tool comes along, and I've been on QuickBooks for ten years, but this new tool has AI built in, I might make that switch really quickly, especially if it's using AI to help migrate my historical data and bring it over into the new system. So what I wonder is how long it's going to take for these battleships, the players that have been around since the 90s and have all this information and all the market share, to get beat out by these new young upstarts. A lot of people, you know, if you're a public company and you've got audit requirements and all that, you're not going to just dive in and jump on the bleeding-edge technology that's out there. But for a lot of businesses, there may be solutions coming sooner than what they'll get from the systems they use today.
[00:32:26] Guest: Joyce Li: Although I would just add one point: the larger companies could stay put with their existing vendors for a long time, but then for certain specific needs, they may, you know, plug in someone new just to serve that specific need, whether it's supply chain or whatever it is, and make it a lot more modern.
[00:32:48] Host 2: Glenn Hopper: Yeah. And if they have a data lake or whatever, they can have a whole other layer, kind of the AI layer, that they treat as outside of their basic ERP or EPM or whatever. But to your point, everybody's getting pressure. Everybody's got this sort of FOMO about, oh, if I don't get AI, I'm going to get left behind. And, you know, there's a sense to it, but at the same time, a lot of people are saying, well, what can generative AI do? Am I going to be able to reduce headcount? How am I going to get the ROI on it? But you recently wrote about the Fed survey that showed adoption of gen AI has been quicker than traditional AI, machine learning, or even PCs. So with all the uncertainty and the sort of non-deterministic nature of it, why do you think the adoption has been so fast compared to those historical technologies? Like, I can't remember the numbers on how quickly OpenAI got to 100 million users, compared to Spotify or Instagram or technologies that came before.
[00:33:53] Guest: Joyce Li: I may be dating myself, but I feel like I experienced that early adoption of the PC, the internet, and all that. Yeah.
[00:34:00] Host 1: Paul Barnhurst: I remember all of it too.
[00:34:03] Guest: Joyce Li: But I would just say, on the personal-use side, the reason is pretty straightforward. One is it just requires a lot less technical know-how to get started, and it also requires a lot less money to get started. I know it's hard to imagine now, but when the internet first came out, it was really expensive to get connected with dial-up, right? And when the PC first came out, it was also a couple thousand dollars per machine. So it was very, very expensive. It's not like you could just stop by and buy one. And now we sometimes joke, I feel like on some days I'm the chief of staff between all the AI chatbots, because I test something out with one and then ask another, what do you think? And then, oh, that chatbot thinks this way, would you disagree? It's almost like I could pay for the paid version of all of them together. I don't, but even if I did, it's not a lot of money per month, and it's very, very accessible. But on the corporate side, I would say, going back to our language, the ROI is very quick to justify, especially on the functional, domain-specific usage. For example, we talked about, you know, the accounting side of things.
[00:35:30] Guest: Joyce Li: Some of these tools are very quick to implement but also easy to evaluate. I think that's a big part of where a lot of people are now stuck: how do I evaluate and be certain that, to Paul's earlier point, this black box produces the right result, not just now but ongoing, after I implement it? But for some of the tools in the more black-and-white, rule-based functions, it's very easy to evaluate, and that helps build the confidence and really helps the adoption. And lastly, I would say the competitive pressure is real. It's unlike the internet or the PC, where people could move a little more slowly to figure out the right way; you could almost drive your own car at the speed you preferred. Now, at least in my conversations, it's constantly: what are you seeing? What are other people doing? What are other companies thinking about over the next year or two? Of course, it's all high-level discussion without disclosing any details, but it really showcases that everyone feels this has such a huge impact on their business model, their pricing, their competitive landscape, their market share, and also their reputation as either a technology-enabled or a technology-left-behind company. Right? So yeah, you do see that urgency a lot more.
[00:37:00] Host 1: Paul Barnhurst: It's amazing to watch that urgency and just how quick it is, and I think cost is one of a number of things you mentioned that play into it. That speed is so different compared to prior tools and software, and I don't see it changing. It feels like it accelerates every day, right? There's something new all the time. So appreciate that. It's been a great conversation. We're going to move now to what we call our fun section, where we ask some personal questions. How we do it is we use your bio to come up with the questions. In this case, I believe I gave it your Substack, your newsletter website, and said, come up with 20 fun and kind of unique questions to ask our guest. I used ChatGPT 4.5 for this. Glenn and I each have a different way of asking our questions.
[00:37:53] Guest: Joyce Li: I'm honored. You spent some money on tokens for that.
[00:38:00] Host 2: Glenn Hopper: Yeah.
[00:38:01] Host 1: Paul Barnhurst: There you go. Yep. All right, so here's how this one works. You get to pick a number between 1 and 25. Or you can let the random number generator pick. And that's the question I'll ask you.
[00:38:15] Guest: Joyce Li: I'll pick 12.
[00:38:17] Host 1: Paul Barnhurst: 12. All right, let's see what we get for 12. I'm not even sure what this question is. Ah, this is kind of a fun one. If AI had a personality, how would you describe it?
[00:38:28] Guest: Joyce Li: That's an interesting one, because you sort of feel like AI has different personalities across the board, right? Different chatbots have different personalities.
[00:38:36] Host 1: Paul Barnhurst: How about ChatGPT? If you want to pick one, we'll go with ChatGPT.
[00:38:40] Guest: Joyce Li: I actually use Claude more. So let me just say Claude.
[00:38:44] Host 1: Paul Barnhurst: I use Claude quite a bit too. I really like Claude.
[00:38:48] Guest: Joyce Li: Okay. I would describe it this way, and then see if you like it or not. I have a teenager, a high-school boy, so you can imagine what I'm dealing with. I feel like Claude has that personality, as if Claude were me and I were the teenager. I just keep saying, you're not good enough, or, tell me something else, how do we make this better? And Claude, almost like a mom with a teenager, is super patient, super professional, and really cares about how I feel. So yeah, sometimes you have to say to Claude, be direct, point out where my weaknesses are, or where the statement's weaknesses are. But I sometimes can't help feeling that Claude is trying to be to me what I am to my teenager.
[00:39:45] Host 2: Glenn Hopper: Well, that's great. That's a good answer. It's funny when you compare them, because Claude has its own sort of guardrails and things that it won't do. With ChatGPT, I put a picture of someone into the chat earlier today, and I wanted it to give me a description of the person in the picture, and it said, oh no, I couldn't do that. And I said, why not? And it said, well, you know, we don't want to offend anybody by describing them. And I thought, oh, ChatGPT. So then I went over to the bad kid. I went over to Grok, and it gave me like a five-paragraph description of the picture. And it's just interesting because of the, you know, the personality. So the personality...
[00:40:23] Guest: Joyce Li: Of the teenager.
[00:40:24] Host 2: Glenn Hopper: Yes, exactly.
[00:40:26] Host 1: Paul Barnhurst: Yeah, you get a lot. It has no problem. The filters are, I'll tell you what I think.
[00:40:30] Host 2: Glenn Hopper: Yeah. But I mean, with these AIs, the personality is based on the safety team or whatever guardrails they put on them. Because if you think about it, these things, having read the whole internet, could get really weird based on the dark corners of the internet. So it's all about the management and what they decide. We could do a whole different episode on the guardrails and how these respond and all that. But I will say, Paul, 4.5, however many tens of millions of dollars it cost to train, was kind of a bust. You know, originally that was going to be ChatGPT 5, and then it came out and they were like, well, I don't know, it's kind of... they said we've gotten a little spoiled.
[00:41:11] Guest: Joyce Li: We're so spoiled. Right?
[00:41:14] Host 2: Glenn Hopper: But I will say, the questions from 4.5, this whole list, are among the better questions that we've gotten. So, you know, for the $100 million or whatever it cost to train it, now we have slightly better interview questions.
[00:41:32] Host 1: Paul Barnhurst: That's all the answers, Glenn. Come on, let it rest. Let's go.
[00:41:33] Host 2: Glenn Hopper: Yeah, yeah. So Paul and I take a different approach here, and I always figure, since the AI is the one that created the questions, I'll just let it pick a random one to ask for us. There's no human in the loop on this; you just get the AI to pick it. And the question it came up with for mine is: do you think AI will ever be able to replace human intuition in investing, and why or why not?
[00:42:01] Guest: Joyce Li: Well, first of all, we need to understand what the role of human intuition in investments is. A lot of it, we say, is intuition, but it's actually pattern recognition. So in that sense, if we define intuition that way, then AI can definitely replace a big part of it. But for what's left, I would say today's AI, again, today's AI, is not ready to do that, because I do think it requires a lot of specific domain post-training. And, at least to my knowledge, I don't see a lot of successful domain-specific post-training on gen AI yet for this purpose.
[00:42:49] Host 2: Glenn Hopper: Maybe that's our next business. Maybe we go in on that one together, build our own.
[00:42:55] Guest: Joyce Li: Open to that. Yeah.
[00:42:58] Host 1: Paul Barnhurst: All right, we appreciate the answer. And we loved having you on the show, Joyce. It's been great chatting with you. I have a feeling we could probably go for a couple hours, but our listeners might get a little bored with that. Thank you so much for joining us. We'll make sure to put your contact information in the show notes. It was a real pleasure chatting with you, so thank you for carving out some time for us.
[00:43:20] Host 2: Glenn Hopper: Thank you Joyce.
[00:43:21] Guest: Joyce Li: Thank you very much. It's been a pleasure.
[00:43:24] Host 1: Paul Barnhurst: Thanks for listening to the Future Finance Show. And thanks to our sponsor, QFlow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice and may your robot overlords be with you.