AI in Financial Modeling for Analysts to Improve Accuracy and Speed with David Ingraham

In this episode of Future Finance, Paul Barnhurst and Glenn Hopper are joined by David Ingraham to discuss how AI is transforming the world of finance, particularly in Excel, and why it's essential to treat AI as a partner rather than a tool to simply execute commands. David, the CEO and founder of HyperPerfect, shares his insights on the challenges and opportunities in the AI-finance intersection, including the importance of context and understanding AI’s limitations.

David Ingraham is the CEO and founder of HyperPerfect, a financial reporting and accounting platform that integrates powerful AI directly into Excel. With nearly 20 years of experience in private equity, David has worked on deals totaling over $1.5 billion. He is also passionate about educational initiatives, serving as board president of Aim High, a Bay Area nonprofit that provides free summer education to nearly 2,000 students annually.

In this episode, you will discover:

  • Why finance professionals are slow to adopt AI despite its potential

  • How AI should be treated as a teammate, not a replacement

  • Why context is everything in AI models, especially in Excel

  • The limitations of AI in finance and how to work around them

David emphasizes that AI isn’t a shortcut; it’s most effective when used incrementally, with human oversight to ensure accurate results. He also discusses how AI can significantly increase productivity, with an efficiency gain of 15 to 20 times if used properly.

Follow David:
LinkedIn: https://www.linkedin.com/in/dsingraham/
Website: https://www.hyperperfect.com

Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii

Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy

Follow QFlow.AI:
Website: https://bit.ly/4i1Ekjg

Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.

Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.

In Today’s Episode:
[00:00] – Trailer
[02:35] – David Ingraham Background
[04:02] – AI in Finance & Accounting for Dummies
[05:57] – AI as a Teammate, Not a Threat
[07:09] – AI as a Thought Partner
[10:01] – Why Finance Teams Struggle with AI
[11:51] – Evolution of FP&A
[14:30] – RevOps & Finance Alignment
[16:41] – AI Use Cases in Finance
[19:20] – Agentic AI & Future Direction
[22:31] – Process, Data & AI Limitations
[28:52] – AI Questions & Closing

Full Show Transcript:

Co-Host: Glenn Hopper (00:41):

Welcome to Future Finance. I am Glenn Hopper, along with my colleague, the FP&A Guy, Paul Barnhurst. Paul, how are you doing?

Host: Paul Barnhurst (00:50):

I'm doing great. And yourself, Glenn?

Co-Host: Glenn Hopper (00:52):

I'm doing good. It's been another one of those back-to-back meetings days, but today I got to get in the car and actually go to an in-person meeting. So

Host: Paul Barnhurst (00:58):

You met actual people, not just talked to them online.

Co-Host: Glenn Hopper (01:02):

You want to tell everybody about our guest today?

Host: Paul Barnhurst (01:04):

I do. So we have here with us David. David, welcome to the show.

Guest: David Ingraham (01:07):

Thank you.

Host: Paul Barnhurst (01:08):

So lemme give a little bit about our guest's background. So David Ingraham is the CEO and founder of HyperPerfect. He's a graduate of UC Berkeley, and he has spent much of his career, nearly 20 years, in private equity, working on deals totaling well over a billion dollars. He also founded Facta, which is an AI-powered financial reporting and accounting platform for investors in SaaS companies. Today he's been focused on his new project, new company, HyperPerfect, which embeds Claude and other best-in-class AI models directly inside Excel. He also serves as board president of Aim High, a Bay Area nonprofit providing free summer education to nearly 2,000 students a year. David's PE experience is what pulled him into becoming a software founder. Every deal started the same way: someone buried in Excel building models, scrubbing data and hoping the formulas held up. Glenn, does that sound familiar?

Co-Host: Glenn Hopper (02:07):

Yes. As a former CFO at PE-backed companies, 100%, that sounds,

Host: Paul Barnhurst (02:11):

I think we could all relate to that one. So he realised professionals need the flexibility of Excel, but want easier, more reliable execution. So he built HyperPerfect. As he puts it: no context switching, no copying into ChatGPT. Just talk to your data in plain English, right where you already work. So again, David, welcome to the show.

Guest: David Ingraham (02:30):

Thank you. I'm not sure where you got all the bio information, but it was very nice, very nice background, so thank you.

Host: Paul Barnhurst (02:36):

I will admit I got it online LinkedIn, and then I had Claude modify it and I cleaned it up a little bit.

Guest: David Ingraham (02:44):

Good for you.

Host: Paul Barnhurst (02:45):

A mix of AI and your LinkedIn profile primarily.

Co-Host: Glenn Hopper (02:48):

That's kind of every workflow that we do now, right? Just some AI, some web search, and then some human judgement, is that

Host: Paul Barnhurst (02:56):

There's more truth to that than we want to admit Glenn.

Co-Host: Glenn Hopper (02:59):

Yeah, hopefully it didn't hallucinate anything like say Berkeley instead of UCLA or anything crazy like that.

Guest: David Ingraham (03:06):

No, it was actually pretty accurate.

Co-Host: Glenn Hopper (03:07):

Yeah. All right, there you go. AI for the win. So I am interested. I mean, suddenly everybody's trying to get into Excel right now, and as Paul always says, I thought Excel was dead, but I think your model's a little bit different. Can you walk us through, before we get into any of the other questions, just explain what HyperPerfect is, how it works, and where it kind of sits in this Excel plugin battle right now?

Guest: David Ingraham (03:31):

I appreciate that question. Yeah, well, just to give you a quick background: about a year ago I was starting to play with AI and just teaching myself how to vibe code and things like that, just to get immersed in the sector, because I saw the models were starting to perform. I quickly realised that this could be used to interact with Excel, because Excel, like Google Sheets, offers an API that you can interact with the sheets through. And at the time, everybody was kind of just uploading files into ChatGPT and asking questions about the file. And my feeling right away was, if you can't interact with the file, it was not the type of workflow that was going to work for most people. And I think over time, what I found as I got to really immerse myself in the technology is that it's impossible to use Excel effectively with AI if you're not able to work very incrementally.

(04:31):

And so building in that environment, for me, was key. And now you've got a lot of other tools that are offered in that environment, so the question is how do you distinguish yourself? And for us, I think that's two different things. The two big competitors obviously are Claude and then Copilot. So I think there's two ways that we are really focused on becoming different. One is raw horsepower, which is super important. And it's interesting, I'm curious what you guys have been seeing, but what I've seen recently is that up until the last OpenAI model came out, Claude was really far and away the best model for everything relating to Excel. I've been testing the last OpenAI model, and all of a sudden it's significantly better than Claude is. And so one of the great advantages that the independent providers have is you can switch back and forth to whatever model is the best model, and putting the right tool to the right job is something that we're definitely focused on, so that we can provide the most amount of value for the least amount of money, which I think is quickly becoming the biggest issue with AI.

(05:53):

It is actually very expensive. I've been watching Paul's tests with all these different providers. The reality is anybody could win those modelling tests if they just turn up all the knobs and spend a tonne of tokens to be able to deliver the answer. The long-term issue is going to be who can do that for the least amount of tokens and make it very efficient. So just being able to have flexibility and horsepower is one. The thing I'm noticing the most in the finance world, having one foot in engineering and one foot in finance, is that finance is like a year and a half, two years behind all the engineering in terms of the sophistication of using AI. So I'm still seeing a lot of people open up Excel, say, "Hey Claude, build me a model," and then they're disappointed when the balance sheet doesn't balance in year five.

(06:42):

Well, you've missed 50 steps in between where you started and where you want to get to. I think being able to hold users' hands in terms of building out context in particular, because context is everything, being able to support them and to teach them how to use these tools effectively. Providing that support is, I think, job number one for everyone right now, because the learning curve is so steep that people are having a hard time just understanding what all the functionality is that they need to understand to be able to make this work well for them.

Co-Host: Glenn Hopper (07:18):

Yeah, I think that's a really important point that you brought up. Because we've moved on with more agentic capabilities in the frontier models, we've moved on from really having to do chain-of-thought prompting. But I think the big gap that people have to bridge is assuming that the model, or whatever AI tool you're using, has the same context you do when you haven't given it anything. That's like hiring a new employee and just telling them: anyone can build a DCF model, but with the specifics of these companies we're evaluating, or whatever the case is, you can't just assume they have that context. So I really like what you said about the incremental steps in building this, because if you're already in the interface, the surface where you're going to be working in Excel, and you say build a DCF model, it could build you up a template or whatever, and then you can add the context incrementally.

(08:16):

But if you're throwing it on the web-based chatbots, it's like, okay, it's going to build this whole model and then you've got to go through it. Now you've created kind of two work surfaces there. So if you can integrate the two, that makes a lot of sense. So how does this experience work? Is it pretty much the same as, we won't talk about results, as Paul knows, I love to dig on Copilot here, but I mean, is it the same interface as if you use the ChatGPT plugin or the Claude plugin or Copilot, where it's just a sidebar, but in yours they get to do model selection? Or how does the user interact with it?

Guest: David Ingraham (08:53):

I mean, the initial experience is very similar. All of these tools are going to be always fairly similar, just given the limitations that you have in Excel for what you can do with a chat window. But I think where we're taking it is what's different. So I really do believe that context is everything in AI, and I don't think enough people really appreciate what that is. You nailed it with your description of someone coming just out of school. If you hired someone just out of undergrad and said, build me a model with these financials, what do you think the results are going to be? It's not that you can't assume that they're going to know it; you can assume that they won't know it. And that is something that people don't understand about AI: when you press the return button and make a prompt, it has literally no information in its head until you press that button.

(09:51):

And the only reason it has a little bit more information the second time you press the button is because the software provider in the background is giving it the first part of the conversation again. So every time it's a blank slate, and that's where the opportunity is: to build that context for the individual. So to answer your question in terms of what's different about it, we are solving two problems at the same time. One is you need to provide it way more information than what people are doing today. And two is you need to protect the very finite amount of memory that it has. So Claude just went from 200,000 tokens to a million tokens, but the reality is, the more information you put into that context window, the more the performance goes down, considerably. And then the other thing that I'm not sure people are really aware of about Excel in particular is that it is extremely easy to blow out the context window in Excel.

(10:52):

There's so much metadata. Well, first of all, there's just so much data in Excel files. There are thousands of rows, often hundreds of columns. That alone would pretty quickly blow out a context window. But then you start laying in things like formatting metadata. I've done some tests where a five-year monthly model that's 15 rows long, if you put all the information in there and send it to the LLM, you've already filled up a 200,000-token context window. So you've got to start quickly thinking about how you're going to take a reasonable-size model, shrink it down, and serve that information to the LLM in a way that is actually very useful for it. And still today, with almost all of the tools that I've seen, if you have a thousand rows of the same formula, it's going to send a thousand copies of that same formula to the LLM, and that just isn't an efficient way to do that.

(11:51):

So we've got to do a much better job of compacting things down to what the LLM needs to know, which is: there's a formula here and it goes down a thousand rows. And so we've already built that infrastructure into ours, and that's why I think that when I test our product against Claude and Copilot, I'm seeing performance that's better, and I think it's because we've spent the time to do those types of things. But the long-term opportunity is to build the bigger context around all of this information, so that every time you press a request to build the next 10 lines of this model or whatever, there's actually quite a lot of information being sent over. How does Dave like to build his models? What are the items that the balance sheet is going to need to have for this particular company? Because we've been working on this company for two years.

(12:42):

Dave works at an investment bank. They always use the same LBO model when they're working with private equity clients. Well, where is that LBO model? Go find the template. And for these 10 lines of the model that we're working on, what does this company expect it to look like? These are all things that have not been built at all by any of these companies. And I think the ones that have made the furthest progress, in particular Claude, it's really that Claude Code is built for coders; it's built for huge file repositories. That is not how financial professionals work. Financial professionals might have a file directory, but they also have emails, they've got calls and notes from calls, they've got telephone conversations. So what you need to be able to put into that workspace is much different than what is being offered today. And that's all of the kinds of information that, over time, we want to build into our product.
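The compaction idea David describes, sending "there's a formula here and it goes down a thousand rows" instead of a thousand copies, can be sketched roughly like this. This is a toy illustration, not HyperPerfect's actual code; R1C1 notation is used because a formula filled down a column is textually identical in every row.

```python
def compact_column(cells):
    """Collapse runs of identical R1C1 formulas into one line for the LLM.

    cells: list of (address, formula) pairs for one column, top to bottom.
    A long fill-down run collapses to a single description, which keeps a
    big model from blowing out the context window.
    """
    out = []
    run_start, run_formula, run_len = None, None, 0
    for addr, formula in cells + [(None, object())]:  # sentinel flushes the last run
        if formula == run_formula:
            run_len += 1
            continue
        if run_len == 1:
            out.append(f"{run_start}: {run_formula}")
        elif run_len > 1:
            out.append(f"{run_start} filled down {run_len} rows: {run_formula}")
        run_start, run_formula, run_len = addr, formula, 1
    return out


# A growth column plus a total: four cells become two lines for the model.
rows = [
    ("B2", "=RC[-1]*1.05"),
    ("B3", "=RC[-1]*1.05"),
    ("B4", "=RC[-1]*1.05"),
    ("B5", "=SUM(R2C:R4C)"),
]
print(compact_column(rows))
```

On a real sheet the same idea applies per column and per formatting run; the point is that the description sent to the model grows with the number of distinct formulas, not the number of rows.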

Host: Paul Barnhurst (13:40):

It feels like a lot of it is around context, and how you manage that context window, and some of those types of things. Which leads me to another question here. Obviously you mentioned ChatGPT has got a lot better, and Claude made a huge leap forward a few weeks ago. Given where we're at right now, what are the things that you feel like agents are good for, and what are things that maybe people should not be trying to use agents for? Because I think a lot of people are struggling. They see this promise that it can build anything, and oh, you just got to get your prompt right? And there's still a level of human judgement here. I don't think anyone's at the point where it's like, yeah, just let AI do everything. So what advice would you give to finance people? How should they be thinking about using agents today?

Guest: David Ingraham (14:25):

It's not that AI is better at some things than other things, but if you're not treating it as a partner, you're not using it correctly. If you just expect it to come back after one or two prompts and give you a perfect answer, that's not how AI works. When AI works really well, you start with a planning process, and you come up with a plan with the AI, in conversation, back and forth. And one thing you'll be surprised by, if you start doing that and you haven't done it before, is it's actually really good at planning. In a lot of cases it will come up with ideas that you didn't think of. But probably more importantly, I mean, that's great, but more importantly, just going through that planning process, like you would with a first-year financial analyst, it starts surfacing things where you go, no, I don't want to do it that way.

(15:18):

I want to do it this way. And so you come up with that plan, then you execute on that plan. You don't have to build a three-statement model right in one shot. You build it in manageable segments, because what you'll find is, the bigger the chunk of work you take on, the more poorly it will perform. If you have it building just the revenue section of an income statement, relative to building a three-statement model in one shot, it will do a million times better on that revenue section than it would if it's just part of the entire model. Plus, as you're doing that, you can watch it to make sure that it's doing exactly what you want. That's the thing that people aren't getting: it's not this magical "do it and it's done." We're partnering together, and you're going to help me do the hard parts, but I'm going to be on top of every step that you do.

(16:14):

Every time that we get to the end of one step of the bigger project that we're working on, I'm actually going to read everything you did and make sure you did it the way that I want you to do it. I also think that's the disconnect: people think that AI is going to just remove everyone's jobs. I don't think that's true. It's never going to stop hallucinating, the way that it's constructed today, and you have to have a human there understanding it, so that you can communicate that to other people in your organisation, and to be accountable to the organisation as well, because AI is not accountable; it's just a computer. So I just think that people need to, you

Host: Paul Barnhurst (16:51):

Can't fire AI. Well, I guess you can, but it's not quite the same.

Guest: David Ingraham (16:55):

You can change AI providers, but it doesn't care that you've changed AI providers. So I really think people have to very much change how they work with AI. And if you look at how engineers work, that's how they do it. I mean, they're constantly writing tests to make sure that the code that AI wrote passes the tests. And I think that in finance, people don't do that. They just go, oh, the balance sheet didn't balance, so it failed. No, get it to go rebuild it. I have it rebuild the balance sheet three times before I even look at the balance sheet. Why would I look at it the first time? It's got two more attempts to get it better. It doesn't cost me much except for a few tokens. So I really think that people need to learn different techniques in how they do it, and they'll see much better results.

Host: Paul Barnhurst (17:39):

Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization. Under the hood, it automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create air-tight alignment, improve decision latency, and ensure accountability across the team.

Co-Host: Glenn Hopper (18:46):

I definitely have a follow-up question to that, but I'm going to interject here, talking about firing AI. I'm shutting down my OpenClaw server. I've been playing around with it for several weeks, and then I kind of figured out that I can do everything I was doing there basically within Claude Code and Claude Cowork. But when I was shutting down my OpenClaw this morning, I stopped myself. I didn't do it, but I almost sent a farewell message. I was like, you're losing your mind. So anyway, that's my aside.

Host: Paul Barnhurst (19:15):

Fired it today informally, not just

Co-Host: Glenn Hopper (19:18):

Informally

Host: Paul Barnhurst (19:19):

By cancelling.

Co-Host: Glenn Hopper (19:19):

That's right, I needed that. I didn't even go through HR, I just kicked it out. Again, I think you're nailing what a lot of people miss. I mentioned chain-of-thought prompting before, and what happens, if you're willing to accept sloppy work: just in the past few months, the horizon over which AI can go out and work without coming back to you to ask questions has gotten exponentially longer than it was even four or six months ago. But to your point, if it goes off and it's doing its own thing and you're not watching, and then you just get the magic result handed back at the end. You would never build a model that way yourself. Whatever the main drivers are, you would look at those first, and if you're going to start with the income statement and go down, you would start with your revenue, and then look at your COGS, and go through like that.

(20:16):

So to ask the model to, in one shot, go consider everything, it's not going to pay as much attention, I guess, to each of the subsequent steps. But if you do it like you're not handing this off and going to play golf and then coming back to get the perfect model, if you're a thought partner and a coworker with it, it could be better and much faster than if you had to build the model yourself. So I think that's a very important point that you just made. Maybe it's chain of thought again; it's just that the links in the chain get a little bit longer. But you still don't overshoot and try to boil the ocean all at once, is what I'm hearing in that.

Guest: David Ingraham (20:55):

Absolutely. And I do think, because the models are more powerful and they can take on more, it's a slippery slope. It's intriguing to be able to one-shot a model, and look, if you want to work in that size chunk, you can do it. But don't be disappointed when you don't like the way the revenue section was built, because you didn't tell it how to build the revenue section. You were just kind of going for it and hoping it was going to do a perfect job. Well, it's not. It doesn't know how you want it to work. So plan with it, go through it incrementally, have it check its own work three times before you actually look at anything. And then when you find things that aren't going to be perfect for what you want, which is inevitable, by the way, have it correct the problem. Don't correct it yourself.

(21:45):

That's where I see the advantage of the power in the models today. They can make amazing corrections now. You can have allocation issues, and it will not only find the allocation issues now, it will correct the allocation issues, and that's super impressive. And by the way, it does save time. If you're using AI correctly, my best guess on my own productivity is a 15 to 20x improvement in terms of how quickly I do things now. I just did a project for my company that, in my last company, took about a month and a half, and we paid a consultant $20,000 to do it. I did it in a day. I'm getting that efficiency gain routinely, and I definitely see that in Excel as well. So if you're not getting 15 to 20x, you should start doing some more research and reading and figuring out what techniques you could be using to get that.

Co-Host: Glenn Hopper (22:39):

Paul, is it just me, or does going through the steps of building a model with AI feel very analogous to raising children? You give it small incremental steps, you let it go and work those out, and you say, no, no, honey, you didn't quite get that right, go try again, and it will correct its own work. I don't know. To me that sounded very similar to my parenting

Host: Paul Barnhurst (23:00):

Style. Sorry, I can't relate at all. I just tell my daughter to get it done and she does it; you don't have to give her steps. Obviously that was tongue in cheek. No, I definitely think there's some analogies there. And so we've talked a lot about that whole idea that you need to spend more time with the model. What should finance professionals be doing to really step up, so to speak, here? Is it more of just learning how to better prompt and give context? Is there training you think they should be taking? How do they really get themselves ready to get that 15 to 20x? What advice would you offer them?

Guest: David Ingraham (23:38):

One piece of advice that I would offer is: look at what the engineers are doing, because like I said before, they're a year and a half ahead, and they've gotten extremely sophisticated in how they use AI to code. And it is very analogous to financial modelling. There are some things that are different too, but for instance, learning how to use subagents. When you write code now, if you're kind of at the forefront of AI, you don't just chat with one agent and say, hey, write this file of code for me. You send out 10 subagents. One's going to help you with the planning. One's going to check the agent that created the plan, to make sure that they can agree on a common plan. Another one's going to go do execution on part one, another one's going to do execution on part two. Then you're going to send three others, who are each experts in their particular field, to go check all of the code.

(24:37):

That's how people should be modelling today. Now, you don't have access to subagents in a lot of these tools, so you can't do that yet, but those are the kinds of techniques that people are using. And I can guarantee you that if you're sending in 20 subagents to go build an Excel model, you're going to get a better result than if you sit there and try to do it with one agent. So go look at what the engineers are doing, how they're pushing the envelope in terms of getting performance, and see if you can transition, or translate, that into working with Excel, which I think nine times out of 10 you'll find that you can.
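As a rough sketch, the subagent fan-out David describes might look like the code below. Here `call_model` is a placeholder for whatever LLM API you actually use, and the role and section names are illustrative, not any particular tool's API.

```python
def run_subagents(task, call_model):
    """Fan a modelling task out to narrow, role-specific subagents.

    call_model(role, prompt) stands in for any LLM API call. Each
    subagent is the same model invoked with a narrow role and a narrow
    slice of the work: plan, review the plan, execute in chunks, check.
    """
    # Plan first, then have a second agent critique and tighten the plan.
    plan = call_model("planner", f"Draft a step-by-step plan for: {task}")
    plan = call_model("plan-reviewer", f"Tighten and approve this plan:\n{plan}")

    # Execute in manageable segments rather than one shot.
    sections = ["revenue build", "cost build"]  # illustrative chunks
    drafts = {s: call_model("executor", f"Per the plan, build the {s}:\n{plan}")
              for s in sections}

    # Independent checkers audit each chunk before a human ever looks at it.
    reviews = {s: call_model("checker", f"Audit this {s} for errors:\n{d}")
               for s, d in drafts.items()}
    return {"plan": plan, "drafts": drafts, "reviews": reviews}
```

With a real API behind `call_model`, each role would also carry its own system prompt; the structure, though, is just plan, execute in chunks, and check, which is the same incremental discipline discussed throughout the episode.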

Co-Host: Glenn Hopper (25:16):

Thinking like an engineer. I've never worked as an engineer, but I've worked with a lot of engineers before, and I think because I was nerdy enough to talk to them about software products we were building and projects that we had going on, you start to absorb some of that, and it does apply. And truthfully, if you're not using AI, it's that same mindset if you are just building a model the old-fashioned way in Excel. So I think understanding what AI does, what its limitations are, and then taking that architect approach, or maybe not the architect approach, it's the general contractor approach: here's the Gantt chart, these are the things we're going to do, these are the specialists that are going to do them, and this is the sequence they run in. That feels like the new skill.

(26:01):

It's not so much about your great Excel formulas and skills in there; it's about managing your team of bots that can get the same outcome. Who cares what, Paul, cover your ears, I was going to say, who cares what formulas you use. But I think your understanding of and approach to AI is something where we all just have to shift our mindset. And on that, you recently posted about the difference between deterministic and probabilistic outcomes and the place for both of them. And I think that's an important thing to understand, because as AI has swept through the zeitgeist and everything we do, it's people who've never thought about AI before, who never thought about spam filters and regression and classification and clustering and everything that AI did years ago. AI is just a big blanket term, and it all acts the same. So when they say, oh, AI is not very good at numbers, or AI hallucinates, it's like, well, generative AI does X, Y, Z, but that's different than machine learning and other types of AI. So tell us a little bit about what you had to say about that and how that applies to HyperPerfect, because I think that's a perfect example. And I'm going to stop taking words out of your mouth before the answer, but I'd love to hear you expand on that a bit.

Guest: David Ingraham (27:27):

Yeah, so deterministic and probabilistic are engineering concepts, and deterministic just means you can, with 100% accuracy, predict what the result of something is going to be. That is not what AI is. If you ask it to do the same thing 10 times, you might get 10 different answers, and it will always be that way. One of the great things about AI, though, and I think where the future is, certainly for our product, is you can give AI access to tools that are deterministic. So let's say you work in the SaaS industry and you calculate SaaS metrics for your company, or for companies that you consult for. There's a bazillion different ways to calculate those things. You can create a tool that, every time you give it data, is going to do the exact same calculations. So don't ask the AI to build that analysis for you, because you're going to get 10 different answers if you do it 10 different times.

(28:23):

If you ask it to input the data into a tool that is deterministic, it's going to spit out exactly the same analysis every time, and it's very good at using those. That is where I think we want to go: to build a lot of those deterministic tools, maybe even on a custom basis for users. So that if, I don't know, they're doing a certain accounting analysis, or they want to fill out a very specific financial model every time, you don't actually have the AI inputting the data into the model. You have it turning the knobs on a tool that is going to put the information into the model the same way every time, and that is a limitless opportunity. You could spend years on it. I imagine there's going to be marketplaces of these tools. But that's what I mean when I talk about deterministic and probabilistic.
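A deterministic tool in the sense David means might look like the sketch below. The metric conventions chosen here are common ones, not HyperPerfect's: the AI's only job is to supply the inputs; the arithmetic is fixed, so ten runs on the same data give one answer.

```python
def saas_metrics(arr_start, expansion, churned):
    """Deterministic SaaS metrics: same inputs always produce the same output.

    One fixed set of conventions (net revenue retention and gross churn,
    both measured against starting ARR). An AI agent "turns the knobs",
    i.e. fills in the arguments; the calculation itself never varies.
    """
    return {
        "net_revenue_retention": round((arr_start + expansion - churned) / arr_start, 4),
        "gross_churn_rate": round(churned / arr_start, 4),
    }


# $1,000k starting ARR, $150k expansion, $50k churned in the period.
print(saas_metrics(1000.0, 150.0, 50.0))
```

Exposed to a model as a tool or function call, this turns a probabilistic request ("calculate our NRR") into a deterministic one: the model extracts the three inputs from the data, and the definitions never drift between runs.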

Host: Paul Barnhurst (29:16):

Well, I mean, we're definitely seeing a lot of that now, right? There's a reason they added coding into all these tools, because if you write something in Python, the answer is deterministic. It's just going to run that and spit out the answer, exactly based on the variables in the code. Whereas if you ask generative AI to do the same thing without using Python, who knows what you're going to get? It's kind of whack-a-mole bingo, so to speak.

Guest: David Ingraham (29:39):

But you can write Python programmes, tell the AI where the programme is and how to use it, and then all of a sudden you've got a deterministic outcome from AI, which is where I think it gets super powerful.

Host: Paul Barnhurst (29:51):

Agree. I mean, with gen AI, the average person can now produce code, which is deterministic when you're done, that they never could have written before. I used it to write a ton of code for my website recently, and two or three years ago I would never have been able to do that. I still had to do a lot of editing and cleanup, but it did pretty much all the code. I just had to clean up text and things like that.

Guest: David Ingraham (30:16):

Same thing with very basic Excel functionality. Like I was talking about: don't read the same formula a hundred times, because you're going to fill up your context window, which isn't going to help anything. You can build a lot of different deterministic tools, and we have, to say, okay, first of all, figure out what this is. If it's a profit and loss statement, attack it with this tool that we've built: try to figure out what timeframe we're talking about, what the different revenue accounts and the cost of goods sold accounts are. So we've started to give the AI a bunch of those tools that are specifically built for financial scenarios, so that it has the ability to move quicker but also in a more defined way.
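The "figure out what this is first" step David mentions can itself be a deterministic tool. A toy sketch, with keyword lists and bucket names invented for illustration:

```python
# Illustrative keyword lists; a real tool would be far more thorough.
COGS_KEYWORDS = ("cogs", "cost of goods sold", "cost of sales")
REVENUE_KEYWORDS = ("revenue", "sales", "income")

def classify_accounts(account_names: list[str]) -> dict[str, list[str]]:
    """Deterministic first pass over P&L account names: bucket them by
    keyword so the AI starts from a fixed structure instead of re-reading
    the same formulas and burning its context window."""
    buckets = {"revenue": [], "cogs": [], "other": []}
    for name in account_names:
        lowered = name.lower()
        # Check COGS first: "cost of sales" would otherwise match "sales".
        if any(k in lowered for k in COGS_KEYWORDS):
            buckets["cogs"].append(name)
        elif any(k in lowered for k in REVENUE_KEYWORDS):
            buckets["revenue"].append(name)
        else:
            buckets["other"].append(name)
    return buckets
```

Given the same account names, this always produces the same buckets, which the probabilistic model can then reason over.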

Host: Paul Barnhurst (30:59):

I think that's a great point to move into our next section. I think we've had a good conversation here on Excel, and the thing I'll just remind people is, look, we're all learning AI together. It's a journey. The key is you keep going, you find training, you find resources like you mentioned, look to engineering, learn about chain-of-thought reasoning, and don't just close it because, well, it wrote me a model and the model sucked. Like you said, if you go to your junior analyst and say, build me a model, with no context, expect the model to suck. AI isn't any different. So here's how our next section works. As we move on, we give AI your bio, the questions from today, the internet, your LinkedIn profile, and tell it to come up with 25 unique questions, kind of personal and a little quirky, because we want to have a little fun. Glenn and I each take a different approach. I give you two options to see which question you have to answer: we can use the random number generator and remove the human from the loop, or you can pick a number between one and 25, and I'll read you that question. So which one do you want, the random number generator, or pick the number yourself?

Guest: David Ingraham (32:11):

I love probability. So random number generator.

Host: Paul Barnhurst (32:13):

All right, here we go. Let's see what it gives us. Ah, came up with one. I don't think it's ever given me one before.

Guest: David Ingraham (32:21):

It sounds fishy already.

Host: Paul Barnhurst (32:23):

I agree. I could run it again. I mean, if you're one of those guys that want to let a

Guest: David Ingraham (32:27):

Vocal

Host: Paul Barnhurst (32:27):

Time, no,

Guest: David Ingraham (32:28):

Let's hear the question first and then when they answer that,

Host: Paul Barnhurst (32:31):

And this is Google Gemini I used this week.

Guest: David Ingraham (32:34):

Okay, good. It's

Host: Paul Barnhurst (32:35):

Been a little while since I used them. This is under the section titled The PE Veteran and Excel Trauma. There are five questions under this section. So number one says: you spent 20 years in private equity and worked on over $1 billion in deals. In all those years, what is the single most cringeworthy, and it did put that in quotes, formula or circular reference error you ever found in a high-stakes model?

Guest: David Ingraham (32:59):

I think this is answering the question, but it's a little different. I don't remember what the formula was, but we were looking at a deal once. One of the funds that I worked with invested a lot in data centres, which are actually big today with AI, but they were very large dollar investments because they're so expensive to build out. So the numbers were sometimes pretty big. You go through the investment banking process, you get the model and everything from the investment bankers, and I once found, on what was probably a $200 or $300 million deal, a $50 million modelling mistake. You basically had to put $50 million more capital than they were telling you into this deal to get the results they were telling you you were going to get. That was crazy.

Co-Host: Glenn Hopper (33:48):

That's not a small number.

Host: Paul Barnhurst (33:50):

What's 50 million among friends? Come on guys.

Guest: David Ingraham (33:53):

It was very significant, and they probably sent that same model out to 50 other private equity funds. That, by the way, is a mistake AI would've found very easily.

Co-Host: Glenn Hopper (34:04):

Great point. And that's, again, thought partner, right? Okay, so my approach is, I figure AI generated the questions, so let AI pick what the question is, and if it picks number 25, we're going to know there's some kind of weird alpha and omega thing going on here. Number six. Okay, number six is: HyperPerfect lets users talk to their data in plain English. If you could use that same technology to talk to your coffee machine or your car, what would you first prompt? That's not a bad question.

Host: Paul Barnhurst (34:40):

What's the first prompt going to be to that coffee machine or car?

Guest: David Ingraham (34:43):

Well, I actually just bought a new espresso maker two days ago, so I'm going to go with the coffee machine. I mean, I would just give it all the information of what I want it to do, and I'm just coming up to speed on this. I'm not one of those crazy espresso guys, but I'm trying to make a decent espresso. So first of all, turn on at 7:00 AM, get up to, I think it's nine bars of pressure you're supposed to get up to, set the dial on the grinder to get to that nine bars of pressure. It would just be back to context, giving it all the context of what I think needs to go in and not just what I want out, because I don't think we're quite at the point yet where you can do that. I think you've got to tell it how you want to get there to get the result that you want.

Host: Paul Barnhurst (35:29):

You're going to give it instructions, not just an output of, I want a really good-tasting espresso.

Guest: David Ingraham (35:33):

Well, I should revise that a little bit. I first would tell it what the ultimate goal is. Then I would ask it, how are you going to get to that goal, given that you've got up to nine bars of pressure that you can use, and all those other things? The other thing I would do, and this is another tidbit that people should be getting from engineering: don't be typing to your AI. You should have voice recognition software. If you don't already use it, you should go right out today and get it, because that really opens a door that allows you to give AI way more context. So I'd be talking to my espresso maker. I wouldn't be typing to it.

Co-Host: Glenn Hopper (36:09):

I love that answer, because I got one of those Breville machines that makes all the drinks for Christmas, and I spent the first three days fiddling around with that grinder to get it right. It's supposed to start brewing and coming out at eight seconds and be done at 25 seconds. It was maddening, and I would've loved to just have a bot and say, go fix this. Why doesn't it come from the factory like this, with all that tuning done? So there's your challenge, AI: make the perfect cup of coffee.

Host: Paul Barnhurst (36:40):

You both realise it's only going to be a few years till we start to see appliances with all that built in.

Guest: David Ingraham (36:45):

I'm excited. It'll make my life a lot easier: 15 more espressos for the same amount of work, I think that would be great.

Host: Paul Barnhurst (36:52):

Love it. Alright, well thank you so much for joining us, David. We had a lot of fun with this conversation. I appreciate you carving out some time for us.

Guest: David Ingraham (36:59):

It was a pleasure to talk to you guys.

Host: Paul Barnhurst (37:01):

Thanks for listening to the Future Finance Show. And thanks to our sponsor, QFlow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice, and may your robot overlords be with you.
