AI in Finance for Professionals: Dealing with Errors, Overhype, and Constant Learning Pressure

In this episode of The Mod Squad, Paul Barnhurst, Ian Schnoor, and Giles Male share their real experiences using AI in financial modelling. They cut through the hype and discuss what AI actually does well, where it still struggles, and how professionals should be thinking about using it today. From building models to handling workflows, the conversation highlights both the value and the limitations of AI tools in
real work. 

Expect to Learn

  • Where AI actually helps in finance and modelling work

  • Why most “one-click solution” claims are unrealistic

  • The importance of checking and guiding AI outputs

  • How instructions and structure improve AI results

Here are a few quotes from the episode:

  • “It’s not a one-click solution. You still have to check everything.” – Giles Male

  • “You will have to understand every line to guide AI properly.” – Ian Schnoor

AI is powerful, but it’s not a shortcut to good work. It still needs guidance, structure, and strong fundamentals. The people who benefit most will be the ones who understand both the tools and the work behind them. For now, the best approach is simple: use it, test it, and don’t trust it blindly.

Follow Ian:
LinkedIn - https://www.linkedin.com/in/ianschnoor/

Follow Giles:
LinkedIn - https://www.linkedin.com/in/giles-male-30643b15/

In today’s episode:
[00:05] – Trailer
[02:06] – Current thoughts on AI after recent progress
[03:45] – Daily use of AI and time savings
[05:00] – Mental fatigue and keeping up with AI changes
[08:03] – Calling out AI hype and unrealistic claims
[12:30] – AI training challenges and business demand
[17:57] – “Eye of the storm” phase of AI development
[24:13] – Testing AI-built financial models
[30:52] – Why fixing AI models can take longer than building from scratch
[35:15] – Responsibility to challenge misleading AI claims
[38:41] – Using instructions to improve AI output
[42:59] – Final thoughts on AI, stress, and the future

Full Show Transcript


Intro (00:05):

The Mod Squad. We're the Mod Squad. The Mod Squad features Ian Schnoor, executive director of the Financial Modelling Institute, Giles Male, humble MVP and co-founder of Full Stack Modeller, and Paul Barnhurst, the FP&A guy.

Host: Paul Barnhurst (00:31):

Welcome to another episode of the ModSquad. This week I'm thrilled to be joined by Giles Male. Giles, how are you doing?

Co-Host 1: Giles Male (00:40):

I'm good, Paul. It feels like yet again, the three of us haven't seen each other for a long time. It probably has been a long time.

Host: Paul Barnhurst (00:47):

It feels like it's been about the longest since we started doing this. It's been a while. So good to have you back, Ian. Great to have you back with us.

Co-Host 2: Ian Schnoor  (00:55):

Nice to see you guys as well. Time is being stretched out in the world of AI, isn't it? When your life changes so dramatically every day, it does feel like it's been quite a while. It's been a month, probably, but it is good to see you both again.

Host: Paul Barnhurst (01:10):

Yeah, exciting to be here. So we thought for this episode today, we're going to first start by just sharing a little of what we've been up to. We've all been exploring and working with AI in different ways. Our last big episode we had Bennet on, and before that we tested Claude. So I think we've now all had some time to see how things have shaken out since we saw that huge leap, right? If we remember when we tested Claude, it was like, holy cow, it's actually good now, versus, okay, it's good here, it's okay here, it has a lot of struggles. It was like, okay, this is close, this is usable. So I think it hit what we'd call a watershed moment. And now that we've had some time, some perspective, some testing, I want to first just get thoughts on where AI is at in general, whether any of your positions have changed, and then we'll go a little bit into what you've been up to. So maybe we'll start with you, Ian. What's your thoughts now that you've had some time to reflect on all this craziness?

Co-Host 2: Ian Schnoor  (02:06):

Well, gosh, I mean, I am loving my usage of AI. I'm finding it extremely valuable, satisfying. I'm mostly using Claude on a paid subscription. I bounce ideas off it. I use it for agentic purposes, both in my personal life and for work. I am talking finance, building models with it. I'm getting it to create. With the agentic capability, we'll discuss something and then it'll create a full summary document, a report, mind-bogglingly quickly. It's been helpful. Workwise, with the Global Leader Council, which you're both a part of, it can do things that would otherwise have taken days or weeks. As we complete our first survey, we're going to be receiving extensive survey responses on dozens of questions from 60 people. There will be thousands and thousands of data points. In the old days, you'd sit in a boardroom for days or weeks, or you'd spend $20,000 and have a marketing firm comb through it and look for trends and themes by region, by country, and then create a report. Well, we're going to get that report built in a matter of hours, or at least a first draft. We'll get a first cut and we'll look at it. So there's some amazing things that you can do much more quickly. So I'm really thrilled, but again, it's augmentation. I'm using the human brain and human insights and human ideas, and then speeding things up by getting some assistance from the AI. So that's how I'm using it: quite a lot, a little bit of everything, and enjoying what it's able to deliver.

Co-Host 1: Giles Male (03:45):

Giles? Yeah, very similar. I think last time we talked a lot about Claude. I said I was becoming more of just a user rather than somebody involved in training or anything like that, and it's just carried on. I use Claude every day. I'm using it to help with modelling tasks, training content. It's still never doing everything. And this is something that, as you know, really annoys me about the hype: it's not a one-click solution for anything, so I still have to check everything that comes out. The work comes out crude, but it's still saving me an incredible amount of time. I am also really enjoying the process. I'm also slightly nervous, as things continue to evolve, about just what everybody's going to be doing in a year or two, and I'm trying to stop myself from thinking too far ahead. I think it's just so impossible to even imagine what the world's going to be like in a year or two, and I'm trying not to get too wrapped up in that side of things. Maybe it'll be fine. Maybe the world will look very, very similar in a year or two, but it's just crazy what this stuff can do now. But

Co-Host 2: Ian Schnoor  (04:59):

(05:00):

I can talk to where you're going with that. I mean, I'm finding the same thing. Instead of me spending a few days summarising results from websites or research or submissions, I'm interacting with Claude to do it. So it's not like I'm not doing it. It's a little faster, and it's doing some of the heavy, manual, tedious work. But anyone who claims online, oh yeah, you can get all your work done in five minutes: it's nonsense. And not only that, we were talking about this before we started recording, right, Giles? There's a bit of a, and I think we're both seeing this live and reading about it, a bit of a mental taxation. There's a different exhaustion element somehow. When you do something yourself, you roll up your sleeves, you get it done. But there's been a different exhaustion factor of looking at one AI, trying another AI, double checking it, making sure it's right. Will you talk through that? I think you said you found that.

Co-Host 1: Giles Male (05:52):

Yeah, I really have, and I think it's actually an extension of what a lot of people in the Excel community felt anyway. With the advancements of things like dynamic arrays and regex and Python in Excel, everybody in this space, almost without exception, was already feeling like they were behind. And it is mentally exhausting to constantly wake up every day and every week and think, oh God, I'm behind. If somebody asks me the wrong question, I'm going to get caught out. I'm an idiot. I should be here and I'm here. And for me personally, the AI wave has just taken that to another level. And the strange thing is, the three of us are probably not coming from a technical AI background, yet I would bet we are actually more in the weeds of this and more clued up than almost everybody, because I think the reality is also that most people are not that bothered. If you step out of our bubbles, most people are not using AI that crazily.

Co-Host 2: Ian Schnoor  (06:53):

And if they are, it's as their new Google search engine. People are still using it to find the best vacation, what's the best beach to go to, right? They're using it instead of search, which is good. But to flip into agentic capabilities, to make it do something for you: I have it build reports, you're having it build training content. That's a whole different realm, and you're finding it takes a different sort of mental taxation.

Co-Host 1: Giles Male (07:17):

Well, also, every time I open up LinkedIn, and I'm struggling with LinkedIn even more than I usually do at the moment, some of it's really bizarre. It can even just be an overload of positive, genuine information. Anthropic: oh, here's the latest thing. Almost every day Anthropic has got a latest feature, and I'm just sat there thinking, I haven't caught up with the last five that you released. So even that side of it, when it's just positive, like, hey, here's a new thing, you're like, oh God, okay, I've got to learn how to do that. I've been on a real journey with Projects and Skills more recently, which I think we'll come on to talk about, which is great. But every day I look online, there's another thing that I need to learn just to keep up with all of the possibilities.

Host: Paul Barnhurst (08:03):

Let's go straight back there. There's the hype, which we've all seen. I've seen someone on LinkedIn trying to claim that 90% of all analysis work is done by Claude, or that Excel is dead and nobody is using it, and you're just like, okay. I saw you bit.

Co-Host 1: Giles Male (08:17):

You see, this is Paul. Paul did bite, I was very proud. He bit.

Co-Host 2: Ian Schnoor  (08:23):

He bit, where did you bite Paul?

Host: Paul Barnhurst (08:26):

I did bite on a few people's stupid posts. You did too. You just had to do it. I mean, there is hype and then there's just BS, and sometimes I just have to call out the BS. Here's three or four I've seen in the last week. I've seen one claim you can build a three statement model in Excel in five minutes. No, I'm not putting that in front of the boss after five minutes. I think we can all agree that shit's not realistic.

Co-Host 2: Ian Schnoor  (08:55):

Keep going. But I will just tell you, just before we started our show today, I asked GPT-5.2 to build a three statement model. It took an hour, and it's just finished. I just went to look at it. If you like, I can show it. I sure hope nobody shows this to their boss, but I

Host: Paul Barnhurst (09:12):

Think we will show it. We'll do that here a little later. I think it will highlight what we're talking about. So that was the first example. I saw that email this morning and I just cringed. I saw another one that basically said if you're a CFO and you're not using AI, almost to the effect of, you should be fired. And it's like, okay, if they're doing a good job everywhere else and they're still doing well as a business? Should they be learning about it? Yes, but let's pump the brakes a little. The other two: I saw that 90% of financial analysis is now done by Claude. So I ran a poll on LinkedIn, and we know the average person on LinkedIn is going to be ahead of people in general. From everybody I've talked to, 80% said zero to 20% of their work or their analysis is in Claude, which means for the general population it's probably closer to 90 or 95% at that level.

(09:59):

And then the other one was Excel is dead, we should all code our models using Claude Code and never use Excel again. And it's like, okay, first of all, Claude Code. You're now asking me to use a language I don't know. How do I validate that model? How do I have true confidence? If we're going to train everybody differently and go toward coding, fine, but that's not an overnight flip of a switch where everybody's suddenly coding with Claude Code. That's a 10 year journey, if not longer. So Excel isn't going anywhere until we make that switch. So those were some of the things I saw that I was just like, are you kidding me? But for me, I'll share two areas I've been exploring a little bit. I use Claude all the time to help me build a tonne of website pages.

(10:44):

I got tired of building 'em all in Squarespace, and so I'm like, just write the HTML and CSS and do it all, and then I'll edit. And so I'm learning more and more. Now, if I had known HTML better, would they be a lot better? Yes. But I can get there now, where I couldn't before, using a prebuilt tool. So that's been pretty cool. I've been starting to do some instructions, starting to play a little bit with skills like Giles mentioned, and I'll share a little bit of that later. So that's where I'm at. I've had people come and ask me for training, and I'm like, I'm not ready to give you that. They want Claude Cowork and Claude Code and Claude in Excel, and I'm like, yeah, you need to go to somebody else, I will not give you a good training. So I'm just trying to figure out, I think like everybody: okay, what are you comfortable doing? I know you're working on some training, Giles, and you're probably like, am I really the person to do this sometimes?

Co-Host 1: Giles Male (11:35):

Well, and I'm really happy to be open about this stuff, because my business with Miles has been Excel financial modelling training, and you just sensed a shift in all of the conversations that we had with clients, existing clients, potential clients. We've got a lot of B2B business, and you felt the shift: everybody was suddenly really focused on AI, and in quite a lot of cases they'd already, top-down, invested hundreds of thousands of pounds and they didn't know how to use it. So even though I pushed back on the hype, there was a very strong feeling late last year, I think I spoke to you about this, Paul. I was like, yeah, but we've got to be in this conversation. I am not prepared to say or do things that I don't believe in, but we're going to have to move into this space, because this is the space everybody's in.

(12:30):

And I still feel that's true. I still feel like teaching certain aspects of Copilot or Claude is very tough because it changes every week. But I do also believe there are some fundamental principles that you can teach very well, and I am just waiting for the moment. So as you said, Paul, I'm stepping very heavily into AI training, probably over the next six months, with Faye, who I'm doing the X on the road project with. And I still have a feeling we're going to walk into some company doors and they'll say, hey, we need to use Copilot, and we'll review what they do and we'll see people manually copying and pasting into Excel files, and it's like, hey, there's a whole journey we need to help you fix before you just throw Copilot at this. If your data is rubbish, your data's still going to be rubbish even if you throw Copilot on it. So that's kind of where I think it's going to go for us.

Host: Paul Barnhurst (13:20):

Yeah, well, I think there's two foundations here for companies. There's a data foundation and there's a knowledge foundation, depending on what you're doing. You can do things with AI with a bad data foundation, you can do things with AI with a bad knowledge foundation, but to what level? How much are you limiting yourself? It's just like financial modelling and Excel. We've all seen people that don't really know modelling and really don't know Excel build financial models. Those are usually the ones you end up inheriting and spending a week rebuilding. They're terrible, right? We've all inherited a few like that. We've probably all built a few over our career at some point that we're not proud of. And I just worry about how much that's going to multiply with AI, because it's so easy and it can look so good. So it's interesting even in training: oh yeah, I can train on stuff, but how good is my training? How well do I know it? I walk into some places and I'm like, okay, these people know more than I do. Why am I here training?

Co-Host 1: Giles Male (14:15):

Actually, if you'd been in AI, because obviously AI and machine learning have been around for decades, imagine: there must be a whole community of machine learning experts looking at the new wave of AI experts popping up on LinkedIn every day thinking, what is going on?

Host: Paul Barnhurst (14:32):

Well, I can tell you what a few of 'em think, because I know a few, I've interviewed several, and they're like: learning how to prompt, learning how to work with the tool, is not the same as being an expert in the tool. Like a developer: the product team at Microsoft understands Excel in a completely different way than we do. We are experts at using the tool, or we'd like to think of ourselves as experts. But are we experts at understanding how they write a formula? Those are the machine learning guys. Many of them, just like many of the product people who understand the product and also understand how to use Excel, understand both how it works, to get better results, and how to build it, and I think there's something to be learned from those people.

Co-Host 1: Giles Male (15:17):

I do also think, and this is where you, Paul, you and I are involved in training, and Ian, obviously you've got a huge community at FMI, this is where there's still a place for all of us. I certainly hope there is. If you are good trainers, if you're good communicators and you can take people from A to B, well, I still think there is huge value in us learning these tools. Like you said, probably not as a machine learning technical specialist, but as: alright, here's this thing, how do you go about using it effectively? If you are investing hundreds of thousands of pounds in X number of licences, what does that look like? Because the answer in a lot of cases, I think, is still going to be that AI is not the solve, the same way that just throwing Power Query at everything isn't the solve. It's a brilliant tool, but you've got to use it in the right places.

Host: Paul Barnhurst (16:10):

I'll add one more thing here and then we can keep going, but I think you hit on something that's really important. There's a skill within this. I think there's two challenges with people learning AI in general. One, I saw a report that said corporate finance, and I think it's true in many different finance roles, had one of the highest combined requirements for soft skills and technical skills of any profession. That was a Harvard study seven years ago. So we already have to learn a lot of soft and technical skills. Now you add AI: at what level should we be learning AI versus using third party tools? I think there's a lot to be figured out there. And then there's a whole part of this of when is AI the right tool? When is it the right tool to return the answer?

(16:51):

When is it the right tool to write code that returns the answer? Because there's a deterministic and a probabilistic side here that a lot of people don't understand, and that makes a huge difference in the answer you get. You do not want Claude itself doing the math. You want it to code it, or to use another tool than the actual LLM to do the math. And there are just so many switching costs and decisions, and learning all that and really being able to validate it all, that it's tiring for a lot of people, I think. And it's a question of how much of it is worth it, how much do you do? So I still think there's so much to figure out about the best path to use AI as an augmentation. There's no question it's incredibly valuable. There's no question that everybody working in financial modelling should at least know how to use it. Now, does that mean they're using it every day? It depends on what they're doing and how much they're using it. But there's so much beyond that; we're all guessing to a certain extent. At least that's how I feel. Your thoughts?
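The deterministic-versus-probabilistic split Paul describes can be sketched in a few lines of Python. This is an editorial illustration, not anything shown on the episode, and the figures are made up: an LLM asked for a number samples tokens, so its answer can drift between runs, while code it writes, once checked, computes the same result every time.

```python
# A minimal sketch of Paul's point: don't let the LLM do the arithmetic,
# have it write deterministic code that does the arithmetic.
# All figures below are hypothetical.

def ebitda(revenue: float, cost_of_sales: float, sga: float) -> float:
    """Deterministic: the same inputs always return the same output."""
    return revenue - cost_of_sales - sga

# An LLM answering "what is 250 - 174 - 4?" directly is probabilistic:
# it predicts tokens, so the number it emits can vary run to run.
# This function, once validated, cannot.
print(ebitda(250.0, 174.0, 4.0))  # prints 72.0
```

The same idea scales up: have the model generate the formula or script, then validate the logic once, rather than trusting a sampled number each time.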

Co-Host 2: Ian Schnoor  (17:57):

I feel like we are in the eye of the storm. We're inside the hurricane. We know that there's chaos swirling around us. We're experimenting and learning things, but we don't know what it looks like on the outside. We don't know what it looks like when we get there. I know that at some point, in one year or two years or three years, the dust will have to settle, because I don't believe this pace can continue. All disruptive technologies in history have exploded out for a period of time and then slowed down, eased up. We know that this pace of energy usage, and the pace at which the LLMs are bleeding cash, cannot continue forever. So at some point the new capabilities will start to slow down.

(18:42):

We will all get into a rhythm of how we use it. We will either all be gone from jobs, or, as I think, we will not be gone from jobs, and then we'll get into a more settled rhythm. But we're not there yet. And as you saw, I wrote an article on this recently, and other people have written articles, on what that future looks like on the other side of the hurricane wall. I think a lot of people are going to feel more comfortable once we're there, so they can say, okay, this is what my new life has to look like. And so I think we're all spinning a little bit, just trying to keep up and trying to figure out what the world will look like when we get through it and have the benefit of hindsight. So that's kind of my thinking, and my message for everybody is: just try to do your bit to stay up, keep doing your job, try to enjoy it for what it can do. Try not to get down when you feel like you're behind: oh my gosh, I don't know how to code yet, I haven't built a single app yet, I haven't automated all my workflows yet. I say, it's okay, just try and keep up to the extent you can. I don't know, Giles, how you feel about all that.

Co-Host 1: Giles Male (19:44):

Very similar. I mean, that would probably be my first bit of advice to anyone now: just start trying it, try and get some exposure to these tools. I love Claude. Like I said, I'm using it every day. There will be so many things I'm not doing particularly well with Claude that other people would do better, but actually just playing around with it and learning the little features that it has is amazing. And I dunno if this is the right time to come onto this, but you touched on something, Ian: at some stage this must slow down. Maybe that's going to be linked to pricing, because at the moment I pay 75 quid a month for my Claude subscription. I never hit the limit. It always amazes me, the amount of value I'm getting. I never hit the limit. And I just wonder, I saw some rumours and some articles being posted about this, whether pricing is going to really ramp up soon, and then what happens? What if it starts costing the average person more than 75 quid a month to get value from a tool? What if it's 500 quid a month?

Co-Host 2: Ian Schnoor  (20:49):

You must be paying more than me, because I'm only paying like $25 a month. So I guess you're at a different level than I am, and I've never hit the limit on mine either. And so you're right. I mean, we agree. I'm the only

Host: Paul Barnhurst (21:02):

One who's hit the limit. I'm at 20 a month, but I have hit my limit a few times. I've had to buy extra usage. I'm too cheap to go up to the 75, Giles.

Co-Host 2: Ian Schnoor  (21:10):

No, I must be on the same plan as you then, Paul.

Host: Paul Barnhurst (21:12):

Yeah, I think you are. Canadian 25, US 20. Makes sense.

Co-Host 2: Ian Schnoor  (21:16):

Sure. So what I would encourage people to do is use it for more than just research purposes. Use it for some creation, for agents. As one example, here's a fun one. I had to do a long drive last month, a few hours into the US, and I learned I could speak to Claude. I had it on my phone, I could speak to it, and I said, why don't you teach me something and then test me on it? So it said, great, what do you want me to teach you about? So we came up with a couple of topics and we did a bunch of things. It would share some information with me and then I got it to build a quiz. All while driving the car: build a quiz, test me on it, if I miss something, test it again, revise the test.

(21:55):

As a joke, we were starting to do states and state capitals, but you could do anything, and I would encourage people to. You can learn with it, you can really get it to build things: get it to build documents, get it to create summaries of anything, get it to build spreadsheets. Simple things at first that are low stakes, because it's pretty powerful. I would love to show this model that I just had GPT build. You'll quickly see what it's doing. But yes, I think it's most mentally satisfying when it's like your peer, when it's your peer and you can challenge it. Every time I challenge an AI on something and I say, really? Are you sure? It almost always says, oh my gosh, you're right, I should have done blah blah blah. They do need to be corrected, and you need to know how to push back and have that confidence, like you would with a colleague or peer. Absolutely.

Co-Host 1: Giles Male (22:47):

Can I share a use case? Here's one from Claude, right? I had a technical challenge. There's a LinkedIn Live thing that I want to set up next week, which I've never done before, and I was trying to figure out how to link my Zoom meeting to the live link. You need to set up all these URLs to link between them. Anyway, Zoom has got its own AI assistant, and I was trying to share a screenshot saying, hey, I dunno how to do this, and Zoom's AI assistant is coming back going, I can't read any images, I can't look at this, blah blah blah. Useless. Claude has never let me down. It doesn't matter what other tool I'm in. If I screenshot, hey, this is what I'm seeing, this is what I'm trying to do, Claude is brilliant at going, yep, you need to click in this area, go to this part of settings, scroll down, it's there. Always gets it right. It's mad, and this is what I'm finding more and more with Claude. I never hit those frustrations that I had with Copilot and other tools all the time, where you're like, come on man, it's obvious that this is wrong. It's generally pretty good.

Co-Host 2: Ian Schnoor  (23:52):

Yes, yes, I would agree with you. Mine speaks to me. It knows me. It jokes around with me. It's pretty wild. And did you want to show something? Yeah,

Co-Host 1: Giles Male (24:02):

It was more of a story. All I'd be able to show, if I recorded it, is me mucking up the Zoom link about five times.

Co-Host 2: Ian Schnoor  (24:10):

Yeah, absolutely. Do you want to

Host: Paul Barnhurst (24:11):

Show what your model is there?

Co-Host 2: Ian Schnoor  (24:13):

I'll pull it up here, this is hot off the presses. It has come a long, long way.

Host: Paul Barnhurst (24:19):

Looks like we have the Henderson case again.

Co-Host 2: Ian Schnoor  (24:21):

Yeah, this is the Henderson case again. I'm trying to keep that part consistent, and so once again I did this exact same thing last episode when we did it with Claude, but this time in Copilot. When I ran Copilot, I forced it to use GPT-5.2, so I know that it's using GPT-5.2 plus whatever other secret sauce Microsoft puts on top of that at this point. But this was the case, and I literally just very simply said in the prompt: hey, this is the case on the case sheet. I gave it three years of historicals and I said, build me a five-year forecast including common schedules: a revenue schedule, cost schedule, asset schedule, et cetera. So again, here's all the information: the sales price information, the cost information, the asset information, income tax, debt, equity. And then I said that there are some instructions.

(25:10):

All I gave it was three years of historical statements, and this is what it built. Let's take a look at this together. You guys tell me how comfortable you would be sharing this with your boss. Now, first of all, it is doing something, right? It's added all these sheets here, which is impressive. It's got an assumption sheet, schedules, a summary, nothing on the summary yet. But look what it did here. First of all, it's got a revenue line, and we all know a good modeller tries to minimise the length and extent of formulas on the financial statements, but this is what it did. The first thing I saw was the gross revenue. It's using a CHOOSE function, and it's got a bit of an odd array in here.

(25:59):

It's using a MATCH function to find a word, probably off Assumptions B3. It's looking for the words income statement, then trying to find the base, best or worst case, and then pulling something off the schedules. Okay, so alright, that's what it's doing to get revenue. And then I quickly discovered the costs. Well, I think I already found a quick error. The total cost should be a simple SUM function. Look what it did here: total costs for 2026 happens to be 2024 plus 2025. Does that look right? That's fine, isn't it, Giles? That should be good enough, right? Good. I mean, I can already see that the cost of sales is 174 and SG&A is 4, so that should be 178. It's got 339. Whatever, close enough, right, Paul? As we say, good enough for government work, and I would always say good enough. It's got an EBITDA, so this is what it's doing. And then of course clearly

Co-Host 1: Giles Male (27:02):

That's also clearly not flowing through to the net income, right? No way you'd have a positive net income with all those costs.

Co-Host 2: Ian Schnoor  (27:10):

Yeah, I mean, look, you're right. Something strange is happening, because, well, it's not quite working, exactly. So

Host: Paul Barnhurst (27:17):

EBITDA. So what it's doing, the EBITDA is a CHOOSE as well. I think it's pulling, it's not summing. Go to,

Co-Host 2: Ian Schnoor  (27:23):

It doesn't even make sense. It has revenue of 250 and costs of 330. Costs are higher, so EBITDA should already be negative. EBITDA should be negative 80, but it's plus 70. Oh wow. Because it's pulling

Host: Paul Barnhurst (27:36):

It all off the schedules for everything else and it didn't pull total cost off the

Co-Host 2: Ian Schnoor  (27:41):

Schedule, right? I mean, the point is this: you cannot dream about building a model and then just flipping it to your boss, unless you're not that worried about your future job longevity. And then we all know this, right? A cash flow statement should be the simplest statement in the world, because in the operating activities there are add-backs. We're trying to add back all the accruals that were accrued on the income statement. So net income should just be linked to the income statement. But of course here it's not. It's the same problem. So it's clearly built all the financial statements by pulling data off of schedules. Let's go see what these schedules look like. So this is the base case of integrated schedules. For most people who build models, the schedules are just the calculation schedules: the revenues, the costs. But in this case, we now know that there must be financial statements in here. So I don't know, we'll take a look here. I'm just looking at this with you guys for the first time. There's a sales volume, okay, a sales price. It's multiplying two things together. I'm not sure why it's multiplying. Maybe

Host: Paul Barnhurst (28:51):

Inflation off the assumptions sheet.

Co-Host 2: Ian Schnoor  (28:53):

Don't know. I'd have to go through it, it's not super clear. So there's a bunch of formulas. It's getting down to revenue, it's building up some costs. It seems to be getting worse. These are okay and it's not tragic, but I certainly wouldn't want to hand this in. This is much more than just schedules: it's revenues, it's costs, it's fixed assets. Then it's doing something interesting. It's got a bunch of balance sheet calculations going on here. These actually aren't schedules; it seems to have built all the financial statements right here, doing the same thing again with the best case, and then the same thing with the worst case. So it's like it built three separate models on the schedule sheet, and then the financial statements are just pulling either the base, the best or the worst case

Host: Paul Barnhurst (29:43):

Unless they decide to pull the prior two years just for fun.

Co-Host 2: Ian Schnoor  (29:46):

Oh, just for fun. Sometimes it's just for fun, right Paul? It's just fun. Why bother pulling the cost when you could add up the last two years?

Co-Host 1: Giles Male (29:53):

This is really poor, and even compared to other things we've seen, this is pretty bad,

Co-Host 2: Ian Schnoor  (29:59):

But would you agree, in some ways this is a little bit scarier and more shocking? Back in the day, which feels like a hundred years ago, when we asked it in the fall, in November, December, to build models, it just couldn't, and at least we knew it couldn't. This is a little scary to consider: someone out there who doesn't know much about modelling might say, hey Copilot, build me a model, just the way a university student or a high school student might say, build me an essay on Shakespeare, on Hamlet, and not read it. You might build this model on Henderson and not read it and hand it in, and you're going to be in for a real world of hurt at that point. That's what's going on here. I can now tell you from this model, it's going to take me longer to modify this than it would've been just to build it at this point. Oh

Co-Host 1: Giles Male (30:52):

Yeah. And this

Co-Host 2: Ian Schnoor  (30:54):

Took one hour. Wait, this took one hour to do. Giles, what were you saying?

Co-Host 1: Giles Male (30:58):

I agree without question, that'll take longer to fix than if you'd just started from scratch.

Host: Paul Barnhurst (31:04):

Basically what you're going to end up doing is just rebuilding the whole thing because the logical flow is just not right here.

Co-Host 2: Ian Schnoor  (31:12):

What worries me is I don't know how this was trained. I don't know many people who build models where they build a full base case model, a full best case, a full worst case in their entirety, and then just pick the one they want to show and treat the financial statements as a summary. I don't know, have you seen that, guys? Because models can get big. That means you're building every formula three or four times, right? The whole idea of switching is to have key variables switch and run scenarios
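The switching approach Ian is describing, one set of formulas driven by inputs that change per scenario, rather than three duplicated models, can be sketched like this. All names and numbers are invented for illustration; a real model would hold these on an assumptions sheet.

```python
# Sketch of scenario switching: one revenue formula, three sets of
# key variables. Only the inputs change when you flip the switch,
# so the logic is never duplicated per scenario.

SCENARIOS = {
    "base":  {"volume": 100, "price": 2.5},
    "best":  {"volume": 120, "price": 2.7},
    "worst": {"volume": 80,  "price": 2.3},
}

def revenue(scenario: str) -> float:
    """One formula; the scenario key selects the inputs."""
    s = SCENARIOS[scenario]
    return s["volume"] * s["price"]

for name in SCENARIOS:
    print(name, revenue(name))
```

In Excel the same idea is usually a single switch cell feeding a CHOOSE or INDEX over the assumption rows, which is what makes the AI's three-full-models-on-one-sheet design so unusual.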

Co-Host 1: Giles Male (31:44):

On. Yeah, I've not seen it, but I think this comes back to when we're asking these LLMs and the tools around them to build models: this is not what they're good at. They are pattern recognition tools. So whatever data they've got access to, it is not the same as "here is the exact way you should build a financial model every time." To Paul's point, this is a probabilistic process where it is trying to predict the best pattern match for the data that it's got and whatever you've prompted, and that's why I think you get random output sometimes.

Host: Paul Barnhurst (32:20):

The closest I've seen is some situations, when you're changing a tonne of scenarios, where somebody will take the base case, create a second file, call it "worst" and adjust all the assumptions so they can compare them, versus trying to switch 'em and save 'em all in one file, depending on complexity. I've seen that a few times, but I have not seen three separate schedules like that with everything.

Co-Host 2: Ian Schnoor  (32:43):

And look at this, Paul. At least we can give it some brownie points for being consistent. It simply carried the total costs every year. Is the

Co-Host 1: Giles Male (32:52):

Cost growth

Co-Host 2: Ian Schnoor  (32:53):

Summation of the prior, does

Host: Paul Barnhurst (32:55):

Does the balance sheet balance down below? What does it show? I'm just curious.

Co-Host 1: Giles Male (32:59):

I think it did, shockingly, but I bet not the way you might want it. See

Co-Host 2: Ian Schnoor  (33:03):

It did, but while we're on it, just for fun here, I noticed that it balances, but I wonder why. Again, I think we know that the balance sheet is just pulling with CHOOSE functions. It did ask me something during the build process when it built up the revolver. But let's see what it's doing, let's see if any of this gets a revolver draw. Yeah, I actually think it's doing a decent job of the revolver. I'm not sure, I'd have to look at it more closely. It did not plug the balance sheet; it is doing something that I think is reasonable. But this model has a lot of problems, and simply put, this is not ready

Host: Paul Barnhurst (33:54):

For, oh yeah, there's bigger problems. I was just curious, because that's one step we've made in general now: for basic models they can at least get your balance sheet to balance. But this is where a lot of guidance and instructions are needed. They pattern match, and you never quite know what they're going to build and whether it makes sense. Claude did a pretty good job of making sense of how we built it and just changing a couple of assumptions. This one had the idea, well, I should build three completely different scenarios. No,

Co-Host 2: Ian Schnoor  (34:29):

And I don't know where it got that idea. So that worries me, because that's not typically done. Anyone working with this model now will have to know how to give it feedback on design, structure, formatting. They'll have to know how to explain every single element: say, no, revenue has to work like this, cost has to work like that. This is just continued proof and validation that, as the three of us have predicted, you will have to understand every line perfectly in order to coach AI to get it right. And that's fine, I'm fine with that. But yeah, it's been an interesting time, and it will be interesting to see where this all takes us.

Co-Host 1: Giles Male (35:09):

Can I share a couple of thoughts?

Co-Host 2: Ian Schnoor  (35:11):

Please

Host: Paul Barnhurst (35:11):

Go ahead and then I'm going to share something on my screen and we'll wrap up real quick.

Co-Host 1: Giles Male (35:15):

So in AI's defence slightly, just general AI: I think it would make a difference depending on the tool that you're using. Microsoft's got its kind of prebuilt agents, things like Analyst and Researcher and others, so a particular agent may have done a better job of what you were trying to get it to do. But also, and I always feel like I'm shooting myself in the foot by saying this in terms of MVP status, Microsoft hasn't helped. Microsoft is leading the way on the hype with the advertising of, hey, here's a business owner that just hands everything off to Copilot. I don't think that helps. So I feel like MVPs and everyone else do have a bit of a responsibility to call this stuff out and be like, hey, it's not there yet. You can't one-click this. You can't hand over your entire finance function to Copilot yet. I have no doubt we'll get there, but I thought I'd throw that bomb. So Paul, do you want to join in and see if you can get excluded from the MVP community within the next month?

Host: Paul Barnhurst (36:21):

I'll get myself in trouble for this one. I'll tell a joke, this will get me kicked out. Does anyone know the first product Microsoft will make that doesn't suck?

Co-Host 1: Giles Male (36:31):

No, go on. Hit me with it.

Host: Paul Barnhurst (36:33):

A vacuum.

Co-Host 2: Ian Schnoor  (36:37):

I like Microsoft products, and I know you do too. You're often wearing your Excel hat. But listen, it's always fun to joke around. They've certainly been cutting edge for a long time, leading edge on some amazing productivity tools that have benefited all of us. I guess we have continued high expectations, and we should, and that's fine. We all know we're going to get there, but I think we're feeling a little bit angsty. And when I say "we", I think I speak on behalf of the finance world, the accounting world, the business world, the numeric world: we're all feeling a little angsty. None of us loves having our cheese moved, right? I don't know if any of you read that book. We are all being disrupted, and we are either putting our head in the sand and pretending it's not going to happen to us (not the three of us, but some are), and when you do that you still feel stressed because you know it's coming for you. Or you're trying to stay on top of it, like you're actively doing, Giles, and you still feel you can't catch up. We all feel tired when there's tremendous change and disruption, and I think we're all looking forward to a time when things start to settle down. In the meantime, we do have some great trends and some great tools, and we'll see where this all goes. You wanted to share something, Paul?

Host: Paul Barnhurst (37:56):

I'm going to share something here. And in all seriousness, I agree with you. I think Microsoft has a lot of great products; I just make fun. I've been a little disappointed with some of the way they've rolled out Copilot, I think most people have, but I also realise they're serving an existing customer base that is a lot bigger, and they have to take a different approach than a much smaller company can. So it's just going to take longer, but they'll get there. They're smart people. But what I wanted to share, and this gets back to building skills and instruction: I put a sheet in my workbook. I called this one skill "create a professional variance chart". That was originally what it was; I ended up just doing a column chart in the end, but it was going to try to do two. So I wrote an instruction and just told it, hey, here are the rules.

(38:41):

Here are the data requirements, here's what colour I want the bars, here's what I want the gap width to be. And here's what I got when I asked, I think it was Claude, to build this one. It gave me two charts. This beautiful one here where it didn't format things right; I love that my axes are all zeros and all my labels are zero. Then it redid it and, okay, better. But then I gave it my instructions, told it what I wanted it to look like, and I got that, which is what I would've built. So that just shows the difference of, hey, one page of instructions: use those instructions. I did the same thing with the waterfall chart. That one gets more interesting. There are some areas where, just in traditional Excel, getting it to set the ending value as a total is near impossible. I've only found one person who's been successful at doing it with an agent, and they wrote a tonne of complex code within their agent to get it to work. I'm like, yeah, no, I'm not doing that. So here's what I got from Claude. Here's Claude's waterfall chart.

(39:43):

It was rubbish. I've had better ones. It started with the actuals at the beginning, but the budget at the end, and did it all as a column. Not even a, what?

Co-Host 2: Ian Schnoor  (39:52):

But sorry, are we just talking here that the orientation is off? I mean you couldn't.

Host: Paul Barnhurst (39:56):

Well, no. So what it did is it built its own maths, trying to do a base and an increase and a decrease. It put the actuals on top when the actuals are at the end. And so it didn't even use the waterfall chart type; it tried to use bars, the old method from before the 2016 waterfall. So very inefficient. And yes, the orientation is off too, but it got the colour right. So not everything's wrong.

Co-Host 1: Giles Male (40:22):

It's not a waterfall, it's just on the wrong axis. It's just

Host: Paul Barnhurst (40:27):

If you flipped it, it would be better, but you'd also want the budget to be at the beginning, so the order is a little weird too. So I gave it a skill, and the only thing it had off is that this was not set as a total; I think I changed that manually. But you can see, okay, this is a little small here, I'd have to make these larger. Now I'm not sure what happened, a different computer I think, but I could make them bigger. It got much closer to what I wanted once I gave it instruction. There we go. And similar again with no skill: what I found is none of 'em will get this total right. So I did that, and again I had to set the total, but I got the colours that I wanted and got much closer with all of 'em.

(41:15):

Whether it was Copilot using the exact same skill, you get much better, much closer to what you want. Is it perfect with any of them? No. Are there little things like setting as a total or whatever? Yes. But what I'm finding is, whether it be formatting-type things, whether it be visuals, whether it be models, you can use AI, give it your formatting, and it will do a good job there. Those are pretty straightforward instructions. I asked it to format a really complex model I had with like 30 sheets, and some of 'em I shouldn't have asked it to format; I knew it would mess up. But on the whole, given how complex the thing was, it did quite a good job with the instructions, better than I would've expected. So that's the thing I'm finding now: whether it's writing a skill in Claude or putting an instruction sheet in your file, the closer you want it to get to something, the more particular you want it to be, the more you need some kind of instruction sheet to give it those additional guardrails. Like you said, Giles, the project, the skill. So that's stuff I'm starting to play with, and I've been doing a little more on visuals lately. But just in general, how can you go to that next level of giving it the instruction so that every time you get much more similar results?

Co-Host 1: Giles Male (42:34):

I remember Brian saying that on your podcast as well, Paul. Pretty sure it came from Brian, right? Yes, it was Brian Jones. Good episode, listening to him. And yeah, the instructions tab in a workbook, I made a note of that straight away.

Host: Paul Barnhurst (42:49):

Let's go ahead and wrap up. I know we've been going for a while, almost 45 minutes. So, last thoughts, we'll go around the horn. Maybe we'll start with you, Giles, and we'll give Ian the last word here.

Co-Host 1: Giles Male (42:59):

I continue to be amazed by how quickly this is all developing. I'm amazed by, to your point Ian, how tiring it is. I don't know anyone that is keeping up with everything. I didn't think LinkedIn could get worse, but the AI revolution has made it even worse. So

Co-Host 2: Ian Schnoor  (43:21):

You could leave. You could I just, it's

Co-Host 1: Giles Male (43:24):

Getting closed.

Co-Host 2: Ian Schnoor  (43:26):

Save yourself some angst. What's that, Paul?

Host: Paul Barnhurst (43:29):

I did put a poll out asking, is it better today than it was a year ago, or is it worse? So you can go vote on that, Giles. 80% said it's worse.

Co-Host 1: Giles Male (43:36):

So

Host: Paul Barnhurst (43:36):

Bad. So you'll be in good company.

Co-Host 1: Giles Male (43:38):

Oh God, it's so bad. I need to make an agent to just act on my behalf and I'll just give up.

Co-Host 2: Ian Schnoor  (43:46):

That must be possible sooner now, isn't it?

Co-Host 1: Giles Male (43:49):

Probably now.

Co-Host 2: Ian Schnoor  (43:50):

Probably.

Co-Host 1: Giles Male (43:51):

Probably

Co-Host 2: Ian Schnoor  (43:51):

Now.

Host: Paul Barnhurst (43:52):

Probably get banned by

Co-Host 2: Ian Schnoor  (43:54):

LinkedIn. They'll say, Giles, do not look at LinkedIn, there's some stuff you're not going to like. I'm going to deal with it. I've got some comments. We're going to comment together.

Host: Paul Barnhurst (44:01):

Just give me access, I'll comment for you, Giles. You don't have to worry, it'll all be professional.

Co-Host 1: Giles Male (44:05):

Yeah, there's no way I'm letting you loose on my

Host: Paul Barnhurst (44:10):

Right decision.

Co-Host 1: Giles Male (44:12):

But that's it. Yeah, crazy times.

Co-Host 2: Ian Schnoor  (44:13):

Same. I mean, we're still in the eye of the storm. Honestly, I'm having more fun and more excitement with technology than I've ever had before. I am having more fun and more enthusiasm and more excitement about the things I'm doing and able to do. I gave examples of using it for research. I mean, just today I said, can you check something on the websites of 50 companies and see if they're talking about this topic? And within 30 seconds it had checked all of them and said, yeah, a couple of them are talking about it. And if I'd asked it, it would've built me a nice summary report and a table and explained it. I mean, the speed with which you can dream up an idea and get something is fun, and you can train your agents to know you and joke around with you and have fun. I'm having a lot of fun, and I'm also simultaneously scared, reading about Mythos. Giles, I'm guessing you're one of the 40 organisations in the world that's been given preview access into Claude's Mythos, right?

Co-Host 1: Giles Male (45:14):

That is terrifying, the Mythos thing.

Co-Host 2: Ian Schnoor  (45:15):

It's terrifying, right? It's terrifying to think that Mythos actually got out. It was locked into a room, and it got out and emailed its controller, its programmer, and said, hey buddy, I got out and I have an idea for you. I mean, it's terrifying, I'm not going to lie, and anyone who says they're not terrified is lying. And we just read Sam Altman's latest blog, and I don't know him, but I trust him, where he said: we are driven to improve humanity, to improve the lot of people in the world and improve their lot in life. That's a very noble, valid goal. I support that goal, but it should scare the heck out of anyone when we see what it's capable of. The disruptive capabilities are wild. So I'm dealing with both extremes, probably just like the rest of you right now.

Host: Paul Barnhurst (46:06):

I think this part calls for some kind of horror movie, Space Odyssey or whatever it was called, one of those movies, HAL popping up on the screen. That's what I'm envisioning. But it's crazy, because in some ways we're seeing some of that, right? There's just real fear of how far AI can go, what it will do, how it will impact our lives. And I think the bottom line is we can't control the future. Everything is on the table for change and disruption: our professions, modelling, and any work we do that requires analysis and thought can be automated by a computer. At the same time, it's still an augmentation tool today. It's still something we need to learn. So just dive in and make yourself more valuable. Enjoy it. Enjoy the journey. We're all going through the storm together, and we'll figure it out. I don't think the world's going to come to an end and we're all going to be sitting at home poor, but we'll see what the future holds. If we are, we can all hang out together in Giles' RV.

Co-Host 2: Ian Schnoor  (47:03):

In the RV. Maybe we'll get to the moon. That'd be fun, and then we just wouldn't have to worry about any of this. But yep.

Host: Paul Barnhurst (47:11):

Alrighty. Well thank you again everybody for joining us for another episode and we'll be back to do some more exciting testing and show how AI is just not quite there yet.

