AI in the Finance Team: How CFOs Can Lead Bottom-Up Adoption with Excel Agents and Shadow Tools
In this special episode of Future Finance, your hosts, Glenn Hopper and Paul Barnhurst, explore how artificial intelligence is making waves in the finance world. They explore two distinct approaches to AI adoption: top-down initiatives, driven by company leadership, and bottom-up strategies, where employees are introducing AI tools on their own. Glenn and Paul also discuss the latest AI tools in Microsoft Excel, including AI agents, and how they’re transforming financial modeling and decision-making. Tune in for a mix of practical advice, real-world experiences, and a bit of humor along the way.
In this episode, you will discover:
How top-down and bottom-up AI adoption strategies differ in business.
Why employee-driven AI innovation is on the rise and how to manage it.
The role of AI-powered tools like Excel Labs in transforming financial work.
How AI agents can help upskill finance professionals and streamline tasks.
Glenn and Paul explore the evolving role of AI in finance, from top-down strategies to employee-driven innovation. They highlight the potential of AI tools like Excel Labs and AI agents to transform financial processes. Tune in for insights on how embracing AI can help professionals stay ahead in a rapidly changing field.
Join hosts Glenn and Paul as they unravel the complexities of AI in finance:
Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii
Follow Paul:
LinkedIn - https://www.linkedin.com/in/thefpandaguy
Follow QFlow.AI:
Website - https://bit.ly/4i1Ekjg
Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.
Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.
In Today’s Episode:
[05:17] - Glenn’s New Book for Wiley
[08:21] - The AI Budget Mandate
[12:11] - Managing Bottom-Up AI Adoption
[15:39] - The Corel Office Question
[26:00] - Why Use Hard-Coded XLOOKUP?
[29:56] - Building and Customizing an AI Agent
[36:36] - Career Transitions and AI in FP&A
[39:50] - Final Thoughts and Wrapping Up
Full Show Transcript:
[00:00:57] Host 1: Paul Barnhurst: Welcome to another episode of Future Finance. I'm your co-host, the FP&A Guy, the bearded wonder. Or, as those closest to me call me, Paul. I've been called all three, and I've been called a lot worse. And I have my co-host with me, Glenn. How are you doing today, Glenn?
[00:01:44] Host 2: Glenn Hopper : Good. Except I don't have a cool nickname.
[00:01:46] Host 1: Paul Barnhurst: You'll laugh at this. So, unfortunately, I had to go to a funeral for a family friend. That's not the funny part, but this was a friend; I lived at their house when I was young. It was the father who passed away, and I was talking with his mom. As a kid, my nickname in that household was Barney, you know, for the last name, Barnhurst. And one day the mom calls the house and asks to speak to Barney, and my dad responds with, nobody by that name lives here. And she explained who she wanted to talk to, and he kind of got mad at her for using a nickname. My dad doesn't like nicknames. So I'm at the funeral and I go up to her, and she hasn't seen me in years. One, I have a beard now, and two, I don't look like a little kid anymore, right? It's been 15 years since I last saw her. She's like, now, who are you? And I give my name. She's still kind of looking at me, and she's like, were you the little kid? I'm like, oh yeah. She's like, oh, Barney! And she immediately knew who I was, and she told that story both times I saw her. It was pretty cute. I saw her, you know, the night before and the day of, as I went to a viewing. So anyway, there's the whole nickname story. I kind of went off on a tangent, but hopefully our audience gets a laugh out of that one. All right, back to the show. Go ahead, you were going to say something, Glenn.
[00:02:55] Host 2: Glenn Hopper: I think, uh, it's funny, we just did one of these, but with as fast as technology moves, it's good to do check-ins. And there's enough content out there that we could probably do one of these every week. But I'm thinking about a couple of things today, if this works for the conversation. So we've got to talk about agents, right? I mean, everybody talks about agents all the time, so we'll have agents on the list. But then the other thing, and maybe I've alluded to this a lot, but I think it'd be worth doing a little deeper dive: I want to talk about the two ways that companies are trying to deploy generative AI. One is top down, the big, you know, capital-intensive projects where you're building custom workflows and all that. And the other, where we're seeing a lot more success right now, is bottom up, where employees are finding tools, whether they're sanctioned by the company or not, and they're using them to get more productive. And I've not had a chance to mess with this, but if you've got access, I'd love to look at maybe the agent in Excel and see how that does.
[00:03:53] Host 1: Paul Barnhurst: Yeah, definitely. We'll take a look there in a minute; I have something open that we'll talk about. All right. So, top down, bottom up. The thing I want to do, which I think will be fun, and we'll share this since this is a video episode as well as audio, so we'll be as descriptive as we can: I used a company called Delphi AI and I created my own agent. It's still beta. I haven't cleaned up how I want it to act as much as I'd like. You can write up to, like, 3,000 words of instructions, but I've given it all my data, nearly 4 million words of data at this point. Yeah, right, that's a pretty hefty amount. All my podcasts, both off my website and through my YouTube channel, so it has them twice, to hopefully improve the quality, since it's getting them two different ways, plus a bunch of other stuff. And so I thought it'd be fun to just go see what kind of questions people are asking, what the responses are like, and kind of talk about what we're seeing, what sticks out to us. Because I've had something like 300 conversations now over the last month and a half since I put it into beta mode. So we'll do a little bit of that, maybe a little bit of Excel Labs, and at least look at the agent and talk a little about that. But before we do any of that, I think it would be really helpful, right? I know you've written a book. I think we've talked about it before. I'm actually finally reading it. I got my copy signed and everything. I have your other book back here, signed as well, so now I have two signed books. I can't wait for my third. So you've got AI Mastery for Finance Professionals, and you're writing a new book for Wiley, if I remember right. So what's that book about?
[00:05:23] Host 2: Glenn Hopper: It's funny, like, why write three books on AI and finance? But it's moving so quickly, and truthfully, my last book didn't really even go into generative AI that much, even though it came out after generative AI had already arrived. But, you know, we talk about it all the time: I think it's very important that people understand the underlying technology. And so the last book was very much a textbook: hey, these are the types of models, this is where you can apply them, this is where generative AI is great, this is where it has problems, these are different machine learning algorithms. But the next one reflects what I'm seeing on the consulting side right now. It's called The AI Ready CFO, and it is meant for CFOs whose domain expertise is in finance. Now we're asked more and more to be more technical, just like everybody is, but trying to break through and understand how to manage technology projects is still a bit of a stretch. So the whole idea with the next book is helping CFOs, putting them in a position where they can manage, run, monitor, track, find ROI, uh, handle the change management, the technology piece, and all that for those big top-down projects.
[00:06:39] Host 1: Paul Barnhurst: That will be exciting. So let's talk a little bit. You know, obviously there's bottom-up, which we're seeing a lot of, and there's top-down. So maybe take a minute and explain the difference between the two, and then what's the typical change management process for top-down? And I think we'll talk a little bottom-up and show a few things here.
[00:06:58] Host 2: Glenn Hopper: Bottom up is consumers who have jobs finding tools on their own, using them at home, and saying, wow, this really works, and then saying, I bet it would work at my job. And this could be Gemini, ChatGPT, plug-ins, different websites and all that, where people are finding ways to be better at any sort of digital task that they do. And it's, you know, it's called shadow AI. Or, as Ethan Mollick calls them, the secret cyborgs: people who are using these tools and not telling anybody about it, and not really understanding what safety risks there might be. But they're finding such incredible productivity that it's motivating them to keep doing it, and companies are, you know, sort of oblivious to it for the most part. The book talks about what I am helping with, and obviously with training and setting policy and establishing guardrails and all that, we can help with the management and distribution of that sort of bottom-up implementation of generative AI. But the ones that everybody is struggling with right now are, okay, we've gotten a directive from the board, we've gotten a budget allocated, we're being asked by investors, management, whoever, to do some AI, without any other clear instructions around that. So, "do some AI." It's just too vague.
[00:08:21] Host 1: Paul Barnhurst: Here's $1 million next year in your budget, we expect to see, uh, the X productivity or Y or whatever. Go figure it out. Everybody else is doing it. There's plenty of opportunity. Here you go. Is that kind of the mandate?
[00:08:37] Host 2: Glenn Hopper: And so when I come into a company, you know, they'll reach out, and our first conversation might just be, uh, we've been told to do AI, what can we do? And then the whole process really is: okay, first off, let's get a base level of training and understanding here, for senior management, whoever is making the decisions, and the frontline employees who are going to use it. Let's, um, get some training on the tools, how they work, what you need to be aware of with data privacy, what you need to be aware of with the probabilistic versus deterministic nature. And let's have a usage policy. Let's have an approved list of tools. Let's have a way that we can vet other tools, bring all those in, and manage them. That's the bottom-up piece. But on the top-down side, it is: okay, where can we make systematic or systemic change? What processes do we need to look at? What's our data situation? And it's funny, I think this is the reason that you see such terrible stats on, you know, projects that don't work. It's not a failure of technology; it's a failure of scoping, or of understanding where technology can be applied. So a lot of times, and it's not sexy and fun, it's just going through and looking at, okay, which department are we looking at here? Let's look at every digital process we do.
[00:09:57] Host 2: Glenn Hopper: Let's chase it A to Z, every step that's in it. Find out where those bottlenecks and speed bumps are, where we're swivel-chairing data. Let's just look at your processes. And then, you know, you do that and, uh, you find what processes are efficient and what aren't. And then you look at, and maybe even prior to that, I should have probably addressed this first: prior to that, you look at what the company's strategy is, what our objectives are for next year, for the next five years. You know, grow revenue, reduce costs, roll out new products, whatever it is. Just like if you were hiring employees or hiring additional managers, you want to hire people who meet that strategy; so if you're picking AI projects, pick ones that meet that strategy. You know, what are some things that are going to help us achieve these goals? Think about that, then look at the processes. And then it's as simple as, we have multiple different matrices that we use, but it's: okay, what is the complexity of this? What is the risk of this? What is the scalability of this, or how often is it done?
[00:11:02] Host 2: Glenn Hopper: What's the frequency that we do it? And then it just becomes like a scoring matrix. Then, I'm an agile guy, so it's like, find the easiest projects, the quick-win kind of projects, and knock those out first. Because once you get some of those successes, it's easier to get the rest of the company to buy in and to buckle down. They've understood it, they see the results from these pilot projects, then they have the patience, and you can sort of manage the change around the bigger projects. And those bigger projects are the ones that are going to have the really big ROI. But it's just so much right now. So I think there's a lot of business in the consulting space around this for the next several years, as the technology evolves and people build their understanding. But because it's so nascent, there's also a lot of bad consulting and, you know, bad guidance going on around it right now. So it's a pretty interesting time. And we talked in the last episode we did together about being fully at the peak of the hype cycle; now I'm starting to see people jump off the edge into the trough of disillusionment as well.
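For readers following along, the scoring-matrix approach Glenn describes can be sketched in a few lines of Python. This is an illustrative sketch only: the project names, criteria values, and weights below are invented for the example, not Glenn's actual matrix.

```python
# Hypothetical AI projects scored on the criteria Glenn names:
# complexity, risk, scalability, and frequency (all 1-5 scales here).
PROJECTS = {
    "Invoice data extraction":  {"complexity": 2, "risk": 1, "scalability": 4, "frequency": 5},
    "Board deck drafting":      {"complexity": 1, "risk": 2, "scalability": 2, "frequency": 2},
    "Custom forecasting agent": {"complexity": 5, "risk": 4, "scalability": 5, "frequency": 3},
}

# Assumed weights: scalability and frequency add value;
# complexity and risk count against a project.
WEIGHTS = {"complexity": -1.0, "risk": -1.5, "scalability": 1.0, "frequency": 1.0}

def score(attrs):
    """Weighted sum across the four criteria."""
    return sum(WEIGHTS[k] * v for k, v in attrs.items())

# Rank so the "quick wins" (high value, low complexity/risk) surface first.
ranked = sorted(PROJECTS, key=lambda p: score(PROJECTS[p]), reverse=True)
for name in ranked:
    print(f"{name}: {score(PROJECTS[name]):.1f}")
```

With these made-up numbers, the low-complexity, high-frequency project ranks first, which is exactly the agile "knock out the easy wins" ordering Glenn recommends.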
[00:12:05] Host 1: Paul Barnhurst: Yeah. So we've talked a little bit about the change management top down. You mentioned bottom up; I'm going to ask one question there, and then I really want to go into some demos and tools. Most people like to get to the demos; maybe we'll even switch the order, we'll see. But, uh, the question I had is: how do you manage the bottom-up people? What should companies be doing? Like, how tight should they be on their security? How concerned should they be? You know, what are you advising companies on the bottom-up side?
[00:12:34] Host 2: Glenn Hopper: First piece of advice is shine that spotlight really bright. Say, hey, we're going to use AI. Hey, we're going to evaluate different tools. There's a lot going on in the space, but these are the things we need to be concerned about. I don't want to throw a name out there, but there was an Excel tool that was web based that I kind of liked, but every time I went back to it, they had default-switched off the privacy settings, so that it was using data to train the model. And I thought, I can't remember to go change this every time I come here, so I canceled my subscription. But whether nefarious or a bug or whatever, those are the kinds of things that your employees need to know. Hey, did you know, maybe Claude, uh, doesn't automatically default to training the model on your data, but OpenAI does? But you can turn it off in two seconds; you just have to know it. So from the bottom-up perspective, I think you encourage experimentation, uh, with guardrails. Talk about, okay, we're going to look at different products. You know, raise your hand, submit a form, whatever it is. Say, we're going to evaluate. We'll evaluate on dummy data, get an understanding of it, get sort of the feedback around that, and then it goes through a tech committee or the AI committee or the generative AI committee, or whatever it's called.
[00:13:49] Host 2: Glenn Hopper: And you check to see: is it, uh, you know, SOC 2 Type II compliant? What is the usage policy, and what are they doing with my data? Where is the data going? Is it a Chinese-owned company? You know, all the stuff that you need to look at. And then it's vetted, and then it can get out there. If you empower employees and give them a path to test this, that's great. But if you try to lock it down too much, people are going to find ways to be efficient anyway, and you're putting yourself at risk. And I know you can be draconian and say, well, my way or the highway, but that's not the right approach to, you know, the crazy new technologies and advancements that are out there. So it's really about just having a place where it's safe to experiment. Because ultimately, individuals increasing their productivity increases company productivity. And then, how do you use their time when they're not doing these monotonous tasks? Maybe it's, you know, reskilling, upskilling, repurposing, moving around, or maybe there are some jobs that aren't needed. But I don't think you even have to lead with what jobs we are going to eliminate. It's: how can we make the jobs for the people who are here better? And how can we make our team members better at the jobs they do?
[00:15:01] Host 1: Paul Barnhurst: Got it. So, you know, some great advice there. I love the whole idea of experimenting. Let's move into, as we mentioned at the beginning, the two things we want to show: first Excel, then the agent I built, and kind of get some of your thoughts. Let me share my screen here. And this is live; we didn't plan this ahead, so we'll see how it goes. But yesterday, I believe it was yesterday, was Excel's 40th anniversary. Can you believe that, Glenn?
[00:15:32] Host 2: Glenn Hopper : Maybe because I sort of remember the predecessors before Excel.
[00:15:37] Host 1: Paul Barnhurst: At Netscape had its own.
[00:15:39] Host 2: Glenn Hopper : What was the other company that a Carol. What was Carol?
[00:15:42] Host 1: Paul Barnhurst: Carol was based here in Utah. It was out of Provo. They had a Carol office. They started with Carol, word. And honestly, Carol word was superior to Microsoft's product. It just couldn't compete with Microsoft. I mean, Microsoft had the bundling effect, which in the early days it wasn't the superior product in a lot of cases. And I think there are some parallels to that. And the AI space, you know, VHS versus beta, the list goes on and on. Often the superior product does not win. I mean, the iPod was not the superior technical product. Now, from a form and function, it was fabulous and esthetic, but it was not the best technical product.
[00:16:25] Host 2: Glenn Hopper : Was that the Microsoft one? Is that the Zune?
[00:16:27] Host 1: Paul Barnhurst: Yeah, Zune was the Microsoft one. My brother bought a Zune. Oh, Microsoft is 40 years old. They just announced their new agent. What the announcement said is, hey, we have an agent. It's online only in Excel. Does not work in desktop Excel and you only for certain accounts. So I downloaded it yesterday. You're going to see on my screen and let me enlarge this. I'm going to enlarge this a fair amount. And why did I enlarge that. It got rid of my toolbar. That's what I wanted to show bigger. So why did my toolbar go away.
[00:17:03] Host 2: Glenn Hopper : Excel online you got.
[00:17:06] Host 1: Paul Barnhurst: All right toolbar. Oh, interesting. As soon as I go to a certain size, it goes away and doesn't. There? I came back so I can enlarge, only I have to go back to zero. I can enlarge it by 10%. The things we learn with Excel online. So I finally did build what I asked it to build. So we'll look at it here in a minute. But what I was going to say is through Excel Labs. So you can see on the right hand screen I have Excel Labs open. I'm going to reopen it. This will stay on the screen. So we'll look at it in a minute. But if you let me close it so I get a completely brand new one. If you open Excel Labs you now have. So they have this advanced formula environment from before labs. Dot generative AI function. Use a custom function to send prompts to have the Python editor. This is actually pretty cool. The advanced formula editor environment. It makes it really easy to write long formulas and see them because right, we all know the toolbar. You can't write very long formulas. You can see it's taking a formula from within my spreadsheet and showing all the details. And so you can break them out much like a modern editor and get intelligence and all those things.
[00:18:30] Host 1: Paul Barnhurst: So Excel Labs is pretty good. It's available in both Desktop and web. So now what I need to do is I'm going to go back. Why can't I go to the feature gallery? I don't want this. I want the to bring back the agent. More options. Okay? It's only telling me I can resize. That is not helpful. So you're seeing live here. These are reasons these are kind of beta and not built in. Let's come over here. So here if I go to the feature gallery you have this new agent mode. Agent mode helps you build and edit workbooks automatically. It's part of Microsoft's Frontier program, offering early access to AI innovations. Enabling agent mode allows direct workbook changes used with care, especially on shared or sensitive files. It requires a supported Microsoft 365 license. Yesterday I was not supported. Today I supposedly am. Go figure. So that's the new agent. But what I was going to also say is there's a ton of other agents you can get in Excel. Everything I have here on my toolbar, almost all of these are agents. The finance preview is AI, not necessarily an agent, but hasn't done much development with it. You can do some reconciliation.
[00:19:48] Host 1: Paul Barnhurst: Everything they got. Copilot Trace Lite is a kind of an AI superhuman agent type tool. Melder Ray trace Lite shows up twice. Not sure why Melder shows up. It shows up really weird online. Elcar is Fontex is to help with auditing and other tasks. Shortcut. And so we're seeing a ton of them. So what I did we'll get your thoughts. I asked two questions. I asked it to build a waterfall graph. I waited like ten minutes. It never built it. Not sure why. Kind of bombed out so that one didn't work again. Shows the nature of beta. Probably how many people are trying to use it, right? All those type of things. But I asked. It's just too sad. I said, hey, build me a basic DCF model. Here's what I got. So it gave me all the assumptions in one area. You can see okay, what's my current revenue revenue growth EBITDA margin tax rate. And then here it built the projections and all it did is gave me revenue, EBITDA margin, Ebit tax rate, Nopat, DNA CapEx change in working capital discount. I mean very. Now I gave it almost no instructions. You know, definitely not the way any human would build it.
[00:21:04] Host 2: Glenn Hopper : I don't know. I mean, if you were doing, you know, analyzing a portfolio or something and you just needed some key info, you. I mean, getting a quick DCF like that, I could see value in that because there's so many things that you just use that are like back of the napkin tools or even like if I was budgeting and trying to do a, you know, a loan, uh, you know, overtime, I'd do the Am table or whatever. And it's like, ah, where's my stupid Am table template? So this could, you know, I could see it saving time with stuff like that.
[00:21:31] Host 1: Paul Barnhurst: Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization. Under the hood, it automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create air-tight alignment, improve decision latency, and ensure accountability across the team.
[00:22:39] Host 1: Paul Barnhurst: Yeah, I mean, I guess if you're doing quick ratios, but what I'm saying is this isn't typically what people build, right? This isn't how you think about it. It's impressive because I gave it almost no instructions, right? We could say, okay, the EBITDA margin is really simple. That's right; you can see it used the right formula. You can see here it did an XLOOKUP to that. Then it used, again, the right formula: it did revenue times an XLOOKUP. So the way it's written, as far as the formulas go, is, you know, very well structured and easily makes sense.
[00:23:10] Host 1: Paul Barnhurst: You can see the tables; they've named them, so that's good. You know, it did the right math for the present value. But what I mean is, like, we have no idea what expenses are. Typically with a DCF and projections, I want to have a general idea. You know, okay, it did five years, but you know your revenue; it made an assumption about what year-zero revenue is, given I gave it no instructions. This is pretty good. The formulas are really good. The logic of doing assumptions, then projections, breaking it out very clearly. I would put it on separate sheets, but the fact that it put some structure to it, good for that. And now we have the valuation summary, so we can see the present value of each year's free cash flow and the total of those. You can see, again, you know, a simple formula. Okay, what's the terminal value? You can see they used a pretty complex formula to calculate the terminal value. Interesting. It did an INDEX times something, divided by an XLOOKUP minus another XLOOKUP, a pretty long formula. I would have to think about what it's even done to validate it. It would take me a minute.
[00:24:29] Host 2: Glenn Hopper : It's so funny when it does that.
[00:24:31] Host 1: Paul Barnhurst: Use the 2.5% terminal growth. So it's saying, hey, take the free cash. But then it used a rose here. So you're going to say some thoughts on that formula.
[00:24:41] Host 2: Glenn Hopper : That's the funny thing though. It's like we don't know you could take programming whatever and understand the logic of writing Python or even of writing complex Excel formulas. But I wonder if you or I were tasked with training a model on all these different types of spreadsheets that are common, you know, three statement model, DCF, amp table, whatever it is, we would just train it on a bunch of models. And I bet in none of those models would you find a formula close to this? We would do the way that we've the simplest path that makes sense in sort of human logic. So my only point on this was I wonder why in robot logic it made sense to do the formula this way and how it came to it, because it seems convoluted to us. But isn't it going to choose the path of least resistance? How did it come up with this? The choices it made. And it's kind of like when you ask AI they've been doing this for years before the transformer model and all that, but you would ask AI to come up with novel solutions to engineering problems. So it comes up with things that don't really match the way we think. And that's sort of what I'm feeling here, is this is secret robot code that obviously we can decipher it. We have the Rosetta Stone. We know what the formula is doing, but why did it pick this instead of something easier?
[00:25:59] Host 1: Paul Barnhurst: And the other things it did? Why did it say xlookup and right? Whack, right? We're always taught not to code hard, so it's breaking the fundamentals of how we model. And you would think if it was given it's read plenty of blogs and instructions in its training, no question. Not just being fed Excel files, but I'm sure the model has read all kinds of things on the internet. Nowhere does it say, hey, you should hard code your lookup value. So why did it choose to do that?
[00:26:28] Host 2: Glenn Hopper : I would love to know. You know, it's funny, like if I'm looking at just token generation and basically what an LLM does that makes cognitive sense to us of how it comes up with the next word. But how's it coming up with the next form or even how it writes code because it's mimicking other code. This isn't mimicking, you know, to you and I both. It's not mimicking best practices or the codes that it would have seen most often. So what is the logic that it's going through that's generating this? That's what I'm really curious about, that I don't know the answer to that.
[00:27:03] Host 1: Paul Barnhurst: Yeah. It is interesting. You know, as I break it apart, you can see, okay, it didn't index. I put the table, but instead of a match here it did a rose formula times something. And you know so it's really interesting how it's doing it all. It's just not very atypical. It's advanced. It works. I screwed something up there. There we go. We got numbers back. You know, everything else is simple formulas. Simple xlookup. Fine. It did net debt. Why did it not reference net debt here? I know it says less net debt, but you could have broke up the columns. You could have taught it. Hey, just take the word that you need so you know those type of things. Shares outstanding. Again, why is it not just referencing the word. So then it has its graph down here where I guess it's showing free cash flow and I'm not sure what else. Oh, it's put the years in. This is a series again. And this graph is hideous. It's taking the default. But that's an ugly graph. I'm sorry I would never present that to a manager. So I think what it highlights, and this is what I'm seeing along the way is amazing. If you're patient you work with these tools. You can start just working with the tools in many cases, I think. But there's going to be a lot of quirks, a lot of learning. You gotta validate everything. So what we've been saying all along, but it's moving so fast on the Excel front right now. Any last thoughts on this? Before we go to my AI agent, I know we, you know, want to kind of talk a little bit about that before we wrap up. I think you'll find it interesting.
[00:28:45] Host 2: Glenn Hopper : So just real quick, I guess my last thoughts were, I mean, you and I have talked about it a lot. We knew Microsoft was going to get there, and I'm not saying they're all the way there with this. However, there have been this little gap as we waited for Microsoft to kind of get there. There have been all these other startups that sprang up, and now it's kind of like if open AI pivots, if Microsoft pivots, they can quash all these other add ins out there. If it's the native functionality is there. So it's kind of this weird arms race. I'd like to do a bake off and see side by side, do the exact same prompt and some of these different Microsoft add ins.
[00:29:22] Host 1: Paul Barnhurst: We're testing each one as an individual episode for future, not for future finance for financial modelers corner with Inchinor and Giles Mill. So you'll be seeing a lot of Bake Off stuff we're adding. I just told them we need to add agent. I literally messaged them as we were looking at this saying, we have to include this now. Things have changed and that's what's so hard of trying to do a series and educate people is the stuff moves so fast and so, but it will be a fun series. I think it will help a lot of people. Excited for agent. I tried running one. I tried to have a build a waterfall graph. I kept bombing out, never got there. The next thing I want to show is I have an AI agent I built, and so what this tool does is allows me to build a profile. I can set up an initial message, say what suggested questions are, and even kind of, hey, give it my purpose. Write custom instructions. Very similar to if you're building an agent in ChatGPT, What should it message when it can't give an answer? And this is interesting here: how long will the response to users will be intelligent, concise, explanatory or are custom creativity? How strictly should your mind adhere to your uploaded content? Now, should I only do my content? Should I let it infer from my content? Or should I let it be creative and do basically whatever it wants? Then you tell it your speaking style.
[00:30:43] Host 1: Paul Barnhurst: You say, hey, should it show citations? Does it use recency bias? I can do voice, they can do chat. What's interesting is it gives me every single conversation, and this is in beta, so I can see what questions people ask. Here's one that was kind of interesting: what are the top technical skills someone needs to be successful at FP&A? Financial statement fluency, okay, I can agree with that. Excel mastery, financial modeling, data skills, BI and visualization, forecasting and budgeting mechanics, system savvy. Definitely not the list I would give if somebody asked me, but it covers most of the things, and then it gives citations. So that's how I get to see the questions. And then they went on and rated themselves: Power BI, basic; forecasting, good; system savvy, good; financial fluency, excellent; Power Pivot, and I don't know why they answered it this way. And then it said, if you want, tell me your current level and tools and I'll map a 60-day upskilling plan.
[00:31:46] Host 2: Glenn Hopper : Ah yeah. Yeah.
[00:31:48] Host 1: Paul Barnhurst: And so they went through and rated themselves, Excel modeling average, and it mapped a 60-day upskill plan. So let's see how it did. "Love it. Here's the 60-day plan." Remember, this is supposed to be in my style; I can't picture myself talking that way, and I sometimes wonder where it gets its information. But it's really fascinating to watch: a 60-day plan focused on Excel modeling and modern Excel with a Power BI tie-in. So I'm reading this; if you're listening on audio, you can always go watch it on video. "You're already strong on forecasting, systems, and financial fluency, so we'll build speed and scale. Weeks 1 to 2: modeling foundation and speed. Rebuild a core three-statement model from scratch. Daily drills: keyboard shortcuts, XLOOKUP versus INDEX/MATCH, SUMIFS." And it gives a target, 30 minutes a day, and a deliverable, a three-statement model with a three-case scenario and a working checks tab. Now, as I read that, what person could take that and actually do it without more? Right? How helpful is that?
[00:32:49] Host 2: Glenn Hopper: I mean, that's the problem with trying to build this. You've got all this data there; there's so much to sift through. And you and I have talked about this before: when you talked about all the data, I said, this is not something that your typical finance person is going to do. This is fairly complex database architecture, not just RAG, retrieval-augmented generation, but graph RAG, which lets you sort, basically, think of different tags and codes and ways of sorting all this information. But graph RAG is very complicated. The early promise of RAG was, oh, it will just read it and pick the best answer, but the data is unstructured and all that. The way to get the best results is, on one level, to tie the RAG together through a network, so you can route things over the network to find the connectors and the pieces that go together. The other is to have a schema, a sort of breakdown, to put some structure on the data, and then an orchestrator that directs the RAG which direction to go based on the question. And then you've got your system prompts, and then the prompt that comes from the user. It's a lot of variables to try to cover. So, I mean, these results make sense to me.
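Glenn's description, chunks tied together through a network plus an orchestrator routing the query, can be sketched minimally. Everything below is a toy stand-in under stated assumptions: keyword overlap substitutes for embedding similarity, a hand-written adjacency dict substitutes for a real knowledge graph, and the tiny corpus is invented for illustration.

```python
import re

# Toy corpus: in real graph RAG these would be document chunks with embeddings.
chunks = {
    "skills": "Top FP&A skills: Excel mastery, modeling, BI, forecasting.",
    "excel":  "Excel drills: XLOOKUP vs INDEX/MATCH, SUMIFS, keyboard shortcuts.",
    "career": "Moving from accounting into FP&A typically takes 6-12 months.",
}
# Edges connect conceptually related chunks (the "network" Glenn describes).
edges = {
    "skills": ["excel", "career"],
    "excel":  ["skills"],
    "career": ["skills"],
}

def tokens(text: str) -> set[str]:
    """Lowercase word set; a crude stand-in for semantic embedding."""
    return set(re.findall(r"[a-z&]+", text.lower()))

def retrieve(query: str) -> list[str]:
    # Score each chunk by word overlap with the query (plain RAG step).
    best = max(chunks, key=lambda k: len(tokens(query) & tokens(chunks[k])))
    # Orchestrator step: expand one hop along the graph to pull in
    # related chunks a plain keyword match would have missed.
    return [chunks[best]] + [chunks[n] for n in edges[best]]

context = retrieve("what skills do I need to be successful at FP&A")
```

The one-hop expansion is the point: the query never mentions Excel drills, but the graph edge from "skills" to "excel" routes that chunk into the context anyway.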
[00:34:03] Host 1: Paul Barnhurst: Yeah. And you can see at the bottom it says, if you want, I can turn it into a simple checklist. And even though this is someone else's conversation, I could say yes, and I think it would do it. So, you know, let's look at some others, just so we can see the questions people are asking. You can see it often tries to give them a plan, but it struggles with where to pull from and how to put it together coherently, which gets back to your RAG point: it just has so much data, it's trying to assume what should come next. And it's also inferring a lot, because it's not like I strictly say what they should do. Like this one: "I'm an accounting professional looking to offer forecasting to clients. What is the best way to get the training needed to transition? I'd like to spend less than $500." Good luck.
[00:34:48] Host 2: Glenn Hopper : Yeah.
[00:34:50] Host 1: Paul Barnhurst: So, what to learn? And it gives some budget-friendly picks: take a CFI self-study plan, or perhaps a modeling program. So it offers some of my website's plans and how to practice fast. And then they said, hey, our clients are typically contractors and home services, and so it talks about what they should focus on. But I like this: leads, win rate, average ticket, jobs per tech per day, you know, billable, okay, parts pass-through margin. It makes sense; it's helpful. But it's odd, if that makes sense to me, versus what you would typically see, because that's just the way the AI takes this data.
[00:35:34] Host 2: Glenn Hopper: Funny, every time I get on a meeting, or a podcast in our case, I'm immediately trying to, you know, see how the sausage gets made. But I think there's a fine-tuning layer you could do on this: if you put in a hundred, and the more you had the better, questions with the right kind of answer, and then you fine-tune on that, it would help structure the way it comes back. And I think even without doing graph RAG or a more complicated RAG setup, with some fine-tuning of the model itself, you could direct it. Correct?
[00:36:08] Host 1: Paul Barnhurst: Like, just so you can see, and I think people will find this interesting, we'll go through a few more questions. But I can improve the reply for every single question; I can go through and change them to train it, so over time it's going to get better and better, right? I just haven't done that yet, because it's time consuming. I can also tell it, hey, if you get this question, answer it this way. And I'm sure I could load some more content. But let's look at a few others. A big question a lot of people are asking is, how do I make the career transition? What certificate is right for me? What training programs, and what is the difference between them? When someone asked about add-in versus spreadsheet-native FP&A software, it actually gave a really good answer there; I know the guy who asked it. They asked for comparisons of programs. So you can see a lot of people are asking about training, but also: how do you see the integration of AI impacting financial planning? How do you turn financial data into compelling business stories that drive action? This person asked, how do I make the career transition? I'm a senior accountant at a healthcare company.
[00:37:15] Host 1: Paul Barnhurst: What are the top technical skills? "I'm probably strongest at Excel mastery." They tell it where they're best, and again, it mapped out some things to focus on. "Yes, please." And so, you know, part of it is just interesting to see what people are asking. I love this one: what questions should I not ask a candidate in an interview? "Skip questions about protected classes, family status, health, age proxies, religion, financial status, prior pay history where restricted." So basically it said, just don't do illegal stuff, and the person's like, all right, thanks, that wasn't much help. What is the number one criterion to look for in hiring an FP&A analyst? Let's see what you think; I actually like this answer: "Curiosity paired with communication. Are they relentlessly asking why? Do they understand the mechanics and translate the numbers? Technical chops are table stakes; influence comes from curious thinking and crisp storytelling." I know we have to wrap up here in a minute. Pretty decent answer, I mean, not exactly how I'd answer it. Any takeaways, anything that surprised you in the questions? We're getting another one, how to make the transition, and how to build a revenue budget for a certain business, and they wanted it to build it. That was an interesting one to read.
[00:38:29] Host 2: Glenn Hopper: No, I think the questions make sense. And by having all these questions, you really could build your fine-tuning data set. Even without a more complex RAG architecture, you could make the model answer better by using a fine-tuned GPT, and I mean GPT in the true sense of the word, a generative pre-trained transformer, not what OpenAI calls its personalized GPTs.
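The fine-tuning data set Glenn describes, logged questions paired with reviewed and corrected answers, is typically serialized as chat-formatted JSONL, one training example per line. This is a sketch of that common messages-array convention; the example Q&A pairs and the system prompt are invented for illustration, and the exact schema your provider expects may differ, so check its fine-tuning docs.

```python
import json

# Logged agent conversations: each question paired with the reviewed,
# corrected answer (the "improve the reply" step Paul describes).
reviewed_pairs = [
    ("What are the top technical skills for FP&A?",
     "Financial statement fluency, Excel mastery, modeling, and BI skills."),
    ("How do I transition from accounting to FP&A?",
     "Start by adding forecasting work to your current role."),
]

def to_finetune_jsonl(pairs, path="finetune.jsonl"):
    """Write chat-formatted JSONL: one {"messages": [...]} example per line."""
    with open(path, "w") as f:
        for question, answer in pairs:
            example = {"messages": [
                {"role": "system",
                 "content": "Answer in a practical, conversational style."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]}
            f.write(json.dumps(example) + "\n")
    return path

path = to_finetune_jsonl(reviewed_pairs)
```

The point of the format is that each line is a complete conversation, so the reviewed answer, not the agent's original reply, becomes the target the model learns to imitate.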
[00:38:53] Host 1: Paul Barnhurst: The first thing I'll do is go through and revise all these, and then start loading some data for common questions, things like that. That's the plan: to spend a day on it. I wasn't doing this for our audience, but I think it's great that you're seeing, as you build your own agents, how we think about it, how Glenn thinks about them, because he's the expert here. But also, what are the questions people would ask of an agent? For finance agents, people are really concerned about: how do I upskill myself? How do I get better training? How do I make that transition into FP&A if I'm an accountant? And, you know, there are a lot of great resources out there. This has kind of been a different episode; we've covered a number of different things, but we just wanted to try something different and see if you like it. So if you did like this episode, please email me, the Bearded Wonder. If you did not like the episode, please email Glenn.
[00:39:44] Host 2: Glenn Hopper: Yeah, because I'm really good at dealing with complaints, so please call me.
[00:39:50] Host 1: Paul Barnhurst: We'll go ahead and wrap up, but Glenn is our great complaint person. Any last thoughts before we go, Glenn?
[00:39:55] Host 2: Glenn Hopper: The same as it kind of is every week: just experimenting with stuff. And I love that you're building this agent right now; I think there's a lot of value in it. And for our listeners, we're not a software platform, or maybe this is my AI talking, I don't know. For our listeners, it's the same thing: experiment. Go out there, push the boundaries, see what you can do, find ways to use it. Because with all the questions you're getting there, that is one way you're going to get ahead: if you're better at using these tools, you're going to stand out in the near future.
[00:40:22] Host 1: Paul Barnhurst: Thank you, everybody for joining us. We'd love a rating and review. We'd love to hear from you. If you have a great guest, let us know. We're always looking for great guests. And until next time, Glenn and I are signing off.
Host 1: Paul Barnhurst: Thanks for listening to the Future Finance Show. And thanks to our sponsor, QFlow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice, and may your robot overlords be with you.