How Finance Leaders Can Stop AI Failure and Adopt Augmented Intelligence with John Thomas

In this episode of Future Finance, hosts Paul Barnhurst and Glenn Hopper explore how leaders can cut through the hype around artificial intelligence and focus on real-world impact. The conversation dives into why so many AI initiatives fail, how cognitive biases affect AI adoption, and why finance professionals must learn to ask better questions before deploying models.

John Thomas is the Founder and CEO of the Global Institute of Data Science (GIDS), a consulting and professional development organization focused on helping organizations successfully implement AI and data science initiatives. He serves as a Fractional Chief AI Officer for Fortune 500 companies and teaches AI and machine learning courses at Caltech CTME and UC San Diego Extended Studies.

In this episode, you will discover:

  • Why 85% of AI projects fail and how to avoid it

  • The difference between AI hype and real implementation

  • How augmented intelligence improves human decision-making

  • Why asking the right questions about AI models matters most

  • How AI can help with risk analysis and financial decision-making

This episode highlights that successful AI adoption is not about chasing the latest technology trends but about asking better questions, understanding assumptions, and focusing on real business problems. As AI continues to evolve, finance leaders who combine human judgment with intelligent systems will be best positioned to turn AI from hype into measurable results.

Follow John:
GIDS: https://gidsco.substack.com

LinkedIn: https://www.linkedin.com/in/john-thomas-foxworthy-m-s-data-science-1718073/

Future Alpha Event: https://www.alphaevents.com/events-futurealphaglobal/agenda-page/filter?_gl=1*1j0347f*[…]ovIhoCWOYQAvD_BwE&gbraid=0AAAAAomEzrlLzh-epjUJjbfXNnASlChga

Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii

Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy

Follow QFlow.AI:
Website: https://bit.ly/4i1Ekjg

Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.

Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.

In Today’s Episode:
[00:00] – Trailer
[02:05] – Meet John Thomas
[04:11] – Augmented intelligence explained
[11:21] – Global Institute of Data Science
[15:18] – Why AI projects fail
[21:29] – Understanding AI models
[24:45] – AI in portfolio risk analysis
[30:16] – Best advice for finance leaders
[32:15] – Rapid-fire questions & wrap-up

Full Show Transcript:

Co-Host: Glenn Hopper (01:33):

Welcome to another episode of Future Finance. I am Glenn Hopper, and with me, as always, is my esteemed colleague, the FP&A guy himself, Mr. Paul Barnhurst. Paul, how are you doing?

Host: Paul Barnhurst (01:44):

Doing great. Excited for another episode.

Co-Host: Glenn Hopper (01:48):

Our guest today is John Thomas Foxworthy. John Thomas is Founder and CEO of the Global Institute of Data Science (GIDS), officially sponsored by Carnegie Mellon University's Heinz College, and author of The Augmented Intelligence Revolution: No-Code AI for Business Leaders. He serves as a Fractional Chief AI Officer for Fortune 500 companies and teaches AI/ML courses at Caltech CTME and UC San Diego Extended Studies. With over 20 years of experience spanning quantitative finance at bulge-bracket banks, hedge funds, and enterprise AI strategy consulting, John specializes in closing the gap between AI's promise and real-world financial performance. His proprietary A.I.R. Framework directly addresses the documented 85% AI project failure rate by targeting the cognitive biases that derail implementation. John holds an MS in Data Science from Northwestern University and a BA in Economics with Econometrics from UCLA.

Guest: John Thomas  (02:58):

Thank you, Glenn. I'm glad to be here.

Host: Paul Barnhurst (02:59):

As you can see, John, Glenn and I like to have fun. We keep it light here.

Co-Host: Glenn Hopper (03:04):

That's

Guest: John Thomas  (03:04):

Cool.

Co-Host: Glenn Hopper (03:04):

Yeah, we're having fun. Our audience and guests, who knows, but a lot to cover here. And we did, as I said, we rambled on for 15 minutes about food storage and everything else, hairdos from eighties hair bands and all that before we got on. So now we're getting into the actual content of the show. And as a fellow author, I always love having guests come on who are writing or have written books, and the title of yours is very intriguing, The Augmented Intelligence Revolution: How Leaders Win in the AI Century, and I know it's with the editor now. I want to hear about the upcoming publication on that, but I love augmented intelligence because there's the concern right now about AI replacing us and what that means. And surely there are jobs that can be replaced by AI, but for a lot of us, especially I think in finance, augmented intelligence is to my mind kind of the wave of the future. So without me rambling anymore, tell us about the book and the significance of augmented intelligence versus this whole idea that AI is going to replace us.

Guest: John Thomas  (04:11):

Yeah, thanks. So The Augmented Intelligence Revolution is about how we need to stop thinking about artificial intelligence as something that will replace us and start thinking about AI as something that will augment us. So regardless of where we are in the AI cycle, is it a bubble, not a bubble, will it burst or not, artificial intelligence will be with us for the rest of the century. I deliberately chose the word augmented over artificial because the real goal of AI is collaboration, humans and AI working together for the rest of the century. So this is a matter of better decision-making that can be enhanced or amplified. This is about how humans control artificial intelligence, at the same time acting as a constraint. And that drives the philosophy of everything that's in the book, from how I define the problem all the way to the methodology that's used to solve that problem.

Co-Host: Glenn Hopper (05:05):

And to your mind and to the book, there's what they always say, that the AI we have right now is the worst it's ever going to be. So whether it's the current form of AI or true AGI down the road, does the book take a stand? Does it matter where we are on that curve, as far as how we're using it?

Guest: John Thomas  (05:26):

I think it depends how much energy we have and money we have to create AGI. So there are some real massive constraints to get to AGI or ASI. First of all, there's no consensus on the definition of artificial general intelligence or artificial super intelligence. The experts who've been doing this for twice the number of decades that I've been doing it, they don't agree with each other. For example, Yann LeCun does not agree with the term artificial general intelligence. He calls it something else. He disagrees with Geoffrey Hinton, who won the 2024 Nobel Prize in Physics for his work on artificial intelligence. So we don't have a definition. We say it's coming and we don't know when. So I would call this a distraction, and there are real things that you want to focus on here and now rather than speculating about something that may or may not happen in the future, aside from energy constraints and also financial constraints, because artificial intelligence can be quite expensive. There are a lot of areas that are completely ignored in artificial intelligence, especially with unstructured data, social media files, user log files, graph data science and other areas, which really need to be monetized in quite a big way. So there's that, and that's the way I would look at it. I think it's a matter of the here and now rather than what may or may not happen in the future.

Host: Paul Barnhurst (06:48):

Makes a lot of sense to focus the book on the here and now. If it happens, we may have to adjust things, but for now there are a lot of constraints.

Guest: John Thomas  (06:58):

I would also kick back on people who are kind of doom or boom, let me pick those two groups. So what happens to a lot of people, and I got into artificial intelligence 20 years ago through statistics and econometrics, is there's a lot of bad sampling. There are certain people who already have a bias. They're like, this thing's not going to work, I'm just going to find an example where it doesn't work and I'm going to talk about it. And then there are other people who do the exact opposite, where nothing is bad. So you really want to find a balance. You want to find some nice surveys, surveys of a hundred or a thousand people who have a variety of opinions, and focus on the central tendency of that sampling rather than one or two people who have very strong opinions. That might steer you in the wrong direction with artificial intelligence, Glenn.

Host: Paul Barnhurst (07:47):

It's almost like our guest understands how statistics work.

Co-Host: Glenn Hopper (07:51):

Yeah, honestly, we talk about this all the time, because if you think, and I'm preaching to the choir here, about the foundation of machine learning and statistics before that, you see in the present era all these AI experts who are basically the equivalent of, I'm really good at using Google, so I'm now an internet expert. Paul, you had a question about that too, so I don't want to jump all over it. That's an important distinction about using AI. It's one thing to be a power user of agentic AI, but to really get the fundamentals behind it, that's where the true experts live. Like our guest,

Guest: John Thomas  (08:34):

Lemme kind of help your audience with agentic. So there's a long history of changing labels on the exact same product or service. So machine learning, before I get into agentic, let me explain machine learning. Machine learning comes from statistical learning. They dropped the word statistical because it's not a marketing term. Machine is basically the software programme and learning means estimation. So it's a software programme that estimates, that's what machine learning means. The word learning is not the same thing as cognitive learning, when you go to school and you're learning things from a book, for example. That's not the same thing as the word learning in machine learning; you just need to translate the word learning into the word estimation. So it's machine estimation, that's what machine learning actually means. I think also, with something that is not a standardised product, you've got a lot of noise, with different types of people trying to claim that they're an expert when they're not.

(09:28):

So a big problem within the machine learning communities is we have a lot of overconfident amateurs. If you look at accountants or lawyers and doctors, they don't have that problem because their profession is regulated. You kind of have to filter through all the noise and get to the signal, which can be frustrating and confusing at the same time, but once you find someone to listen to, that's really where we want to go. And that's also why I created this book, to help people with the correct terms. There's actually a section where I talk about the history of the labels. And let me get into agentic. So agentic is to agent the same way artistic is to artist. Agent and agency are nouns. One is countable, one is not.

(10:20):

You have four agents, you have two agents, and then agency is an uncountable number of agents. Same thing with artist and artistry. So agentic, what that means, it's referring to the quality of the nouns that already existed. So agentic is referring to an agent, or agentic is referring to agency. That's what agentic refers to. It's the adjective, the quality of being an agent, just like an artist who's artistic, the quality of being an artist. Originally, the origin story of the word agentic comes entirely, 100%, from reinforcement learning. Now it's a little bit of everything. It's very much overused. I'm sure in 2027 we'll come up with another word that will replace agentic and confuse a lot of people again.

Co-Host: Glenn Hopper (11:10):

Regular listeners of the show know I have my rant that I do about everything being called an agent out there right now when they're not agentic. Yeah,

Guest: John Thomas  (11:20):

Very true.

Host: Paul Barnhurst (11:21):

It might be the word of the year for 2026 at the rate we're going, but I digress. So I would love to ask a little bit about something mentioned in your bio: you're the CEO of the Global Institute of Data Science. Talk a little bit about that, what the mission is, maybe the founding story, I'd love to know a little more. I know it's sponsored by Carnegie Mellon, so tell us a little bit about that.

Guest: John Thomas  (11:47):

Sure. So the Global Institute of Data Science, we provide coaching, consulting, and certification services. We're a career development, professional development and consulting company. The reason why we used Global was that more than 10 years ago I started doing machine learning consulting on my own, and I got a lot of non-US clients along the way. So that's why I created the name Global Institute of Data Science. There used to be a pretty sizable amount of offshoring involved with technology development teams. I predict that's going to change drastically with reshoring because of Claude Code and Cursor and so on. We use the word Institute as well to create our three major channels: we provide services to private corporations, government entities, nonprofit executives, and also universities. So I write curriculum for a variety of universities right now, and agentic AI is really high on the list.

(12:45):

I have to tell 'em, we need to put the words reinforcement learning next to it, otherwise people will get very confused very quickly. So that's the Global Institute of Data Science. Originally, I think in 2005, I started becoming an econometrician with statistical forecasting, and I was basically a statistical learning consultant right after that. And then my title changed to data science. The origin of the word data science actually comes from William S. Cleveland in 2001. It took a while to get accepted into private corporations. Data scientist titles really started around 2008, 2009. That's when my title started to change as well. So not only did the product labels change, but career titles changed too. I've seen AI engineer now added to other areas. So that's the Global Institute of Data Science, with consulting, coaching and certification.

Co-Host: Glenn Hopper (13:38):

And I love

(13:40):

Your progress through that, though. Going back to what I said earlier about the foundations of everything we're doing today, I think about when I wrote my first book on AI and machine learning in finance. That was back during COVID, and nobody cared or thought about it because the barrier to entry was so high: you had to write Python and understand statistical modelling and all that. So for OG folks who've been doing this a while, the whole AI boom means something different. And I'm thinking from your perspective, I mean, we talk all the time about the Gartner hype cycle, and generative AI has now crested the peak of inflated expectations and is coming down the Gartner hype cycle into the trough of disillusionment; agentic AI is at the very peak of the cycle right now. So there's been all this marketing hype, and obviously the current state of AI is worthy of the attention it's getting, but at the same time, there's that MIT study last year that everybody talks about, that 95% of projects failed to reach ROI, and say what you will about the study, that is the experience a lot of people are having, and we've got our own theories on the show about why that is. But I know you focus on closing the gap between expectations and reality. Could you talk through that gap, how you choose the projects, how you take someone who wants you to come in and sprinkle some AI on whatever process they're doing, and then close that gap and give them something that is actually meaningful as a delivery, that does what it's supposed to?

Guest: John Thomas  (15:18):

Yeah, it is a bit difficult even for smart and capable people to delink themselves from the hype. I mean, we're all human beings. We get something on our phone or some type of advertisement, we go to a conference, and it might sway us in the wrong direction, the powers of persuasion that exist. So the first thing we do at the Global Institute of Data Science is feasibility studies. And we also try to slow down the process. A lot of people just react to these vendor demos, or they react to something that they've read or something that influenced them. We focus primarily on that within feasibility to build their strategy, and then work from strategy into governance and then into integration. Within feasibility, we do quite a number of things. We do organisational readiness, we do data maturity, data infrastructure maturity.

(16:10):

We also do some econometric modelling. In the beginning, you've got this many people; okay, you want to downsize, you want to decrease it to this many people because you're using Claude Code, or the other way around: we're growing, our marketing team is growing as well. If we add two people with this many accounts, is that feasible? That's an optimization question. So we do a lot of feasibility studies first, because too many people are really solution jumping. Let the problem pick the process, is my advice. And I don't know, there are some people who are quite fearful of AI, and they tend to do the best versus the ones who are overly optimistic. That type of cognitive bias that exists in the beginning is an assessment we do quite often. It could be a sunk cost fallacy with one of our clients.

(16:57):

They've already spent something, it didn't really work, now they've brought me on and, okay, what's happening? Or the other way around: we have a couple of departments who really like each other and want to do artificial intelligence, but most of the firm hates it. And the question is, how do you get that into integration? And we work with clients on that. So it's about the people, the processes and the technology. That sounds complicated, but with my background in quantitative social science, it's relatively easy. A lot of people, even with a PhD, I don't have a PhD, let's say in physics, they have high cognitive abilities, but they're not trained to work with human beings. They don't get a grade when they get in front of a class and give a speech, like I had to do when I was 20 years old in econometrics. That's a different thing compared to somebody else. So it's really about my background and origin and how it fits the services I provide to the clients.

Host: Paul Barnhurst (17:58):

Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization under the hood. It automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create air-tight alignment, reduce decision latency, and ensure accountability across the team.

Co-Host: Glenn Hopper (19:06):

What you're seeing really mirrors what I'm seeing in the market too, and this is playing out thousands or hundreds of thousands of times per day at companies around the world, where everybody's trying to grasp what's going on with AI. And so it always starts with, okay, let's level set expectations here. What's our comfort level? I'm speaking to generative AI in particular. What's our comfort level with using it? What tools can we use? What data can go in? How are we handling data governance, compliance, everything that goes around that? Then the training, to your point: you have these experts out there, they may be experts in whatever their domain is, but they're not experts in AI. So you have to get everybody on the same page, and we all see the writing on the wall with what AI is going to do, but there's a lot of disagreement, and it's that change management between the fearful set and the overly optimistic set, trying to find something you can deliver in the middle that makes everyone happy.

Guest: John Thomas  (20:03):

Well said. I'd have to also say that there are a lot of data silos, and there are a lot of legacy systems and legacy people, and it's in every industry, not just finance. There's quite a bit of everything. People are kind of set in their ways, but at the same time, there's also the exact opposite. There are some companies with a pretty much monopolistic or oligopolistic positioning, where you can develop machine learning and AI, but it's not going to move the needle for your company. It's not going to increase your share price. You're not going to get more private equity out of this. You're just the elephant in the jungle, and I'm sure you'll have no problem walking through it, so you really want to focus on something else. Outside of financial trading, I think there are a lot of ignored areas within finance, which is using agentic AI like Wealthfront did with the robo-advisors to automate things, because a lot of the things that human beings do when they sit down, like remembering all the conversations they had in the past five years, that's too much for human beings.

(21:10):

In addition, there are also some great anomaly detection processes that have come out in the past couple of years, relatively new, that help with risk analysis for out-of-sample scenarios. So it's complicated, but it has structure, and the more time you spend with it, the more you understand it.

Host: Paul Barnhurst (21:29):

Makes a lot of sense. The more time you spend with it, the better. The things that are difficult take time.

Guest: John Thomas  (21:35):

Yeah, well, there are a lot of wrong labels. So this is what I advise people: instead of focusing on agentic or machine learning, just forget the labels. What does it do? There are basically three frameworks within machine learning: unsupervised learning, supervised learning, and reinforcement learning. Unsupervised learning is descriptive, supervised learning is predictive, and reinforcement learning is recommending. So the question you want to ask is, what does it describe, what does it predict, or what does it recommend, or is it doing all of those things? Because a large language model does all of those things. And then there are extensions of those, natural language processing and deep learning, and those are just additional layers or different data sets for asking exactly the same question. So if you simply ask what does it describe, what does it predict, or what does it recommend, that'll get you way ahead of everyone else who's just listening to marketing material.
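John's three-question test can be sketched in plain Python. The figures, threshold, and payoffs below are entirely hypothetical, chosen only to illustrate which question each framework answers: unsupervised learning describes what happened, supervised learning predicts what will happen, and reinforcement learning recommends an action through trial and error.

```python
import random

# --- Unsupervised learning: DESCRIBES what happened ---
# Hypothetical monthly expense figures; describe their structure by
# splitting around a hypothetical threshold (a one-line "clustering").
expenses = [1020, 980, 1100, 2950, 3010, 2890]
threshold = 2000
clusters = {
    "low": [x for x in expenses if x <= threshold],
    "high": [x for x in expenses if x > threshold],
}

# --- Supervised learning: PREDICTS what will happen ---
# Fit a least-squares line to (month, revenue) pairs, then estimate month 6.
months = [1, 2, 3, 4, 5]
revenue = [10.0, 11.1, 11.9, 13.2, 14.0]
n = len(months)
mx, my = sum(months) / n, sum(revenue) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(months, revenue))
         / sum((x - mx) ** 2 for x in months))
intercept = my - slope * mx
forecast_month_6 = intercept + slope * 6

# --- Reinforcement learning: RECOMMENDS what should happen ---
# Epsilon-greedy trial and error over two actions with hypothetical payoffs.
random.seed(42)
true_mean = {"A": 1.0, "B": 2.0}   # unknown to the agent
totals = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}
for _ in range(200):
    if random.random() < 0.1 or 0 in counts.values():
        action = random.choice(["A", "B"])                         # explore
    else:
        action = max(totals, key=lambda a: totals[a] / counts[a])  # exploit
    totals[action] += true_mean[action] + random.gauss(0, 0.1)
    counts[action] += 1
recommended = max(totals, key=lambda a: totals[a] / counts[a])
```

A real LLM or production system layers far more on top, but the describe/predict/recommend split above is the vocabulary John suggests using when you strip the marketing labels away.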

Host: Paul Barnhurst (22:31):

Yeah, learning usually beats marketing material by a lot. A lot of hype is what I've found.

Guest: John Thomas  (22:38):

Yes. Well, hopefully if we get a correction in the market, I can say that out loud, it'll separate the men and women from the boys and girls. Between 2000 and 2008, there were a lot of competitors in the e-commerce space. It wasn't clear that Amazon would dominate, but by 2009, 2010, it was the clear winner. So that's really an area where I provide additional services at the Global Institute of Data Science: competition analysis. What is your market share in this industry specifically? Do you want to acquire, do you want to divest? What would you like to do, what would be the impact, and what is that equation? So that's an additional service I provide, because it explains how certain companies do really well. A good example of this, probably one of the best growth models with agentic AI and reinforcement learning, is the cosmetics company Sephora. Over a six-year period, the first two years, all they did was data gathering and collecting customer data and conversations. They went from 580 million to 3 billion in three years. That's a lot. That's very quick. It's also, it's

Host: Paul Barnhurst (23:46):

Not a bad growth rate.

Guest: John Thomas  (23:47):

No, and you kind of have to have some domain knowledge. This is a product where the customer has an idea of what they want to buy, but not exactly. It's the same thing with Netflix: tonight I'd like to watch a thriller, that narrows it down a little bit, but can you be more specific? So it's that type of recommendation opportunity, where the customer has an idea but has some ambiguity and uncertainty, and through a trial and error method, that's how Sephora developed their models for a couple of years. Then in the third year they started deploying the machine learning products all over the place. So there are some large companies that refuse to deploy any machine learning or AI at first; all they're doing is collecting data, doing some feature engineering, doing some encoding, doing some A/B testing on their customers or their client base, and then they have a deadline date where they just unleash the kraken, so to speak.

Host: Paul Barnhurst (24:44):

I like it.

Co-Host: Glenn Hopper (24:45):

And with that, when you think about what AI is doing, whether it's predicting, clustering, the different levels where it can make its estimations, its guesses. You mentioned Wealthfront before, but I know you're currently working on an advanced AI model to learn how portfolios behave in different market conditions. I wanted to bring this up because you mentioned the correction. Now, I know black swans are going to happen, you can't really model those, I mean you can stress test around them, but creating those realistic what-if scenarios: talk about that project, how you see it being used, and how's it coming?

Guest: John Thomas  (25:24):

Well, currently we're in the building and validation phase. We're really focusing on our cognitive clarity approach and what really matters. We keep asking, does this actually improve risk measurement, or do we have shiny object syndrome? That goes well with my book in terms of identifying the cognitive biases that exist when certain people are focusing on things that are not feasible. What it does, basically, it's an AI model that learns how portfolios behave in different market conditions, not just normal markets, but also extreme scenarios. It's a non-parametric generative model for portfolio loss distributions that supports value at risk, and it's trained on genuinely out-of-sample stress scenarios. It takes some time; aside from its complexity, the interpretation of the results requires a debate within the company itself. What generative artificial intelligence did to me about 10 years ago was change my understanding of numerical computation in statistics. I used to use linear and nonlinear models with value at risk quite easily, but this is a matter of identifying known unknowns, and what generative deep learning models can do on a numerical basis, not text-based like ChatGPT and not image-based, is provide out-of-sample scenarios that you may not have actually thought of, and show how they can actually impact your portfolio. And that's where the interpretation comes in, because after all, I think this year and last year we've had some unexpected events in the news.

(26:59):

So this generative approach of using non-parametric models has been quite powerful. It helps not just with the portfolio, but also with identifying certain types of trading risks that have evolved, where it can actually contribute generatively to the risk model. Are

Co-Host: Glenn Hopper (27:17):

You testing on sort of portfolios that are out there, or are you building your own portfolios to see if you've got the hedging within them to offset those shock events? What are you using for testing?

Guest: John Thomas  (27:30):

So there's a whole feature engineering process of taking all our different types of asset classes, mostly credit spreads, but we also use foreign exchange rates as well. We use their historical portfolio loss distributions and generate out-of-sample scenarios in a non-parametric way, which means there are fewer distributional assumptions involved, in case something unexpected comes out as an event. There are quite a number of use cases for that this year and last year, given what's happening in the news. It also helps with planning, which is the most important thing: okay, what happens if, I don't know, war breaks out? What if a company that's very big in the financial services industry all of a sudden declares bankruptcy? So there's that. It's basically unknown unknowns. And I know this is kind of like, well, you can't really worry about everything. Well, not exactly. You need to know, for planning purposes for your portfolio loss distributions, if there's something you haven't thought of, whether it can actually impact your portfolio, and what you can do about it. It also gives opportunities to buy certain instruments on the credit side to offset any losses.
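As a rough sketch of the non-parametric idea, here is historical-simulation value at risk in plain Python. This is not John's proprietary model; the loss history and the three "generated" stress scenarios are made-up numbers, purely to show how augmenting the empirical sample with tail events moves a quantile-based VaR that makes no distributional assumptions.

```python
import random

# Hypothetical daily portfolio losses in $k (losses positive, gains negative).
random.seed(7)
historical_losses = [random.gauss(0, 10) for _ in range(1000)]

# Hypothetical stress scenarios a generative model might propose:
# tail events the historical window never contained.
generated_stress = [45.0, 60.0, 80.0]

def historical_var(losses, confidence=0.99):
    """Non-parametric (historical-simulation) VaR: the empirical quantile
    of the loss sample. No normality or other parametric assumption."""
    ordered = sorted(losses)
    idx = int(confidence * len(ordered)) - 1
    return ordered[idx]

var_plain = historical_var(historical_losses)
var_stressed = historical_var(historical_losses + generated_stress)
# Fattening the tail with generated scenarios can only push the
# estimated quantile up (or leave it unchanged), never down.
```

The design point matches what John describes: because the quantile is read straight off the (augmented) sample rather than a fitted distribution, scenarios the historical window never contained can still shape the risk number.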

Host: Paul Barnhurst (28:42):

Unknown unknowns and trying to help offset losses, trying to minimise the potential losses.

Guest: John Thomas  (28:49):

Yes. Yes. And there is a large problem within the financial services industry, in the sense that there are just too many people who are very structured with their engineering backgrounds, or very structured with their accounting and finance backgrounds, and they don't have a good understanding of how generative AI works. So to go back to unsupervised learning, supervised learning, reinforcement learning: where does generative fit into that? One way to fit it in alongside describing, predicting, or recommending is this: unsupervised learning is what happened, predictive analytics with supervised learning is what will happen, and recommendation engines with reinforcement learning are what should happen through a trial and error process. Generative asks the question, what type of data should we create? And to me, this is a perfect opportunity for considering scenarios you may not have thought of, which is a possibility to be concerned about in your risk management.

Host: Paul Barnhurst (29:51):

Got it. That's helpful. Appreciate that. We're going to move to the personal questions here in one minute, our fun rapid-fire questions, but I want to ask one more. If you could give one final piece of advice for finance professionals to be better prepared to use augmented intelligence, or be prepared for this whole AI revolution, what would be the biggest advice you'd give people?

Guest: John Thomas  (30:16):

Ask better questions. That's it. Interrogating models is more important than building the best models in this century's augmented intelligence revolution. When someone shows you an AI-powered trading strategy, your first instinct is to ask, how does it work? You should really be asking what assumptions it makes, including the assumptions you cannot see. Because here's the reality from 20 years in this field, the pattern I keep seeing over and over again: it's not really the technology that fails, it's that people fail to ask the right questions before they deploy the technology. They confuse a compelling backtest with a production-ready system. They mistake precision for accuracy. And I'll leave it at this: the questions you ask will always matter more than the model you build.

Host: Paul Barnhurst (31:11):

Really like that: the questions you ask will always matter more than the model you build. So Glenn, I don't care about that model you built.

Co-Host: Glenn Hopper (31:17):

Yeah, well, what did George Box say? All models are wrong anyway, right?

Guest: John Thomas  (31:23):

All models are wrong, but some are useful, because all of them are estimates, so they always have errors. That kind of contradicts people's engineering or math backgrounds, where there is no tolerance for errors. So if you bring in somebody from a background that has no tolerance for errors and you want them to estimate,

(31:41):

They're trying to get to a hundred percent, and if they get to 98%, they get upset that the remaining 2% is error. That doesn't really work. It's a probabilistic mindset, not a deterministic mindset, and it requires some training and coaching, which we provide at the Global Institute of Data Science. It's a lateral shift for a lot of people, kind of like moving from being a journalist to writing fiction novels. That's happened more than once; you've seen it, and it's something that requires a lot of careful attention with our client base.

Host: Paul Barnhurst (32:15):

Got it. Well, I know we're coming up against time, so I'm going to have a little fun here. We used AI to generate 25 questions based on the questions we asked you today, your LinkedIn profile, and the internet, to come up with some fun, personal, maybe a little quirky questions. So there are two options: you can be the human in the loop and pick a number between one and 25, or we can go all AI and let the random number generator pick the number.

Guest: John Thomas  (32:44):

All AI. Random number generator.

Host: Paul Barnhurst (32:46):

Alright, forget that, human in the loop is overrated. Alright, here we go: 17. And I haven't read these; I have no idea what we're going to get. Okay. You used Fama-French models heavily early in your career at Index Fund Advisors. For the finance nerds in the audience, is there a Fama-French factor that you think gets criminally underappreciated?

Co-Host: Glenn Hopper (33:12):

Wow. Claude went super nerdy on this one.

Guest: John Thomas  (33:16):

That's great. So Fama-French, it was my baby 20 years ago. If you want to learn the stock market, that's the first model you want to learn about, and it's very critical. Eugene Fama eventually won the Nobel Prize in Economics for it, and Kenneth French, who I think still teaches at Dartmouth, did all the computing work for him. I would say one area that's been ignored within Fama-French is momentum. There are three factors: the size of the company, the style of the company, and the market. I would also say the model itself probably doesn't do a good job with non-US companies that have a diverse set of subsidiaries that also trade in different stock markets. So that would be one area, and it ignores momentum. Is it used quite a bit? Yes, in quite a number of ways.

(34:10):

There are also some alternative models that built on Eugene Fama's work and gave birth to various other models that exist. But overall, the efficient market hypothesis is doing quite well. Let me give a little kickback to that question and take a couple of steps back to talk about AI feasibility, which is really important. In the year 1974, we had about 5,000 stocks trading in the US stock market. Then by 2005, that 5,000 had dropped to 3,800. And this is important for exploratory data analysis, because obviously from 1974 to 2005 the population increased, along with company profits and things like that. So we're squaring the circle here: fewer companies, but more money, more participants, and more products built on products within the financial services industry. What has happened is that companies have been delisting from the stock market and going into private equity. So if you want to learn about Eugene Fama and the Fama-French model, you also want to consider that a sizable number of companies have exited the sample and the datasets behind it, and you have to also focus on private equity. A good financial services firm would use Fama-French for publicly listed securities, but for private equity, Fama-French has nothing to do with it.
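For the finance nerds following along, the three-factor model John describes regresses a portfolio's excess returns on the market, size (SMB), and value (HML) factors. The sketch below runs that regression on synthetic data; the real factor series comes from Kenneth French's data library, and the loadings and noise level here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # 20 years of monthly observations

# Synthetic monthly factor returns: market excess, SMB (size), HML (value)
mkt = rng.normal(0.006, 0.04, n)
smb = rng.normal(0.002, 0.03, n)
hml = rng.normal(0.003, 0.03, n)

# Simulate a fund's excess returns with known factor loadings plus noise
true_betas = np.array([1.1, 0.4, -0.2])
excess_ret = 0.001 + 1.1 * mkt + 0.4 * smb - 0.2 * hml + rng.normal(0, 0.01, n)

# Fama-French three-factor regression:
#   R_i - R_f = alpha + b_mkt*(R_m - R_f) + b_smb*SMB + b_hml*HML + eps
X = np.column_stack([np.ones(n), mkt, smb, hml])
coefs, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha, betas = coefs[0], coefs[1:]
print("alpha:", round(alpha, 4), "betas:", betas.round(2))
```

With real data, a near-zero alpha after controlling for the three factors is the efficient-market result John alludes to: most of a fund's return is explained by its factor exposures, not manager skill.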

Host: Paul Barnhurst (35:36):

Thank you for that. I didn't know when we started today that we'd go deep on Fama-French; I had no idea. There you go, you never know where AI is going to take us. Over to you, Glenn.

Co-Host: Glenn Hopper (35:49):

Alright, so I'll do it a little bit differently. I figure since AI generated the questions, I'll let it pick one for us using a random number generator. I'm just letting generative AI pick one.

Host: Paul Barnhurst (36:02):

It ignores us humans altogether.

Co-Host: Glenn Hopper (36:05):

Which my wife would say is what I do 24/7 anyway.

Host: Paul Barnhurst (36:09):

Well, a lot of it has nothing to do with AI.

Co-Host: Glenn Hopper (36:12):

Yeah, moving on.

Co-Host: Glenn Hopper (36:16):

Alright, let's see. Alright, this is a weird one. I'm going to add a part to this. I don't know if this is meant to be a dig or something, but I'll read the question and add my phrase to it. You founded GIDS in El Segundo, California. Was that a deliberate choice? Do you just really like being near LAX, or are you still looking for your wallet?

Guest: John Thomas  (36:43):

Sure. So El Segundo is close to the airport, yes. It's also a kind of semi-industrial area that's good for offices. I'm a local next to El Segundo, in Manhattan Beach; it's a 10-minute bicycle ride there. Manhattan Beach is a nice place to live, and I've been living here for decades. So there's nothing important behind that question, nothing big.

Co-Host: Glenn Hopper (37:07):

A strange question, though. I added the bit about the wallet because that was the first thing I thought of.

Guest: John Thomas  (37:13):

That could be a hallucination. Yeah. Do you guys have any more questions?

Co-Host: Glenn Hopper (37:21):

I think we got it covered here, actually. I'll send the rest of these to you. I think Claude did a pretty good job with most of these, but John, we really appreciate you.

Host: Paul Barnhurst (37:28):

I'll send you over the 25, but that's it for today, just the two.

Guest: John Thomas  (37:30):

Okay.

Host: Paul Barnhurst (37:32):

Thank you so much for joining us, John. It was fun to chat. I enjoyed getting your perspective as somebody who's spent much of their career in AI; it's a different perspective from so many of us who are kind of new to this whole thing. I know Glenn's been here a long time, but I'm fairly new to figuring this all out. So really appreciate you sharing some of your thoughts with us today.

Guest: John Thomas  (37:54):

You're welcome. Hopefully I've been able to provide some useful information for your audience. And look forward to my book later this year, The Augmented Intelligence Revolution. It will also have glossaries, breakout sections, and diagrams, so it's a relatively quick read but also a big book, meant to provide some clarity into this space, which is drastically needed.

Host: Paul Barnhurst (38:17):

Awesome. Well, we're excited for it to come out. It'll be great to see you. And thanks again for joining us.

Guest: John Thomas  (38:23):

Thank you very much.

Host: Paul Barnhurst (38:24):

Thanks. Thanks for listening to the Future Finance Show. And thanks to our sponsor, QFlow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice, and may your robot overlords be with you.
