Bull Series - Why the Best AI Adoption Strategy for Wall Street Starts with Logo Hunting
The last few weeks have brought a wave of AI announcements targeting financial services, with Anthropic launching Claude for Financial Services and OpenAI rolling out ChatGPT Agent with impressive performance benchmarks for investment banking use cases. These launches heat up the competition, bringing foundation model platforms squarely into the competitive landscape alongside existing AI firms targeting financial services like Rogo, Hebbia, and AlphaSense.
As someone who spends significant time testing AI capabilities and their applicability in finance, I'm watching this development with both excitement and a bit of strategic concern. While these tools demonstrate remarkable technical sophistication, the real question isn't whether AI can handle complex financial workflows, but how we actually get Wall Street to adopt them.
From Guided Task Execution to Autonomous Agents
Earlier this year, I experimented with ChatGPT’s o3 model on a DCF analysis (read about it here). I was genuinely impressed by o3's reasoning and its ability to complete the analysis end to end, but I also discovered something crucial about how AI handles complex workflows.
During the experiment, I presented o3 with a basic Excel template that contained minimal assumptions and tried two approaches. First, I let it run free, which led to incorrect results due to o3 making methodological assumptions that didn’t align with what I wanted. Then, I explicitly instructed o3 to pause and ask clarifying questions when faced with ambiguity before proceeding. That changed everything. It began acting like a sharp junior analyst, proactively flagging uncertainties and seeking guidance instead of guessing. The result? Upon my request, it sent back an accurately completed analysis in an Excel file with values fully traceable and linked, along with a sensitivity table - all delivered in just over a minute.
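For readers who want to try the same instruction pattern programmatically rather than in the ChatGPT interface, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and message structure are illustrative assumptions, not a record of my exact setup.

```python
# Minimal sketch of the "clarify before you calculate" instruction pattern.
# Assumes the OpenAI Python SDK (pip install openai) and an o3-class model;
# the prompt wording below is illustrative, not my exact prompt.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "You are assisting with a DCF analysis. Before performing any step, "
    "check whether the assumptions you need (WACC, terminal growth, forecast "
    "horizon, tax rate) are explicitly provided. If anything is ambiguous or "
    "missing, pause and ask clarifying questions instead of guessing."
)

response = client.chat.completions.create(
    model="o3",  # assumed model name; substitute whichever reasoning model you use
    messages=[
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": "Here are my template assumptions: ..."},
    ],
)

print(response.choices[0].message.content)
```

The key design choice is that the clarification behavior lives in a standing instruction rather than in each individual request, so the model defaults to asking rather than assuming.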
This early hands-on experience with multi-task AI coordination gave me a preview of where the industry was heading. And now, with ChatGPT Agent and Claude for Financial Services officially out, we’re seeing the shift from guided task execution to autonomous agents. OpenAI’s agent reportedly handles building LBOs and three-statement models, and Claude has demonstrated the ability to synthesize market data and create traceable DCF models in minutes. Output consistency and accuracy will remain ongoing challenges, but these developments validate what I observed in my own testing: AI’s technical capabilities are advancing quickly toward readiness for professional-grade financial analysis.
But here’s the catch: impressive capabilities mean nothing if everyday practitioners don’t actually use them.
The Adoption Reality Check
The inconvenient truth about AI adoption in finance is that it's not just about technical capability; trust, workflow integration, and cultural acceptance matter just as much, if not more. And in my conversations with professionals across Wall Street, I've noticed a pattern: the AI tools generating the most buzz in press releases aren't necessarily the ones gaining traction in day-to-day workflows.
Why the disconnect? Because usage is built through trust, and trust is built through consistent small wins, not grand demonstrations. If a user’s first meaningful interaction with AI is to attempt to automate a large, complex workflow, it will often feel too ambitious or unfamiliar. It sets up a high barrier to entry: nontechnical professionals may feel they need to master new prompting techniques, understand the model’s limitations, or even restructure their workflow just to get started.
But when practitioners see AI reliably handling smaller, mundane tasks that they didn’t want to do themselves, it builds confidence. They start to think: "This tool really can help me. What else can it do? How far can it go?" That’s when they begin experimenting with more sophisticated use cases.
Thus, the pathway to AI adoption on Wall Street doesn’t start with tools automating large, complex workflows. It starts with proving reliability on simpler tasks first.
Why Start Small: How Banking Actually Works
To understand why “simple wins first” matters, you have to understand how investment banking (IB) and private equity (PE) professionals actually work versus how AI companies think they work.
First, analysts don't want AI to do the interesting work. Financial modeling isn't grunt work to them - it's valuable, skill-building analysis they want to perform themselves. What they do want is AI to handle the time-wasting, repetitive tasks so they have more time and mental bandwidth for the strategic thinking that modeling requires.
Second, bankers rarely build models from scratch. When working on DCFs or LBOs, analysts leverage prior models from similar deals, making appropriate adjustments for current circumstances. Or they start with the client's existing operating model and build valuation functions around it. So tools that generate full models from zero aren’t solving a daily problem - they’re solving an edge case.
Third, the real value lies in enhancing existing workflows, not replacing them. Bankers and PE professionals have refined processes that work within their firm's systems, compliance requirements, and partner preferences. Tools that generate "new pitch decks" or "AI-built financial models" from scratch require adopting entirely new workflows - a much harder sell than tools that make existing workflows and outputs better, faster, or less tedious.
This doesn’t mean sophisticated capabilities aren’t important. They absolutely are. AI-built models and pitch decks may eventually become the standard. But to get there, the first hurdle is building trust. And that starts with tools that plug into what analysts are already doing, not tools that ask them to work in entirely new ways.
The Trojan Horse Strategy
This is why the go-to-market strategy for AI adoption in finance needs to start with small, high-impact tools that are easy to use, clearly helpful, and capable of building immediate trust - then expand scope from there.
Plenty of AI firms are starting with due diligence automation and synthesis tools. Those are definitely valuable and helpful, no question there. But I’d argue that even simpler workflows, with even less of a learning curve, may be the better wedge - things like logo hunting and spreadsheet sanitization.
What is logo hunting? If you've worked in banking or PE, you know the unglamorous, repetitive, time-consuming grunt work that goes into hunting down high-resolution company logos for pitch decks and memos. Spreadsheet sanitization is the same kind of grind. Ahead of every deal launch, analysts comb through huge numbers of Excel files - fixing fonts, aligning columns, deleting stray notes - just to make them clean and presentable before uploading them to a data room.
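To make the spreadsheet-sanitization example concrete, this kind of cleanup is exactly the sort of thing that can be scripted. Below is a minimal, hypothetical sketch using the openpyxl library; the font choice, folder name, and which artifacts get stripped are assumptions for illustration, not a description of any vendor's tool.

```python
# Hypothetical sketch of pre-data-room spreadsheet cleanup with openpyxl.
# Normalizes fonts, clears stray cell comments, and widens columns to fit.
# Requires: pip install openpyxl
from pathlib import Path
from openpyxl import load_workbook
from openpyxl.styles import Font

STANDARD_FONT = Font(name="Calibri", size=11)  # assumed house style

def sanitize_workbook(path: Path) -> None:
    wb = load_workbook(path)
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                cell.font = STANDARD_FONT  # normalize fonts
                cell.comment = None        # delete stray reviewer notes
        # Rough column auto-width based on the longest value in each column
        for column_cells in ws.columns:
            width = max(
                (len(str(c.value)) for c in column_cells if c.value is not None),
                default=0,
            )
            ws.column_dimensions[column_cells[0].column_letter].width = max(10, width + 2)
    wb.save(path)

if __name__ == "__main__":
    for xlsx in Path("deal_files").glob("*.xlsx"):  # assumed folder of Excel files
        sanitize_workbook(xlsx)
        print(f"Sanitized {xlsx.name}")
```

The point isn't the script itself; it's that tasks this mechanical are ideal first assignments for an AI assistant, because the output is easy to verify and the downside of a mistake is low.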
These aren’t glamorous problems, but they’re real. AI tools that handle these workflows probably won’t make headlines, but they will get used, and they will make analysts’ lives meaningfully better. They don’t require advanced reasoning capabilities, and that’s exactly the point - they solve everyday pain points that waste time but don’t build real skill. By delivering immediate, tangible value with little to no learning curve or workflow disruption, these simple tools build trust and goodwill toward the AI platform. And that’s what drives repeat usage.
If some of these tools already exist within current AI platforms, there's an opportunity to position them more prominently. While sophisticated modeling capabilities are important and great for showcasing innovation, leading with short demos of AI quickly cleaning 20 Excel files or finding and formatting 30 logos will likely resonate even more with day-to-day practitioners.
Now imagine an analyst who’s successfully used these basic AI tools for weeks. When a more complex financial task comes up, they’ll be far more willing to spend the time exploring the platform’s advanced features and experimenting with ways to integrate them into more sophisticated workflows. These small wins become a powerful entry point that helps build an active, recurring user base.
The Long Game
This trust-building pathway becomes even more powerful when you factor in what’s coming next. Tools like Claude for Financial Services and ChatGPT Agent are already capable of complex reasoning and multi-step execution. As these capabilities improve, the potential to reshape workflows is very real.
And the stakes are especially high on Wall Street. These firms are notoriously slow to adopt new tools, but once they do, they rarely switch: change-resistant firms become incredibly sticky customers. That makes first-mover advantage more important than ever, and it’s why nailing the trust-building path matters so much. Win early, and you could lock in long-term relationships with some of the most valuable institutions in finance.
Finally, let’s not forget who really drives adoption: the junior team members who use these tools day to day. While MDs and partners may make the final call, they’re increasingly asking their teams: “Which AI tools do we actually need?” Analysts and associates who get real value from AI on simple tasks become the most credible advocates for expanding usage across the firm and for pushing it into more sophisticated workflows.
The pathway to widespread AI adoption in finance runs through trust, and trust builds through consistent value delivery on everyday problems. Start with the mundane, prove reliability, then expand to the transformational.
The question isn't whether AI can build financial models. It's whether it can earn the right to try.
What simple but annoying tasks in your workflow would make great AI adoption catalysts? I'd love to hear about the unglamorous problems that could serve as gateways to more sophisticated AI usage.