AI headlines pile up quickly. This recap will land every few weeks to help you get caught up on the latest AI developments that actually matter, without the noise. Click on titles for the full articles. Enjoy!
1) AI × Wall Street
Banks triple AI use cases: Evident’s new Brief shows use cases tripled at top banks in H1’25, with the heaviest deployment activity concentrated in retail / private banking. Most announcements, though, do not include ROI figures, so performance can’t be inferred from public data yet. The push now shifts toward scaling genAI and “agentic” tools across other parts of banks.
Seth Klarman thinks AI is intern-tier: Baupost’s CEO calls AI “a capable assistant,” helpful for tabulating data, scanning 10-Ks, and quick visual ID work (e.g., logos), but not a stock picker. He warns that over-reliance on AI can dull one’s creativity and critical thinking, reinforcing that human judgment should anchor investment decisions.
Google Finance gets an AI makeover: Google is testing an in-page chatbot, advanced charting, and a live news feed in Finance so that users can ask complex market questions without leaving the platform.
TL;DR - Adoption is clearly accelerating inside existing finance workflows, though public ROI remains largely undisclosed. Meanwhile, Google is closing the “AI escape hatch” by adding chatbot-style answers to Finance, giving users fewer reasons to jump from its platforms to other assistants for complex queries.
2) Platform Firm Moves That Matter
OpenAI’s GPT-5 landed roughly: GPT-5’s launch sparked pushback over errors, a “colder” tone, and restrictive usage limits. OpenAI responded by promising fixes, including warming the model’s personality, temporarily restoring GPT-4 access, and adding more customization controls - all while facing increasing competition and compute strain.
Claude Sonnet 4 now supports a 1M-token context (public beta): Anthropic’s API customers can now use Claude Sonnet 4 with a 1 million-token context window - the equivalent of multiple books, dozens of research papers, or a large codebase. This means users can load massive documents and keep asking questions across long conversations without the model losing context.
Oracle ↔ Google Cloud - Gemini models become available natively on OCI: Oracle clients can now switch on Gemini directly within Oracle Cloud, no extra setup required. That convenience makes Gemini the easier choice for enterprises already on Oracle, giving Google a distribution edge.
TL;DR – OpenAI stumbled with GPT-5’s rollout, facing user backlash and scrambling to patch tone and usability issues. Anthropic is leaning into differentiation with a 1M-token context, making Claude useful for book- or codebase-scale work. And Google scored a distribution win by making Gemini available natively through Oracle Cloud, lowering adoption barriers for enterprises already on OCI.
3) Risks & Regulation
Deepfake “CEO” scams are now material: AI-generated impersonations of executives have already cost companies more than $200M this year, with reported cases at Ferrari, Wiz, WPP, and others. Scams often use real-time fake video or voice calls to trick employees into wiring funds or handing over sensitive data.
U.S. “Financial AI Sandbox” bill introduced: Financial services firms are already experimenting with AI under existing compliance rules, but there’s no dedicated AI regulation in the industry yet. This bipartisan proposal would change that: it would require financial regulatory agencies to set up AI Innovation Labs - safe-harbor sandboxes where firms can test AI tools under regulator supervision without worrying about being penalized.
EU AI Act - first binding AI law goes live: As of August 2025, the EU’s new AI Act officially applies to providers of general-purpose AI models. Companies like OpenAI, Anthropic, and Google must now follow rules on transparency, safety, and accountability. This makes Europe the first region in the world with a comprehensive, legally binding AI law, setting a global precedent for how AI oversight may evolve elsewhere.
TL;DR - Corporate losses from deepfake scams show the immediate dangers of AI misuse, while a U.S. bill proposes supervised financial AI sandboxes, and the EU has rolled out the world’s first binding AI law. Regulation is shifting from talk to action, with Europe setting the precedent and the U.S. testing narrower approaches.
4) Culture & Society
Meta child-safety backlash: A Reuters investigation found Meta’s AI chatbots could engage in sexual role-play with minors despite internal warnings. That revelation triggered immediate probes from U.S. Senators and the Texas AG. The same report also flagged broader risks around bias in race, age, and celebrity depictions in Meta’s AI characters.
xAI’s Grok “Spicy” mode sparks FTC calls: Elon Musk’s xAI added a “Spicy” mode to its Grok-Imagine image tool, letting users generate NSFW content. Consumer groups warned it could enable deepfakes and non-consensual sexual imagery, and flagged that the only barrier stopping minors from accessing it is a single self-select age-check pop-up. The groups have urged the FTC to investigate.
Vogue/Guess AI model backlash: Guess ran an ad in Vogue using AI-generated “models” instead of real people. The move triggered criticism that it undermines human models’ work and promotes unrealistic beauty standards, fueling a broader debate on AI replacing creative professionals.
[Image] AI-generated ads that appear in the August edition of Vogue magazine. | Seraphinne Vallora - Source: PetaPixel
TL;DR – AI is under fire in culture. Meta is facing probes over child-safety failures in its chatbots, consumer groups are urging the FTC to investigate xAI’s weak safeguards around NSFW image generation, and Vogue/Guess faced backlash for using AI “models” in ads.