01 Enterprise AI
Why most enterprise AI implementations fail at adoption, not capability

The AI capability gap closed faster than anyone predicted. The models are good enough. What hasn't caught up is the organizational layer — the workflows, incentives, and mental models that determine whether a tool gets used or quietly abandoned after the pilot quarter.

I've watched this pattern repeat across industrial and digital contexts. A team deploys a sophisticated AI system. The technology performs. Adoption stalls at 20%. The vendor gets blamed. But the real failure was upstream: no one redesigned the work itself to accommodate the new capability. The AI was installed on top of existing processes rather than integrated into them, and it made the old way feel slower without making the new way feel natural.

The companies that actually capture ROI from AI share a counterintuitive trait — they implement more slowly in the first 90 days. They spend that time doing workflow archaeology: mapping what people actually do versus what the org chart says they do, identifying where friction lives, and redesigning around the tool rather than alongside it. They treat adoption as a product problem, not a training problem.

The practical implication: your AI roadmap should have as many entries under "process redesign" as it does under "model deployment." Capability without adoption is infrastructure spend. Adoption without redesign is shelfware with a better interface.

The best proxy metric I've found for predicting AI adoption success: does the team that will use the tool have a representative in the implementation design process from day one?

02 Product Leadership
The industrial design discipline that digital product teams are missing

Industrial design is fundamentally the practice of solving a problem within constraints you cannot change — materials, physics, manufacturing tolerances, ergonomics. Every decision compounds. A handle that's 4mm too wide changes the grip, changes the fatigue profile, changes the return rate. There is no "ship and iterate" on a physical product that's already in a factory run.

Digital product teams operate in the opposite culture. The assumption of reversibility — "we can always fix it in the next sprint" — is both the greatest freedom and the most dangerous habit in software development. It produces teams that optimize for velocity and underweight the cost of design debt.

What I've imported from industrial design practice into digital product work is a forcing function I call constraint-forward design: before any feature goes into scoping, the team has to articulate the irreversible constraints — the data architecture decisions, the API contracts, the user mental models — that this feature will lock in. Not to slow down development, but to ensure the decision is made consciously rather than incidentally.
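One way to make that gate concrete is to encode it as a structured record that blocks scoping until every constraint class has a named owner. This is an illustrative sketch, not a prescribed tool; the field names and the three required areas are assumptions drawn from the examples above.

```python
from dataclasses import dataclass, field

@dataclass
class IrreversibleConstraint:
    """One decision the feature will lock in once shipped."""
    area: str           # e.g. "data architecture", "API contract", "user mental model"
    decision: str       # what gets locked in
    reversal_cost: str  # honest estimate ("low" / "medium" / "high"), not aspiration
    owner: str          # who accepts the lock-in consciously

@dataclass
class FeatureScopingGate:
    """Hypothetical gate: no scoping until the lock-ins are articulated."""
    feature: str
    constraints: list[IrreversibleConstraint] = field(default_factory=list)

    def ready_to_scope(self) -> bool:
        # Every constraint area must be covered and explicitly owned, so the
        # decision is made consciously rather than incidentally.
        required_areas = {"data architecture", "API contract", "user mental model"}
        covered = {c.area for c in self.constraints}
        return required_areas <= covered and all(c.owner for c in self.constraints)
```

The point of the structure is not automation; it is that a feature with an empty or partial constraint list is visibly not ready to scope.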

The discipline changes the conversation. Teams that practice it ship less frequently but accumulate far less architectural debt. They also tend to build things that are genuinely differentiated rather than incrementally similar to a competitor's last release — because constraints, properly understood, are a source of distinctive form, not just limitation.

Raymond Loewy's "MAYA" principle — Most Advanced Yet Acceptable — was an industrial design concept. It applies to enterprise software just as cleanly. Most digital product teams have no equivalent framework for knowing when a design is ahead of its users.

03 Strategy
Edge computing as a competitive moat for mid-market industrial firms

The hyperscaler narrative has convinced most mid-market leadership teams that cloud migration is the only viable path to AI readiness. This is strategically wrong for a specific class of industrial firm — and the ones that recognize it early will have a durable advantage their larger competitors will struggle to replicate.

For manufacturers, distributors, and industrial service firms operating with proprietary process data, the real moat isn't the algorithm — it's the data that's never left the building. Edge AI deployments — inference running at or near the point of production, without cloud round-trips — turn that proprietary data into a continuous feedback loop that improves operational performance in ways that are invisible to competitors and difficult to recreate even with equivalent capital.
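The shape of that feedback loop is simple enough to sketch. Everything named here is a placeholder (the model, the sensor read, and the actuation call stand in for whatever runtime and plant I/O a site actually uses), but it shows the essential property: inference, action, and logging all happen locally, and the on-prem log is the asset that retraining later compounds.

```python
import json
import time

def edge_inference_loop(model, read_sensor, actuate, log_path, steps, interval_s=0.0):
    """Local inference at the point of production: no cloud round-trip.

    `model`, `read_sensor`, and `actuate` are hypothetical stand-ins for the
    site's actual stack; `steps` bounds the loop for illustration, where a
    real deployment would run continuously.
    """
    records = []
    with open(log_path, "a") as log:
        for _ in range(steps):
            reading = read_sensor()        # proprietary process data stays on-prem
            prediction = model(reading)    # local inference, no network hop
            actuate(prediction)            # close the loop back into the process
            record = {"t": time.time(), "in": reading, "out": prediction}
            # The accumulated log of inputs and outcomes, kept in the building,
            # is what periodic retraining turns into a compounding advantage.
            log.write(json.dumps(record) + "\n")
            records.append(record)
            if interval_s:
                time.sleep(interval_s)
    return records
```

Nothing in the loop is exotic; the moat comes from where it runs and where the log lives, not from the code.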

The barriers are lower than most assume. A modern edge inference stack for a mid-market manufacturer can be deployed for a fraction of the cost of a cloud migration project, with a shorter integration timeline, and with a security posture that's more defensible for regulated or IP-sensitive environments. The missing ingredient is almost never budget or infrastructure — it's product leadership with enough technical range to translate between the operations team, the data team, and the C-suite without losing fidelity at each handoff.

The window for this advantage is 24–36 months. After that, the tooling becomes commoditized and the moat requires either scale or proprietary data that has already been accumulating for years. The firms moving now are building the latter.

I've observed this pattern most clearly in precision manufacturing and industrial asset management — sectors where the operational data is extraordinarily rich but has historically been used only for backward-looking reporting rather than real-time optimization.

04 PE / Portfolio
What private equity gets wrong about product-led growth in platform acquisitions

Product-led growth is one of the most misread frameworks in the PE playbook. The term gets applied to any company with a self-serve trial or a freemium tier. The actual discipline — building a product so well-fitted to a specific user workflow that acquisition, activation, and expansion happen through product use rather than sales motion — is rare and fragile in acquisition contexts.

The typical PE integration thesis assumes that adding sales infrastructure accelerates a product-led company. It usually does the opposite. PLG companies are calibrated against their natural distribution channel — the product itself. When you insert a sales team, you change the unit economics, the customer expectations, and the feedback loop that drives product improvement. You get short-term revenue acceleration and long-term product stagnation.

What actually works in PLG acquisitions is the inverse: invest in product velocity before adding sales capacity. Use the 90 days post-close to get precise about which user workflow the product is genuinely best in class at — usually one, not four — and eliminate everything that blurs that identity. Then build the go-to-market around that sharpened identity rather than deploying a generic enterprise sales motion over a product that was never designed to support it.

The firms I've seen execute this well share a common trait: they put a product leader — not a sales leader — in charge of the first 180 days of post-acquisition integration. The sequencing matters. Revenue follows product clarity; it rarely precedes it in PLG contexts.

This is the kind of assessment that a 90-day diagnostic engagement is designed to produce — not a market sizing exercise, but a precise account of where the product has authentic velocity and where it's been artificially supported by heroics.

05 Org Design
The VP of Product role is being split in two — and most orgs don't know it yet

The VP of Product title covers an increasingly wide range of actual work. In some organizations it means deep roadmap ownership and engineering partnership. In others it means market positioning, pricing strategy, and commercial alignment. The most demanding version means doing both — running the product org while sitting in revenue and board conversations where the product is being evaluated as a strategic asset rather than a feature backlog.

What I'm watching in high-growth enterprise companies is a structural split that hasn't been named yet. There's a VP of Product Execution — who owns the operating system of the product org: rituals, prioritization frameworks, team velocity, the engineering relationship — and a VP of Product Strategy — who owns the product's relationship to the market, the investor narrative, and the competitive positioning. Both roles still carry the same title: "VP of Product."

The confusion creates two distinct hiring failures. Companies that need strategic product leadership hire for execution skills and get a product manager who was promoted. Companies that need operational discipline hire for strategic orientation and get a consultant who was given a team. Neither is wrong for their context — but the mismatch costs six months of organizational drag minimum.

The diagnostic question is simple: in your organization, does "product leadership" mean running teams or running conversations? The honest answer to that question tells you what kind of product leader you actually need — and whether the person you have is doing the job that exists or a different job you haven't acknowledged yet.

My own range covers both — I've run product teams of 12 and I've owned board-level product narrative in the same role. But most VPs are optimized for one, and most orgs don't know which one they need until after the hire.