anton maximov

30 April 2026

AI adoption playbook - the org

Most conversations I have about AI adoption are anxious ones, and the anxiety is not unfounded. The companies that seem to have figured it out pull away faster than the rest can think, and FOMO crowds out judgement at exactly the moment it matters most. What follows is meant to cut through that: tangible things I have done over the past year at a fintech startup and on personal projects, offered less as a playbook than as a record of how I have been thinking and operating.

Direction of travel

It is useful to make a top-down, exec-level commitment to adopting and mastering the tools, and to the changes in the way the company works that follow. Top-down clears blockers, aligns people, and gives permission; but given the speed of change and adoption, the org needs grassroots energy to deliver.

The commitment should be ambitious, and even if one cannot commit to specific headline-grabbing metrics right away, it should be an org-wide push. As with any top-down initiative, it should be repeated regularly and woven through every communication channel.

North Star - if I were to call out one, it is the new way of working that is already here, and here to stay: AI is the co-worker/co-pilot you reach out to first. We are switching to agent-first for knowledge work. Yes, it is ambiguous, but the details will be company-specific.

I believe the above will also feed into your own product features as you build intuition for how the work changes, because your customers’ way of working is evolving too.

Ownership

Directly Responsible Individual - name an accountable executive to drive this (which C* exactly depends on your stage and structure). Make it their quarterly/yearly goal and tie metrics to it. This creates a single place for decision-making; otherwise it might be your Security or IT or even Legal & Compliance wagging the dog.

Enablement team - initially the enablement work lands on the Platform/IT teams (evaluating the tools, connecting them to the ecosystem, supporting them), leaning on grassroots energy and individual champions, with executive support. At some level of scale it might be prudent to invest in an internal enablement team, even if it is just a single individual. The platform will become an internal product as the tools and processes accumulate.

Role of the platform - it is important to note that the platform at its best is not a gatekeeper but an enabler: the place of concentrated investment and a clearing house for ideas. The speed of change requires innovation and experimentation to happen all around the company, not in one centralized place, so the platform should enable that and otherwise get out of the way.

Churn - given how frothy the space is, there is a perceived danger in continuously chasing the latest innovation; it could become a full-time job in itself. In my experience (given responsible adults and a startup environment) I have seen the opposite - people are so focused on daily work that they have no time to experiment. YMMV, and this is where a dedicated allocation to the enablement team can help ensure experimentation does happen in a responsible manner.

Clarity

Keep removing friction - what can we use, how, what data is appropriate to feed into the tools, what are the budgets, which accounts (personal vs. corporate), what is off-limits, can we point these tools at x/y/z - all of these should be continuously clarified, updated, and broadcast. There should be a place where people can ask these questions and get an immediate answer (most likely a dedicated Slack channel). Another useful tool I have found is the Thoughtworks Radar format - arranging tools and practices into Assess/Trial/Adopt rings and publishing the result regularly.
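
As an illustration, here is a minimal sketch of what a published radar snapshot could contain; the entries are placeholders for illustration, not recommendations:

```python
# Hypothetical internal AI tools radar, refreshed and broadcast e.g. monthly.
# The entries are illustrative placeholders; the Assess/Trial/Adopt structure is the point.
radar = {
    "Adopt":  ["IDE coding assistant (corporate accounts only)"],
    "Trial":  ["agentic code review on pull requests"],
    "Assess": ["background agents for support ticket triage"],
}

for ring, tools in radar.items():
    print(f"{ring}:")
    for tool in tools:
        print(f"  - {tool}")
```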

Eventually all of the above gets pushed into the tools and the platform so that the questions do not even arise.

Culture - in smaller companies it is easier to lean on a culture of openness, where one tries things out in the open and invites help early. It helps to broadcast that attitude constantly and model a curiosity-first approach. Clarify the high-level guardrails continuously, and do it with visible excitement rather than freaking out with the compliance hat on: “let me find a way to make it work” vs. immediately jumping into risk mitigation.

Budget

Runaway costs - I established some high-level spend limits with the model providers, then monitored to see who bumped into the limits and why, and raised the limits as needed. The key is to have enough observability, plus hard limits against accidental runaway costs.
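
As a minimal sketch of what that observability can look like - assuming a hypothetical per-user usage export in CSV form; the column names, limit, and threshold below are made up for illustration:

```python
import csv
from collections import defaultdict

# Hypothetical numbers - set these from your own budget conversation.
MONTHLY_HARD_LIMIT_USD = 1000.0
ALERT_AT_FRACTION = 0.8  # start a conversation at 80% of the limit


def spend_report(usage_csv_path: str) -> None:
    """Summarize per-person spend from a provider usage export.

    Assumes a CSV with 'user' and 'cost_usd' columns; adapt to whatever
    export your model providers or billing tooling actually give you.
    """
    totals = defaultdict(float)
    with open(usage_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["user"]] += float(row["cost_usd"])

    for user, spend in sorted(totals.items(), key=lambda kv: -kv[1]):
        if spend >= MONTHLY_HARD_LIMIT_USD:
            status = "hit the hard limit - find out why, raise if justified"
        elif spend >= ALERT_AT_FRACTION * MONTHLY_HARD_LIMIT_USD:
            status = "approaching the limit"
        else:
            status = "ok"
        print(f"{user:24s} ${spend:9.2f}  {status}")
```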

How much - with some data one can start budgeting. It helps to have a finance partner who can create a separate AI category to track and hand-wave some allocation for it. For reasoning through the allocation it helps to compare the AI expense with engineering salaries (e.g. in early 2025 we started with GitHub Copilot at $20/eng/month, moved to Cursor at $60/eng/month, and then went into Claude Code at $1,000 and more per engineer per month as of late 2025).
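
To make the salary comparison concrete, here is a back-of-the-envelope calculation using the per-engineer figures above; the $180k fully loaded annual cost is an assumption for illustration, not a number from my own budget:

```python
# Monthly AI spend per engineer at each stage (figures from the post).
stages_usd_per_month = {
    "GitHub Copilot (early 2025)": 20,
    "Cursor": 60,
    "Claude Code (late 2025)": 1000,
}

# Hypothetical fully loaded engineer cost - substitute your own number.
loaded_annual_cost_usd = 180_000
loaded_monthly_cost_usd = loaded_annual_cost_usd / 12  # 15,000

for stage, ai_spend in stages_usd_per_month.items():
    share = ai_spend / loaded_monthly_cost_usd
    print(f"{stage}: ${ai_spend}/eng/month ≈ {share:.1%} of loaded cost")

# Even the $1000/eng/month tier is under ~7% of a $15k/month loaded cost,
# which is the kind of framing that makes the allocation conversation easier.
```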

Spend as a signal - I want to know what the spend per person is, but I do not hold it against people. I treat it as a signal that, at most, invites a deeper conversation. These are professionals with accountability, and we are making a deliberate investment. When it comes to efficiency, I lean on the paraphrased engineering maxims, “premature optimization is the root of all evil” and “make it work, then make it fast.” There are many tools and techniques out there when the time comes.

Adoption

The problem of scaling adoption is not unique, but the pace is uniquely high, so discovery and adoption need to move faster. It does not help that the tools and patterns change every few months. Recognizing this leads to a more deliberate investment in crowd-sourced discovery and a continuous effort to lower the barrier to entry.

Carrot vs stick - having always worked in smaller, super-senior, high-agency teams, I reach for the carrot a lot more: building excitement, fostering competition, lowering the barrier to adoption, reframing job responsibilities, supporting experimentation, and working through initial setbacks in the open. The mandate comes through tool use (once proven), performance management, and resource allocation - in other words, through structural pressure.

Slack time - the dream is that your employees will spend nights and weekends getting up to speed on the new way of working. This is naive at best. Your job as a leader is to continuously lower the barrier to entry. It is important to create slack; otherwise people are so maxed out on day-to-day tasks that they cannot invest any time in tool adoption. It could be as simple as managers saying, “it’s ok to take longer on the task if you are experimenting,” or granting half a day for research - whatever form it takes.

Information radiators - the obvious menu of options includes recorded lunch-and-learns, weekly showcases, workshops, and dedicated Slack channels that build excitement with news and demos and also answer questions immediately.

Transient tiger teams - a pattern I have not tried yet: an enablement team that parachutes in and teaches a team how to AI. I think it makes sense at a larger scale.

Hackathons - run them regularly (e.g. twice a year, two days at a time). Execs should serve as judges to lend it legitimacy. The whole company should participate so that people can form cross-functional teams across disciplines. Do not overthink it - start with self-forming teams and let them decide on ideas. Later you can invest in pre-seeding ideas, but early on let people go wild. It is a great way to break up the startup grind.

Guilds - depending on your company culture they might be an effective way to capture those who are passionate, but without time allocation, a mandate, and avenues to share, they might not go anywhere.

Champions - spot them and elevate them. Anecdotally, it is the early skeptics who become the most effective and bring others along. Even better if they are seasoned veterans others look up to. When they are the ones championing AI adoption, it carries a lot more weight.

Mindset

It is important to reflect on how we think and talk about AI adoption: not breathless elation, not ridicule, but a thoughtful middle ground.

Embracing change and mistakes - everyone needs to become more comfortable with being a novice, with a higher rate of change, with challenging how we work, and ultimately with mistakes. One will feel like a novice again, but the immediate positive results should create excitement. Acknowledge it. Model this behavior publicly and stress lessons learned (e.g. a runaway engineering implementation due to a lack of holistic judgement, or a deleted local repo - both are memetic).

“AI psychosis” - this is not a clinical diagnosis, but a reflection of the fact that the work of judgement and evaluation, coupled with context switching across agents, is much more demanding and mentally draining. The rush of building, the gratification of short feedback loops, competitive FOMO, unused agent tokens burning a hole in your pocket, and the coding agent just a phone tap away - all of this keeps people up late, disrupts sleep, and drains them a lot faster. Keep an eye out for it, account for the fact that the work has gotten more intense, and notice early signs of burnout. Deal with it early so you do not lose an employee - people managers should be aware of the risk and have a toolbox of HR interventions available.

Control for slop - elevate the importance of judgement and human agency. If one lets slop through in whatever form, they are accountable for it, not the LLM. I do not need people to credit the LLM in their work, since I expect them to use LLMs routinely, but I do want them to be accountable for the result - its correctness, thoroughness, succinctness, and fitness for purpose. It is their responsibility, not the LLM’s.

People

This is the operational side: AI adoption needs to be threaded through the org’s people processes.

Recruiting - use AI adoption as a differentiator and marketing tool. People should want to join the AI-forward team, and you need to articulate your approach and positioning in job descriptions and other public materials.

Interview evaluation - for any job family, a candidate should demonstrate how they use AI tools. Ideally, your prompt should have them build something they can bring to the interview and walk you through how they built it. Find other ways to test this as well.

Onboarding - for the org at large and individual teams, make sure people are set up with the tools and practices from day one. It might seem obvious, but onboarding well is hard and things fall through the cracks all the time, especially at startups. The process requires continuous upkeep as the company grows rapidly.

Performance management - AI adoption and its effective use should become a rubric in performance management evaluations. The organization also needs to hold leaders accountable for AI adoption for their teams.

Metrics

Adoption is a good leading indicator, token leaderboards are useful and invite conversations, $$ spent is a loud metric - all of those should be tracked over time to notice trends. Treat them as the first signals you establish.

The rest should ultimately be business-specific - we are not doing AI for the sake of AI. Your business/department/team should already have had metrics/KPIs before the adoption; that is the baseline to compare against what you get with AI, and that is how you measure impact. Do not worry about ROI upfront; it will be a useless exercise given the emerging and compounding nature of AI’s productivity impact.

The above should also account for the secondary effect of human work moving up the value chain as agents take over the routine. Developer experience and engagement surveys do help here.

External narrative

The board, potential investors, and prospective clients will ask, so it is prudent to have a narrative prepared. At the very least one should be able to say, “we are on top of it, we are riding the wave, we are not in denial.” And of course, depending on your positioning, you can emphasize responsible adoption, the balance of risk and speed, and so on.

I would at least highlight adoption metrics, point out a few sanitized use cases, mention AI in product, and outline the structure in place (accountable executive, budgets, metrics, etc).

Risk and compliance

Doing GRC (Governance, Risk, and Compliance) well is a competitive advantage for a fintech. You have to build this muscle and exercise it constantly. Of course, it should be appropriate for the company stage and risk you are willing to take.

Where this crops up immediately is in the prospective clients’ questionnaires, RFPs, partnerships, and various audits. Do not let yourself be caught off-guard and resort to hand-waving.

For a startup you can answer ~90% of inbound questionnaires with: a one-page AI policy, an employee acceptable use policy, a subprocessor list with AI vendors flagged, a model/use-case inventory, DPAs with no-training clauses on file, and a paragraph in your security overview / trust portal that says all this in plain English.
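
As an illustration of the model/use-case inventory item on that list, here is a minimal sketch of what one entry could capture; the field names and values are hypothetical, not a prescribed schema:

```python
# One hypothetical entry in a model/use-case inventory. Keeping it as
# structured data makes it easy to reuse in questionnaire answers, the
# subprocessor list, and the plain-English trust portal paragraph.
inventory_entry = {
    "use_case": "coding assistant for engineering",
    "tool": "example-coding-agent",            # placeholder vendor/tool name
    "accounts": "corporate only",
    "data_allowed": ["source code", "internal docs"],
    "data_prohibited": ["customer PII", "production secrets"],
    "dpa_on_file": True,
    "no_training_clause": True,
    "owner": "compliance, with the accountable executive as escalation",
    "last_reviewed": "2026-04-01",
}
```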

Maintaining this over time is a challenge, of course. Hook into your GRC platform, involve your IT (your device management and identity provider) and employ your vendor management process and software. Make your compliance department accountable for the above - someone has to own it.

This does not mean a massive IT and platform build-out upfront that snarls adoption, but you should push controls and policies into the platform over time so they are enforced through software, not paperwork.

Part 2

The technology part is a bit more tactical, so I will leave it for the follow-up post.