AI tools are now part of the default dev workflow, but we keep our stack small and intentional. This short stack covers most of our work in 2026: Claude Code and Codex for heavy lifting and cross-checking, GitHub Copilot CLI for terminal-first tasks, Lovable for fast prototypes, and OpenCode with Azure OpenAI when a client needs enterprise compliance.
The goal is simple: one tool to plan, one to verify, one to move fast in the terminal, and one to prototype UI and flows before we commit engineering time.
TL;DR (core stack)
- Claude Code - plan-driven refactors, deep repo work, and architecture tasks.
- Codex (OpenAI) - run tasks in a sandbox or locally, with clear execution logs.
- OpenCode + Azure OpenAI - enterprise-compliant alternative with data residency controls.
- GitHub Copilot CLI - quick help in the terminal: scripts, commands, and explainers.
- Lovable - rapid prototypes and UI flows to validate ideas fast.
Claude Code: plan-driven refactors with context
Why we use it: When a task spans multiple modules, Claude Code is our go-to for mapping the codebase, proposing a plan, and executing safely. It is best when we need structure before speed.
Where it shines:
- Understanding large repos without manually collecting context.
- Refactors that need a multi-step plan and guardrails.
- Architecture changes with explicit tradeoffs.
Sample prompt:
"Plan a migration from REST to GraphQL for the /orders domain. List files, risks, and tests. Then apply changes and summarize the diff."
Codex (OpenAI): execution loop with clear traces
Why we use it: Codex is excellent for task-oriented execution. We use it to build and test in a repeatable loop and to verify outputs from Claude Code.
Great for:
- – "Do X, run tests, and give me a patch" workflows.
- – Cross-checking complex changes independently.
- – Auditable steps and command logs for reviews.
Sample prompt:
"Add CSV import for users with validation, add tests, run them, and summarize failures before fixing."
OpenCode with Azure OpenAI: enterprise-grade flexibility
Why we use it: For enterprise clients with strict compliance requirements, we run OpenCode connected to Azure OpenAI. This gives us the same Codex-style execution loop while keeping data within Azure's enterprise boundaries.
Key benefits:
- Data stays in your Azure tenant with full compliance controls.
- Works with GPT-4o, GPT-4.1, and other Azure-hosted models.
- Open-source CLI with a familiar terminal-first workflow.
- Easy to integrate into existing Azure DevOps pipelines.
When we reach for OpenCode:
- Client requires data residency or SOC 2 compliance.
- Internal projects with sensitive codebases.
- Teams already invested in Azure infrastructure.
Setup note:
Point OpenCode at your Azure OpenAI endpoint, set the deployment name, and you get the same agentic coding experience with enterprise guardrails.
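Before wiring OpenCode up, a quick pre-flight check against Azure saves debugging later. A minimal sketch, assuming the conventional `AZURE_OPENAI_ENDPOINT` / `AZURE_OPENAI_API_KEY` environment variables and an `AZURE_OPENAI_DEPLOYMENT` name of our own choosing; the variable names OpenCode itself reads depend on its config, so treat these as placeholders:

```python
import os
import urllib.request

# Pre-flight check before pointing OpenCode at Azure OpenAI. The env var
# names follow the common Azure OpenAI convention and are placeholders for
# whatever your OpenCode config actually reads.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]   # e.g. https://<resource>.openai.azure.com
api_key = os.environ["AZURE_OPENAI_API_KEY"]
deployment = os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o")

# Azure exposes chat completions per deployment at this path; the api-version
# value changes over time, so treat this one as a placeholder too.
url = (
    f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
    "/chat/completions?api-version=2024-06-01"
)
print("OpenCode should target:", url)

req = urllib.request.Request(
    url,
    data=b'{"messages":[{"role":"user","content":"ping"}],"max_tokens":1}',
    headers={"api-key": api_key, "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print("Azure responded with HTTP", resp.status)
```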
Why we run Claude Code and Codex together
We often run the same task in both tools and compare the results. It is a fast way to catch blind spots: if both agree on structure and tests, we move faster; if they diverge, we dig in and tighten the requirements.
Our cross-check loop (scripted sketch below):
1. Claude Code proposes the plan and initial patch.
2. Codex re-implements or validates the change with tests.
3. We compare diffs, reconcile gaps, and re-run tests.
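A minimal sketch of that loop as a script, assuming headless invocations for both tools (`claude -p`, `codex exec`) and a `main` base branch; the exact flags and permission settings depend on your installed versions:

```python
import difflib
import subprocess

# Cross-check sketch: run the same task through each agent on its own branch,
# commit the result, and compare the two patches. The headless flags
# (`claude -p`, `codex exec`) and the `main` base branch are assumptions.
TASK = "Add CSV import for users with validation and tests."

def run_and_capture(agent_cmd, branch):
    subprocess.run(["git", "checkout", "-B", branch, "main"], check=True)
    subprocess.run(agent_cmd + [TASK], check=True)
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", f"cross-check: {branch}"], check=True
    )
    return subprocess.run(
        ["git", "diff", "main", branch], capture_output=True, text=True, check=True
    ).stdout

claude_patch = run_and_capture(["claude", "-p"], "xcheck/claude")
codex_patch = run_and_capture(["codex", "exec"], "xcheck/codex")

# A unified diff of the two patches is a quick way to see where they diverge.
for line in difflib.unified_diff(
    claude_patch.splitlines(), codex_patch.splitlines(),
    fromfile="claude.patch", tofile="codex.patch", lineterm="",
):
    print(line)
```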
GitHub Copilot CLI: terminal-first acceleration
Why we use it: Copilot CLI is our fastest path to small tasks in the terminal. It helps generate one-off scripts, explain command output, and reduce context switching when we are already in a shell.
Everyday tasks:
- Generate bash/zsh snippets safely and quickly.
- Turn a goal into a command chain.
- Explain logs or error output in plain language.
Mini prompts:
"Create a git command to list files changed in the last 3 commits and export to CSV."
"Explain this npm error and suggest the next command to try."
Lovable: rapid prototypes before engineering
Why we use it: Lovable lets us turn ideas into clickable prototypes fast. It is ideal for validating UI flows, landing pages, and early product concepts before we invest in full builds.
Prototype flow:
- 1. Define the user story and success criteria.
- 2. Build the flow in Lovable and collect feedback.
- 3. Translate the validated flow into engineering tickets.
How we choose the right tool for the job
| Scenario | Best fit | Why |
| --- | --- | --- |
| Large refactor with multiple steps | Claude Code | Planning and deep repo context. |
| Independent verification of changes | Codex | Execution loop and audit trail. |
| Enterprise or compliance-sensitive work | OpenCode + Azure OpenAI | Data residency and SOC 2 compliance. |
| Quick terminal task or script | Copilot CLI | Fast command generation and explanations. |
| Early product or UI prototype | Lovable | High-speed prototyping and validation. |
Practical tips that saved us time
- Always ask for a plan: It prevents surprises and improves review quality.
- Cross-check big changes: Run the task in Claude Code and Codex, then compare.
- Keep prompts tight: Clear constraints beat long narratives.
- Prototype before you build: Lovable reduces rework by validating early.
- Stay in the terminal: Copilot CLI removes context switches for small tasks.
Final take
We are not trying to use every tool. We use a small, reliable set that covers planning, verification, terminal speed, and prototyping. That balance keeps output high while still letting engineers stay in control.