100% AI-Generated Code: Can You Code Like Boris?


Analyst(s): Mitch Ashley
Publication Date: February 3, 2026

Boris Cherny, head of Claude Code at Anthropic, stated in a recent Forbes article that he generates 100% of his code using the AI tool he builds, shipping up to 27 pull requests daily with zero manual edits. Peer-reviewed independent research published in Science tells a very different story.

What is Covered in this Article:

  • Claims by the head of Anthropic's Claude Code that 100% of his code is AI-generated
  • Independent peer-reviewed analysis of AI-generated code across 30 million GitHub Python commits
  • Comparison between vendor internal usage claims and observed developer outcomes
  • Implications for enterprise expectations, workforce planning, and AI adoption maturity

The News: Boris Cherny, head of Claude Code at Anthropic, said in a late January 2026 Forbes article that 100% of his code for the past two months was generated by Claude Code and Opus 4.5, with no manual edits. Cherny reported shipping 22 pull requests one day and 27 the next, each entirely AI-written. In other words, this is a tool creator using the product he builds.

The claims contrast sharply with independent research published in the journal Science in January 2026. Analyzing over 30 million Python commits on GitHub by 160,000 developers, researchers found that 29% of Python functions committed by U.S. developers are AI-written, with productivity gains of 3.6% accruing exclusively to experienced developers.


Analyst Take — Vendor Claims Grow While Independent Evidence Reveals Reality Gap: Vendor claims about AI code generation have escalated rapidly over the past nine months. Microsoft CEO Satya Nadella reported in April 2025 that about 30% of Microsoft’s code was generated by AI. Salesforce cited a similar figure of roughly one-third in mid-2025. Google has not disclosed internal adoption rates for Gemini Code Assist. An Anthropic spokesperson stated in January 2026 that company-wide AI-generated code ranges from 70% to 90%, with the Claude Code product itself at approximately 90%.

These figures represent vendors' internal use of their own tools. No vendor has disclosed customer adoption rates or published independent customer studies of productivity.

The claims reached their apex in late January 2026, when Cherny reported that 100% of his code was generated by AI with zero manual editing. At the 2026 World Economic Forum, Anthropic CEO Dario Amodei predicted the industry may be six to twelve months away from AI handling most or all software engineering work end-to-end.

The Science study examined Python functions on GitHub, providing the broadest empirical evidence available but not comprehensive coverage of all programming languages, private repositories, or enterprise codebases. The study found measurable productivity gains only among experienced developers, with no statistically significant benefit for early-career engineers.

Vendor claims suggest rapid acceleration, while independent evidence points to modest gains concentrated among experienced developers. The gap between vendor claims and empirical evidence is not just quantitative. It is structural.

Can You Be Like Boris? The Evidence Says Not Yet

Boris Cherny’s AI-generated code achievement is real. It is also not a realistic benchmark for most developers and organizations. Cherny leads Claude Code development at Anthropic. He uses the tools he builds to develop the products he creates. His workflow represents the upper bound of what is possible under optimal conditions: an expert user with direct access to model development, intimate knowledge of tool capabilities and limitations, workflows designed from scratch around AI code generation, and governance frameworks built alongside the tool itself.

Organizations are betting on these claims. The Futurum 2026 Software Lifecycle Engineering Decision Maker survey found 53.9% of organizations expect “high” productivity impact from AI in software development, with another 36.4% expecting “medium” impact. The Science study measured actual productivity gains at 3.6%, accruing only to experienced developers. The gap between enterprise expectations (90% anticipating at least medium productivity gains) and empirical reality (3.6% actual gains limited to senior engineers) represents significant execution risk for organizations restructuring development workflows based on vendor claims rather than evidence.
Vendors must address the gap between tool creators and buyers by helping organizations set realistic expectations for how developers will use AI in daily production work. Transparent answers here can build significant customer goodwill and temper many of the claims vendors make in the media.
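
The cited expectations gap is simple arithmetic on the figures above. A minimal sketch, assuming only the survey shares and the Science study's measured gain as inputs (variable names and output format are ours, not from either source):

    # Illustrative arithmetic only: recomputing the expectations-versus-evidence
    # gap from figures cited in this note.
    expect_high = 0.539    # orgs expecting "high" AI productivity impact (Futurum survey)
    expect_medium = 0.364  # orgs expecting "medium" impact (Futurum survey)
    measured_gain = 0.036  # gain measured in the Science study, experienced developers only

    expect_at_least_medium = expect_high + expect_medium
    print(f"Expecting at least medium impact: {expect_at_least_medium:.1%}")  # 90.3%
    print(f"Measured gain (senior engineers only): {measured_gain:.1%}")      # 3.6%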

Outlook For Vendors

The trajectory toward widespread AI code generation depends on closing three critical gaps between vendor demonstrations and customer outcomes: evidence, governance, and workforce transformation.

The timing of workforce transformation depends on evidence that vendors have not provided. The Science study’s finding that AI benefits only experienced developers contradicts vendor narratives of democratization and suggests that traditional career ladders may persist longer than vendors predict. Organizations face the challenge of restructuring hiring when vendor claims and empirical evidence diverge.

Enterprise adoption decisions over the next 12-18 months will either validate vendor timelines or expose execution gaps between vendor capabilities and customer realities. The platform that publishes comprehensive customer evidence across languages, experience levels, and repository types, rather than relying on tool creator experiences, will establish market credibility and define realistic adoption trajectories for the industry.

Vendors that continue to rely on insider demonstrations rather than customer evidence risk losing credibility with enterprise buyers planning large-scale workforce and workflow changes.

What to Watch:

  • Watch whether vendors publish customer AI code generation rates and productivity metrics across languages and experience levels, not just internal usage claims.
  • Monitor studies examining AI code generation for Java, C++, JavaScript, and other enterprise languages to verify whether the 29% rate and productivity gap hold.
  • Track software engineering job postings for changes in entry-level availability, as evidence shows early-career developers see no AI productivity gains.
  • Observe whether organizations measure productivity improvements across all developer levels or identify benefits concentrated among senior engineers (a minimal measurement sketch follows this list).
  • Watch whether vendors compete on customer success evidence rather than feature parity or internal productivity demonstrations.
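
On the measurement point above, here is a minimal sketch of per-cohort measurement, assuming a throughput metric of merged pull requests per developer-week and Welch's t-test for significance. The metric, cohort labels, helper function, and placeholder samples are illustrative assumptions, not a method from the Science study or any vendor:

    # Minimal sketch: compare productivity gains by experience cohort before
    # and after an AI tooling rollout. Metric, cohorts, and samples are
    # illustrative placeholders; substitute your organization's real data.
    from statistics import mean
    from scipy import stats  # Welch's t-test for unequal variances

    def cohort_gain(before, after):
        """Relative throughput gain and p-value for one cohort."""
        gain = (mean(after) - mean(before)) / mean(before)
        _, p_value = stats.ttest_ind(after, before, equal_var=False)
        return gain, p_value

    # Placeholder samples: merged PRs per developer-week, one value per developer.
    cohorts = {
        "senior": ([4.8, 5.1, 4.6, 5.0], [5.0, 5.3, 4.9, 5.2]),
        "junior": ([3.1, 2.9, 3.3, 3.0], [3.0, 3.2, 2.9, 3.1]),
    }
    for label, (before, after) in cohorts.items():
        gain, p = cohort_gain(before, after)
        print(f"{label}: gain {gain:+.1%}, p = {p:.2f}")

Aggregate gains can hide a senior-only effect; breaking the measurement out by cohort is what surfaces the pattern the Science study reported.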

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

1H 2026 Software Lifecycle Engineering Decision Maker Survey Report (subscribers)

Karpathy’s Thread Signals AI-Driven Development Breakpoint

AgentOps: AI Agents Take Command of Workflow Automation

FuturumWatch: Agentic AI Needs the Agentic AI Foundation

Author Information

Mitch Ashley

Mitch Ashley is VP and Practice Lead of Software Lifecycle Engineering for The Futurum Group. Mitch has more than 30 years of experience as an entrepreneur, industry analyst, and product development and IT leader, with expertise in software engineering, cybersecurity, DevOps, DevSecOps, cloud, and AI. As an entrepreneur, CTO, CIO, and head of engineering, Mitch led the creation of award-winning cybersecurity products used in the private and public sectors, including the U.S. Department of Defense and all military branches. Mitch also led managed PKI services for the broadband, Wi-Fi, IoT, energy management, and 5G industries, product certification test labs, an online SaaS handling 93 million transactions annually, the development of video-on-demand and Internet cable services, and a national broadband network.

Mitch shares his experiences as an analyst, keynote and conference speaker, panelist, host, moderator, and expert interviewer discussing CIO/CTO leadership, product and software development, DevOps, DevSecOps, containerization, container orchestration, AI/ML/GenAI, platform engineering, SRE, and cybersecurity. He publishes his research on futurumgroup.com and TechstrongResearch.com/resources. He hosts multiple award-winning video and podcast series, including DevOps Unbound, CISO Talk, and Techstrong Gang.

