Vibe Coding · AI · Development · Future of Coding

What Is Vibe Coding? The New Way to Build Software

Vibe coding means describing what you want and letting AI figure out how. Here's where the term came from, what research says about it, and whether it actually works.


In February 2025, Andrej Karpathy—former Tesla AI director, OpenAI founding member, and one of the most respected voices in machine learning—posted something on X that would name a movement already underway.

He called it vibe coding.

Within months: 156,000 people joined r/vibecoding, putting it in the top 1% of Reddit communities. 25% of Y Combinator's Winter 2025 batch reported having 95% AI-generated codebases. Microsoft and the University of Michigan published academic research on it. Cambridge researchers wrote papers analyzing it as a transformation in knowledge work.

This isn't a meme. It's how software is being built now.


What Vibe Coding Actually Means

Here's Karpathy's original definition from that February 2, 2025 post:

"Fully give in to the vibes, embrace exponentials, and forget that the code even exists."

He described his new workflow:

  • Accept all AI-suggested changes without reading them line by line
  • When errors happen, copy-paste them back to the AI
  • Let the code grow beyond what you can fully comprehend
  • Trust the process

His conclusion: "It's not really coding anymore — I just see stuff, say stuff, run stuff, and copy stuff."

The essence of vibe coding:

Traditional Coding | Vibe Coding
Write code yourself | Describe what you want
Understand every line | Trust the output
Debug by reading code | Debug by describing symptoms
Fix syntax errors | Iterate through conversation
Skill = implementation | Skill = specification

You describe what you want in natural language. AI writes the code. You don't need to understand every line. You iterate by describing problems and desired outcomes, not by fixing semicolons.

The "vibe" is trusting the process while maintaining enough oversight to catch when things go wrong.


The Origins: From Copilot to Vibes

Vibe coding didn't appear overnight. It emerged from a rapid evolution in AI capabilities:

2021: GitHub Copilot. AI starts suggesting lines of code as you type. Developers get used to accepting suggestions. The first taste of AI-assisted coding.

2023: ChatGPT and Code Generation. AI can now write entire functions on request. Copy-paste workflows begin. People start asking "why am I writing this myself?"

2024: Claude, GPT-4, and Complex Tasks. AI handles multi-file changes, understands project context, explains code, and debugs errors. The capability gap between human and AI implementation shrinks dramatically.

2025: Agentic Tools. Cursor, Bolt, Windsurf, Devin—tools that don't just suggest code but build entire features autonomously. AI plans, executes, and iterates. The human becomes director, not typist.

February 2025: Karpathy Names It. The tweet didn't invent vibe coding. It named what thousands were already doing. The term stuck because it captured something real: a fundamentally different relationship with code.

Why it emerged:

  • AI got good enough to handle complete tasks
  • Context windows expanded from thousands to millions of tokens (AI can see entire projects)
  • Agent frameworks enabled multi-step, autonomous execution
  • Non-technical people realized they could build real software

What Academic Research Says About Vibe Coding

This isn't just practitioner hype. Serious researchers are studying it.

The Microsoft/University of Michigan Study (September 2025)

"Good Vibrations: Perceptions, Behaviors, and Changing Trust in AI-Assisted Programming" analyzed:

  • 190,000+ words from interviews and social media posts
  • 11 in-depth interviews with vibe coding practitioners
  • 88 Reddit and LinkedIn posts

Key findings:

The study found that vibe coding exists on a trust spectrum. At one end: AI assists while human leads. At the other end: AI does while human observes. Where you sit on that spectrum depends on earned trust through successful iterations.

Pain points identified:

  • Specification difficulty: Translating ideas into prompts that work
  • Reliability concerns: AI makes confident mistakes
  • Debugging challenges: When vibes fail, traditional skills are needed
  • Latency frustration: Waiting for AI responses breaks flow

The study's central insight: "Trust is the regulating mechanism." Practitioners build or lose trust based on results, and adjust their approach accordingly.

The Cambridge Study (June 2025)

Cambridge researchers introduced the concept of "material disengagement"—the programmer deliberately stepping back from working directly in code.

Their framing: AI becomes a "producer-mediator" of the code substrate. The programmer's relationship with their creation becomes indirect, mediated through conversation rather than direct manipulation.

This was the first academic paper to frame vibe coding as a fundamental transformation in knowledge work, not just a new tool.

The ACM Grey Literature Review (2025)

"Vibe Coding in Practice" analyzed practitioner experiences across blogs, podcasts, and social media, validated against a "quasi-gold standard" of recognized experts including Karpathy and Simon Willison.

Their characterization: vibe coding is an intuition-driven, trial-and-error development style where the traditional edit-compile-run loop is replaced by describe-generate-evaluate.


The Vibe Coding Community

The numbers tell a story:

Metric | Value
r/vibecoding members | 156,000+
Reddit ranking | Top 1% of all communities
Y Combinator W25 batch with 95% AI codebases | 25%
Growth rate | Exponential since Feb 2025

Garry Tan, Y Combinator's CEO, has called this "the dominant way to code" for a generation of new founders.

Who's vibe coding:

  • Technical founders: Ship products without hiring engineering teams
  • Product managers: Build working prototypes without developer dependencies
  • Designers: Implement their own designs directly
  • "Citizen developers": Non-technical people building real, functioning software
  • Experienced developers: Offload tedious implementation to focus on architecture

Real examples:

Kevin Roose, technology columnist at The New York Times, has written about building personal apps he calls "Software for One"—custom tools that solve specific problems in his life, built entirely through vibe coding.

Y Combinator startups are launching with solo founders and AI-generated codebases. Indie hackers are shipping products in days instead of months.

The barrier between "idea person" and "builder" is dissolving.


Tools Built for Vibe Coding

A new category of tools has emerged specifically for this workflow:

Tool | Approach
Cursor | AI-native code editor with multi-file editing and Composer mode
Bolt.new | Describe and get a running app in seconds
Replit Agent | Conversational app building in the browser
Lovable | Visual-first AI development
Windsurf | "Cascade" technology for flow-based coding
Claude Code | Command-line agent for complex development tasks
Devin | Autonomous AI software engineer

The common pattern:

  1. Natural language input (describe what you want)
  2. AI plans the approach
  3. AI executes across multiple files
  4. Human reviews results
  5. Iteration through continued conversation

These aren't chatbots that generate code snippets. They're agents that build complete features, handle errors, and iterate until the task is done.
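
To make that pattern concrete, here is a minimal sketch of the loop in Python. `generate_patch` and `run_checks` are hypothetical placeholders for a model call and an evaluation step, not any particular tool's API; agentic tools automate exactly this cycle for you.

```python
# A minimal sketch of the describe -> generate -> evaluate loop.
# The two helpers are placeholders, not a real API.

def generate_patch(spec: str, feedback: str | None = None) -> str:
    """Placeholder: ask a model for code that satisfies the spec (and any feedback)."""
    raise NotImplementedError("plug in your model or agent here")

def run_checks(code: str) -> tuple[bool, str]:
    """Placeholder: run tests or a review pass and return (passed, output)."""
    raise NotImplementedError("plug in your test runner here")

def vibe_loop(spec: str, max_rounds: int = 5) -> str:
    feedback = None
    for _ in range(max_rounds):
        code = generate_patch(spec, feedback)   # AI plans and writes the change
        passed, output = run_checks(code)       # human or harness evaluates the result
        if passed:
            return code                         # done -- still review what matters
        feedback = output                       # paste the error back and iterate
    raise RuntimeError("vibes exhausted; time to read the code yourself")
```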


The Criticism and Concerns

Not everyone is enthusiastic.

Andrew Ng's Objection

Andrew Ng, co-founder of Google Brain and one of AI's most prominent figures, called the term "misleading." His concerns:

  • It might encourage genuinely sloppy practices
  • It undersells the skill still required
  • "Vibes" sounds unserious for serious work

The "Vibe Coding Hangover"

Fast Company's September 2025 analysis documented how early enthusiasm is meeting reality:

  • Code that works but nobody understands
  • Technical debt accumulating invisibly
  • Debugging AI-generated code is genuinely hard
  • The gap between "working prototype" and "production system" remains wide

Security Research (Veracode 2025)

The numbers are concerning:

  • 45% of AI-generated code fails security tests
  • Java code: 72% failure rate
  • AI doesn't understand your specific threat model
  • Confident-sounding code with subtle vulnerabilities

The METR Study (July 2025)

Perhaps most surprising: experienced open-source developers were 19% slower when using AI assistance on familiar codebases.

The study revealed a gap between perceived benefit and actual benefit. AI helps with some tasks and hurts with others. The blanket assumption that "AI makes everything faster" doesn't hold.


Where Vibe Coding Excels

Despite the criticism, vibe coding genuinely works for certain use cases:

Good fit:

  • Prototypes and MVPs
  • Personal projects and internal tools
  • Well-understood problem domains
  • Tasks with clear, verifiable success criteria
  • Greenfield development (starting fresh)
  • Standard CRUD applications
  • UI implementation from designs

Less good fit:

  • Safety-critical systems
  • Complex legacy codebases
  • Highly regulated domains (healthcare, finance)
  • Performance-critical applications
  • Security-sensitive code
  • Novel algorithms
  • Systems requiring deep domain expertise

The pattern: Vibe coding excels at "getting something working." It's less reliable for "making it production-ready at scale." The gap between demo and deployment remains significant.

The best results come from combining vibe coding speed with human review of what matters: security, business logic, edge cases.


How to Vibe Code Effectively

Research and practitioner experience suggest these approaches:

Iterative Verification

Don't blindly trust, but don't read every line either. Find the middle ground (a quick example follows this list):

  • Test functionality as you go
  • Verify business logic is correct
  • Check edge cases
  • Review security-relevant code carefully
  • Trust but verify incrementally
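
As a sketch of what that middle ground can look like: `apply_discount` below stands in for a function an AI just generated, and a few plain asserts pin down the business rule and the edge cases before you build on top of it. The function and amounts are invented for illustration; run the checks with pytest or directly.

```python
# Quick verification of a hypothetical AI-generated function: pin down the
# business rules and edge cases yourself rather than eyeballing the code.

def apply_discount(total_cents: int, percent: int) -> int:
    """Stand-in for AI-generated code under review (amounts in integer cents)."""
    return total_cents - (total_cents * percent) // 100

def test_known_values():
    assert apply_discount(10_000, 10) == 9_000    # $100.00 minus 10% is $90.00

def test_edge_cases():
    assert apply_discount(0, 50) == 0             # empty cart
    assert apply_discount(10_000, 0) == 10_000    # no discount
    assert apply_discount(10_000, 100) == 0       # full discount

def test_rounding_rule():
    # 3 cents at 50% off: decide the rounding rule yourself, not the model.
    assert apply_discount(3, 50) == 2

if __name__ == "__main__":
    test_known_values(); test_edge_cases(); test_rounding_rule()
    print("all checks passed")
```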

Clear Specification

Better prompts produce better results:

Vague | Specific
"Build a dashboard" | "Build a dashboard showing daily sales totals, top 5 products by revenue, and a line chart of sales over the past 30 days"
"Add authentication" | "Add email/password login with password reset via email, store sessions in cookies, redirect to /dashboard after login"
"Make it look better" | "Use a clean design with lots of whitespace, system fonts, and a blue primary color (#3B82F6)"

Know When to Intervene

Some tasks need manual attention (see the sketch after this list):

  • Security implementations
  • Complex business logic
  • Performance optimization
  • Integration with unfamiliar systems
  • Anything involving money or sensitive data
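
Money is the classic case. As a hedged illustration of why: AI-generated code often computes prices with binary floats, which cannot represent most decimal amounts exactly; Python's `decimal` module (or integer cents) is the standard fix, and it is worth checking that generated code uses one of them.

```python
# Why money handling deserves a manual look: binary floats vs. exact decimals.
from decimal import Decimal

print(0.1 + 0.2 == 0.3)        # False: 0.1 + 0.2 is 0.30000000000000004

price = Decimal("19.99")
tax = (price * Decimal("0.0825")).quantize(Decimal("0.01"))  # round to whole cents
print(price + tax)             # 21.64, exact to the cent
```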

Maintain Debugging Skills

When vibes fail—and they will—you need fallback skills (a minimal-reproduction example follows this list):

  • Read error messages carefully
  • Understand basic code flow
  • Know how to isolate problems
  • Be able to verify AI explanations
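
A minimal-reproduction habit covers most of these at once. In the sketch below, `parse_date` is a hypothetical stand-in for whatever AI-generated helper is misbehaving; shrinking the failure to one input and one call makes the traceback readable and the next prompt (or manual fix) obvious.

```python
# Isolate the problem: one input, one call, one readable traceback.
from datetime import datetime

def parse_date(value: str) -> datetime:
    """Stand-in for an AI-generated helper that fails somewhere in the app."""
    return datetime.strptime(value, "%Y-%m-%d")

# Minimal reproduction of the bug report "dates from the CSV import crash":
parse_date("03/14/2025")
# Raises: ValueError: time data '03/14/2025' does not match format '%Y-%m-%d'
# The fix (or the next prompt) is now obvious: the import uses US-style dates.
```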

Review What Matters

Not all code is equally important. Focus review time on the following (a concrete example comes after the list):

  • Authentication and authorization
  • Data validation
  • Financial calculations
  • External API integrations
  • Database operations
  • User input handling
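
As a concrete example of what review looks like for database operations and user input handling, the sketch below contrasts string-built SQL with a parameterized query. The table and inputs are invented for illustration.

```python
# One thing to look for in generated database code: string-built SQL vs. parameters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 'admin')")

user_input = "' OR '1'='1"

# Risky pattern AI sometimes produces: user input spliced into the SQL string.
risky = f"SELECT role FROM users WHERE email = '{user_input}'"
print(conn.execute(risky).fetchall())            # [('admin',)] -- rows it should never return

# What to look for instead: parameter binding handled by the driver.
safe = "SELECT role FROM users WHERE email = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```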

Where This Is Going

The trajectory is clear:

Agent capabilities are improving rapidly. What requires human intervention today may be fully automated next year. Context windows keep growing. Models keep getting smarter.

More tasks are becoming "vibeable." Complex multi-step operations that required human oversight are increasingly handled autonomously.

Tools are getting better at self-correction. Modern agents test their own code, catch errors, and iterate without human prompting.

But human judgment remains essential. Knowing what to build, evaluating whether it's correct, and making decisions about trade-offs—these remain human skills.

Garry Tan's Warning

The Y Combinator CEO has raised an important question:

"What happens when a 95% AI-generated codebase has 100 million users?"

We're in uncharted territory. The startups launching today with AI-generated code will face scaling challenges we don't fully understand. Debugging production issues in code you didn't write—that no human wrote—is genuinely novel.

The honest answer: we don't know yet.


The Bottom Line

Vibe coding is real. It's growing. Academic research validates it as a genuine paradigm shift in how software gets built.

It's also not magic. There are real limitations, real risks, and real failure modes. The criticism isn't just resistance to change—there are legitimate concerns about security, maintainability, and the gap between prototype and production.

The right approach:

  • Informed adoption, not blind faith
  • Trust earned through successful iterations
  • Human review where it matters
  • Maintained ability to intervene when vibes fail
  • Clear-eyed understanding of limitations

The skill isn't just prompting. It's knowing when to vibe and when to verify. When to trust the output and when to inspect it. When AI handles implementation and when you need to take the wheel.

For the right use cases, vibe coding genuinely works. It's not going away. The question isn't whether to engage with it, but how to do so effectively.


Try Vibe Coding in a Unified Environment

Orbit is built for vibe coders. Describe what you want to build, and AI agents handle the implementation. No configuration, no context switching, no friction between idea and working software.

Join the waitlist →

