Part 2 of 3: Build Something Series
The AI Building Revolution: What's Real and What's Hype
In Part 1, I talked about "building something" broadly - a software product, a service business, a local company, a creative venture. Maybe even something that isn't a business at all: starting a family, diving into volunteer work, running for local office, finally writing that book.
All of these are valid forms of building. All of them can emerge from the disruption of a career transition. And most of them don't require AI or technology at all.
But this part of the series focuses specifically on one slice: using AI tools to build software products. Why? Because it's what I know - it's my direct experience with ReApply and FitCheck. And because there's so much hype and misinformation about what's actually possible that I think an honest perspective is valuable.
If your "building something" is a dog walking business, a coaching practice, a nonprofit, or a creative pursuit - the fundamentals from Part 3 (product-market fit, validation, go-to-market) will apply directly. You can skim this part or skip to Part 3.
But if you've ever had a software idea and thought "I can't build that, I'm not a developer" - read on. The landscape has genuinely changed.
Before We Dive In: Building Isn't Just Software
Let me be clear about something: the AI tools revolution doesn't mean everyone should build software.
If your thing is opening a bakery, starting a tutoring business, becoming a consultant, or creating art - do that. Those are real ventures that create real value. They don't require AI coding tools. They require the same things they've always required: understanding your market, delivering value, finding customers, managing the work.
Maybe now's the time to start that family you've been putting off. Maybe it's the time to finally commit to volunteer work you've always wanted to do. Maybe it's time to take up painting, or start writing, or train for a marathon you've been dreaming about.
All of these are forms of building. Building a life. Building meaning. Building something that matters to you.
The process is the same regardless of what you're building: honest self-assessment, planning, taking action, iterating based on feedback. The specific tools differ - but the fundamentals don't.
What follows is about one specific set of tools for one specific type of building. It's not the only path, and it's not the right path for everyone. But for those who've had software ideas they couldn't pursue because they couldn't code - this is worth understanding.
The AI Software Building Hype (And Reality)
You've probably heard the hype: AI can build anything now. You don't need developers. Just describe what you want and the robots will create it.
Let me give you the honest version, from someone who's actually building real products this way.
ReApply and FitCheck - the platforms I've been discussing throughout these blog posts - were built by a non-developer using AI coding tools. That's not marketing spin. I don't have a computer science degree. I've never worked as a software engineer. I can't whiteboard algorithms or explain big-O notation.
What I can do: I've run technology projects for 25 years. I understand systems. I know how to break complex problems into components. I can read code well enough to understand what it's doing. And I have deep domain expertise in the problem I'm solving.
That combination, plus AI tools, was enough to build production software that real people use.
But - and this is critical - it wasn't magic. It wasn't "describe your app and watch it appear." It was something more nuanced.
What Actually Changed
Let me be precise about what AI coding tools have done, because the discourse swings between "AI replaces all developers" and "AI is just autocomplete." Neither is right.
Before AI coding tools (pre-2023):
If you had a software idea and couldn't code, your options were:
- Learn to code (months to years)
- Hire developers (expensive, communication overhead)
- Find a technical co-founder (rare, requires equity split)
- Use no-code tools (limited functionality, vendor lock-in)
Each path had significant barriers. Learning to code is a real investment. Hiring developers requires capital and the ability to evaluate work you don't fully understand. Technical co-founders are in high demand. No-code tools cap what you can build.
After AI coding tools (now):
A new path exists: work with AI to build software you couldn't build alone.
Not "tell AI what to build and watch it happen."
Work with AI. Iteratively. Back and forth. Debugging together. Understanding what it produces. Guiding it toward what you actually need.
This path requires different skills than traditional development. But they're skills that people with domain expertise, design sense, and project or product management experience often already have.
The Tools Landscape
There are three major AI coding tools worth knowing about:
Claude Code (Anthropic)
This is what I use. Claude Code is Anthropic's command-line tool that gives Claude direct access to your codebase - it can read files, write code, run commands, and iterate on solutions.
Strengths: Best reasoning I've found. Handles complex architectural decisions. Understands context deeply. Can work through multi-step problems.
Limitations: More technical interface (command line). Steeper learning curve. Requires understanding what you're asking for.
Best for: People who are comfortable with technical concepts even if they can't code. Complex projects that need sophisticated reasoning.
Cursor
A code editor (based on VS Code) with AI deeply integrated. You write alongside AI suggestions and can chat about your code.
Strengths: Visual interface. Good for working within existing codebases. Tab-completion workflow feels natural.
Limitations: Less suited for building from scratch. AI context is more limited.
Best for: Developers augmenting their work. Iterating on existing code. People who prefer visual environments.
Windsurf
Another AI-integrated editor, competing with Cursor.
Strengths: Strong integration. Good for iterative development. User-friendly interface.
Limitations: Similar category as Cursor - better for enhancement than ground-up building.
Best for: Similar use cases to Cursor. Worth trying both to see which fits your style.
My honest assessment: Claude Code is the most powerful but also the most demanding. If you're comfortable with technical concepts and willing to learn, it unlocks the most capability. If you want something more approachable, Cursor or Windsurf lower the barrier but also somewhat limit the ceiling.
What AI Can Actually Do
Let me be specific about what's genuinely possible:
Generate working code from descriptions.
You can describe a function, a feature, or even a full system architecture, and get working code back. Not perfect code, but working code that you can test and iterate on.
Explain code you don't understand.
This is huge for non-developers. AI can read code and explain what it does, why it might be doing it that way, and what would happen if you changed things.
Debug problems.
When something breaks, AI can analyze error messages, trace through logic, and identify issues. It's not always right, but it dramatically accelerates the debugging process.
Refactor and improve existing code.
AI can take working-but-messy code and restructure it to be cleaner, faster, or more maintainable.
Learn new technologies quickly.
Need to integrate with an API you've never used? AI can show you how, explain the patterns, and generate the integration code.
Maintain context across a project.
Modern AI tools can understand your entire codebase, not just the file you're looking at. This matters a lot for consistency.
What AI Can't Do
Here's where the hype falls apart:
Give you good ideas.
AI can build what you describe. It cannot tell you what's worth building. The hardest part of entrepreneurship - identifying problems worth solving - is still entirely human.
Replace domain expertise.
If you don't understand the problem you're solving, AI can't save you. All the code in the world won't help if you're building the wrong thing.
Make strategic decisions.
Architecture choices, technology selection, build vs. buy, what to prioritize - these require judgment that AI can inform but not replace.
Ensure quality automatically.
AI generates code that often works but sometimes has subtle bugs, security issues, or performance problems. Someone needs to verify quality. If you can't evaluate what it produces, you're in trouble.
Handle truly novel problems.
AI is trained on existing patterns. For problems that don't fit established patterns, AI struggles. The more novel your domain, the more human judgment matters.
Keep you from technical debt.
Without guidance, AI will generate working code that becomes unmaintainable. Someone needs to think about code organization, documentation, and long-term structure.
The Real Unlock: Planning and Editing
Here's the insight that took me months to fully understand, and it's the thing that separates people who succeed with AI tools from people who get frustrated and give up:
You have to become an exceptional planner and an exceptional editor.
This is the part the hype misses entirely. People hear "AI can write code" and imagine describing their idea and watching it materialize. That's not how it works. What actually works is this:
Every project needs a detailed plan.
For software, that means a real product requirements document (PRD). Not "I want an app that does X." A detailed breakdown: What are the user flows? What are the integration points? What data needs to be stored and how? What are the edge cases? What does success look like? How will you verify it works?
For every new feature, you need the same rigor: What's the use case? How does this fit with existing functionality? What might break? What are the acceptance criteria?
AI works brilliantly when you give it a clear plan. AI hallucinates when it has to invent missing elements of your plan. The more gaps in your thinking, the more the AI fills those gaps with plausible-sounding nonsense.
Your plan must include verification.
How do you know the thing works? This is where engineering discipline matters. You need tests - ways to verify the plan was actually successful. Not just "it runs without errors" but "it does what I intended under the conditions I care about."
This is also where many people get ground down. They don't understand what testing means in software. They don't understand what "security hardening" involves. They don't know about edge cases, race conditions, or the dozen other things that separate code that works in a demo from code that works in production.
All of these concerns need to be in your plan. If they're not, AI won't add them - or worse, it will add something that looks right but isn't.
You have to be a relentless editor.
AI produces output. That output is not final. It's a first draft - often a pretty good first draft, but a draft nonetheless.
Your job is to review everything. Does this make sense? Does it match what I asked for? Does it follow the patterns established elsewhere in the codebase? Is it secure? Is it maintainable? Will I understand this in six months?
You're not writing the code, but you're responsible for the code. That means reviewing, questioning, and refining constantly.
The Uncomfortable Truth About Ignorance
Here's something people don't want to hear: even though AI is building the software, you need to understand how building software works.
You don't need to know how to write code from scratch. But you need to know the concepts. You need to understand what "separation of concerns" means. What "KISS" (Keep It Simple, Stupid) implies. What technical debt is and why it matters. What security vulnerabilities look like. What good architecture patterns are.
Why? Because your ignorance will be reflected back at you - often multiplied.
If you don't know that putting secrets in code is a security risk, AI won't stop you from doing it. If you don't understand database indexing, your queries will be slow and you won't know why. If you don't understand authentication patterns, you'll build something that's trivially hackable.
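As one concrete illustration of the secrets problem - a minimal Python sketch, with a hypothetical environment variable name, showing the pattern AI won't insist on unless you know to ask for it:

```python
import os

# Risky pattern: a key hardcoded in source travels with every copy of the
# repository (and every AI context window that reads the file).
# API_KEY = "sk-live-abc123"   # don't do this

def load_api_key(var_name: str = "MY_SERVICE_API_KEY") -> str:
    """Read a secret from the environment at runtime instead of from code."""
    key = os.environ.get(var_name)
    if key is None:
        # Fail loudly at startup rather than limping along with a bad value.
        raise RuntimeError(f"{var_name} is not set; refusing to start")
    return key
```

Nothing here is sophisticated - which is the point. It's knowing that the pattern exists that saves you, not the code itself.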
AI is an incredibly powerful tool. But like any powerful tool, it amplifies what you bring to it. Bring knowledge and clear thinking, and it amplifies your capabilities 10x. Bring confusion and gaps in understanding, and it amplifies those too.
Think about it this way: a chainsaw is an incredibly powerful tool, but owning one doesn't make you a lumberjack. If you don't understand how trees fall, which way to cut, how to avoid kickback - the chainsaw will punish your ignorance.
AI coding tools are the same. The more powerful the tool, the more it rewards expertise and punishes ignorance.
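To make one of those concepts concrete, here's "separation of concerns" at its smallest - a Python sketch with hypothetical function names, where parsing and storage are kept apart so each can be tested and changed on its own:

```python
def parse_posting(html: str) -> dict:
    """Pure parsing: no network, no database - testable in complete isolation."""
    title = html.split("<h1>")[1].split("</h1>")[0]
    return {"title": title}

def save_posting(posting: dict, store: list) -> None:
    """Storage kept separate: a plain list stands in here for a real database."""
    store.append(posting)

# Because the concerns are separated, swapping the list for a database later
# doesn't force you to retest the parsing logic.
store: list = []
save_posting(parse_posting("<h1>Data Analyst</h1>"), store)
assert store == [{"title": "Data Analyst"}]
```

If you can't articulate why the two functions shouldn't be one, AI will happily generate the tangled version for you - and it will work, right up until you need to change it.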
This means education is part of the job.
You need to spend time learning software development concepts. Not to become a developer - to become an effective user of development tools. Read about best practices. Understand why patterns exist. Learn the vocabulary so you can communicate clearly with AI.
The good news: AI itself is an incredible learning tool. Ask it to explain concepts. Ask why something is done a certain way. Ask about trade-offs. Use it to educate yourself as you build.
But don't skip this step. The fantasy of "I'll just describe what I want and AI handles the rest" is a fantasy. The reality is: you become a well-informed director of AI work, or you become someone who builds things that don't work.
The illusion of full automation.
I see this question constantly from people using agentic coding tools like Claude Code: "What do I do while it's working? This is boring."
My answer: you watch it. You pay attention. You understand what it's doing.
If it starts going in the wrong direction, you stop it. If it misunderstands something, you catch it early before it builds on a flawed foundation. If it's doing something you don't understand, you ask why.
This cannot be hands-off. You cannot let agents run amok in your codebase while you scroll your phone or surf the web. Maybe a seasoned developer with years of experience managing junior engineers can set up guardrails and let things run more autonomously. But someone just starting? That's a recipe for a mess you won't know how to fix.
There's a paradox here: the tools look automated, so people assume they can be hands-off. But the automation is an illusion. The AI is doing the typing, but you're still responsible for the thinking.
You need to stay engaged. Stay focused. Watch what's happening so you can course-correct in real time - and so you actually understand what's being built.
Because here's the thing: you have to maintain this code. You have to debug it when something breaks. You have to extend it when you need new features. If you weren't paying attention while it was built, you'll have no idea how it works - and you'll be stuck.
Software is details. Details matter. Details will kill you if you're not paying attention. This is where people who aren't genuinely interested in the work struggle most. If you find watching code being written boring, if you just want the end result without understanding the process - this might not be your path.
The Skills That Actually Transfer
Here's the interesting part: the skills that let you build effectively with AI aren't traditional coding skills. They're skills that experienced professionals often already have.
Breaking problems into components.
If you've run projects, you know how to decompose complex goals into workable pieces. That same skill applies to working with AI: give it bite-sized problems, not everything at once.
Clear communication.
The better you can describe what you want, the better AI output you get. Vague requirements produce vague code. Specific requirements produce specific solutions. This is just like managing teams.
Testing and verification.
Can you tell if something works? Can you define what "working" means for your use case? This is more important than being able to write the code yourself.
System thinking.
Understanding how pieces fit together, where dependencies are, what happens when one thing changes - this is essential for architecting software, and it's what many non-technical people who've run complex projects already know.
Iteration and refinement.
The first version is never right. Getting to good requires cycles of feedback and improvement. If you're comfortable with iterative processes, AI development will feel familiar.
The Skills You Need to Develop
But it's not all transferable. Some things you'll need to learn:
Reading code well enough to understand it.
You don't need to write code from scratch. But you need to read what AI produces and have a rough sense of whether it makes sense. This is learnable, and AI can help teach you.
Basic technical concepts.
Databases, APIs, hosting, security - you don't need deep expertise, but you need to know these things exist and roughly what they do. Otherwise you can't ask the right questions.
Debugging mentality.
When something doesn't work, you need patience and systematic thinking to figure out why. This is a mindset as much as a skill.
Tool familiarity.
Command lines, code editors, version control - the actual interfaces of development. Less complex than the conceptual stuff, but it still takes some time to pick up.
Knowing what you don't know.
The Dunning-Kruger effect is real. Early on, you'll think things are fine when they're not. Learning to recognize when you're out of your depth is critical.
How I Actually Work
Let me give you a sense of what building with AI looks like in practice, using ReApply as an example.
Starting with architecture.
Before writing any code, I spent time with Claude thinking through system architecture. What are the components? How do they interact? What technologies make sense? This was a conversation, not a specification - we explored options together.
Incremental building.
I never asked AI to "build the whole app." I built piece by piece. "Let's create the user authentication system." Test it. "Now let's add job posting parsing." Test it. Each step was contained enough to understand and verify.
Constant testing.
After any change, I test. Not just automated tests (though I use those too) - manual verification that things work as expected. AI is good at generating code. It's not reliable at generating correct code. Trust but verify.
Learning as I go.
When AI produces something I don't understand, I ask it to explain. "Why did you structure it this way? What would happen if we did it differently?" This serves two purposes: I learn, and I verify the AI actually knows what it's doing.
Course correction.
Sometimes AI goes in a wrong direction. Sometimes I realize I asked for the wrong thing. Building is non-linear. Expecting perfect output on the first try isn't realistic.
Documentation as I build.
AI helps write documentation, but I review and refine it. This serves future me, but it also forces me to understand what we built.
The "Company of One" Possibility
Here's what excites me about this moment: the economics of building have fundamentally shifted.
A decade ago, building a real software product required:
- A development team (or expensive contractors)
- Months of runway before launch
- Significant capital to get to market
Now, a single person with domain expertise can build production software. Not toy software - real products that compete with venture-backed companies.
This isn't hypothetical. ReApply competes with companies that have raised millions of dollars. I'm one person. The AI handles the coding I can't do. I handle everything else.
I think we're going to see an explosion of "companies of one" - individuals building real businesses, serving real customers, without the traditional infrastructure. Not because they're coding geniuses, but because they understood a problem deeply and had the tools to build a solution.
The paradigm of needing a development shop to build software products is ending. Not for everyone, not for everything - but for a meaningful subset of businesses, the old rules no longer apply.
What You Actually Need (The Honest Assessment)
So, can AI let you build software? Here's my honest assessment of what you need:
Must have:
- A clear problem you understand deeply
- Ability to break complex problems into pieces
- Patience for iterative development
- Willingness to learn technical concepts (not coding - concepts)
- Time to invest in the learning curve
- Tolerance for frustration when things don't work
Helps a lot:
- Prior experience managing technical projects
- Understanding of how software systems work (even at a high level)
- Background in structured problem-solving
- Persistence through ambiguity
- Some exposure to code (even just reading it)
Not required:
- Computer science degree
- Prior development experience
- Ability to write code from scratch
- Deep technical expertise
Myths:
- "Just describe what you want and AI builds it" - No. It's collaborative and iterative.
- "Anyone can do it" - Not anyone. People with certain skills and mindsets.
- "It's easy" - It's easier than learning to code. It's not easy.
- "AI code is always buggy" - Not always, but often enough that verification matters.
The Hype vs. Reality Summary
Overhyped claims:
- "AI will replace all developers" (False - human judgment still critical)
- "You can build anything just by describing it" (False - requires iteration and guidance)
- "Technical knowledge is obsolete" (False - you need concepts, just not deep coding skills)
- "No learning curve" (False - real but different learning required)
Underhyped realities:
- How much AI accelerates learning technical concepts
- The quality of AI explanations for understanding code
- How transferable existing professional skills are
- The real possibility of single-person software companies
- How quickly the tools are improving
The accurate picture:
AI coding tools have genuinely democratized software development - not to "anyone can do it" but to "a larger group of people with relevant skills can now do it." That larger group includes many people who were previously locked out.
If you have domain expertise, structured thinking, and tolerance for learning, you can probably build software you couldn't build before. Not easily, not immediately, but actually.
Getting Started (If You Want to Try)
If this has intrigued you enough to try, here's a practical starting point:
Week 1-2: Get familiar with the tools.
Sign up for Claude and have conversations about technical concepts. Explore Claude Code if you're comfortable with command line, or try Cursor/Windsurf for visual editing. Don't try to build anything yet - just learn the interfaces.
Week 3-4: Build something tiny.
Start with something small and self-contained. A simple tool. A basic automation. The goal isn't to build your big idea - it's to learn the workflow. Expect frustration. Push through it.
Month 2: Expand gradually.
Build something slightly more complex. Learn about databases, APIs, hosting through the process. Ask AI to explain everything you don't understand.
Month 3+: Consider your real project.
Only after you understand the workflow should you tackle something meaningful. Even then, start with the simplest possible version.
The mistake most people make is trying to build their big idea immediately. That's like trying to run a marathon before you can run a mile. Get comfortable with the tools first.
The Real Question
Part 1 of this series asked: is this your moment to build something?
Part 2 has tried to answer: is building something technically feasible for you?
For many people reading this, the answer is yes - but with caveats. Yes, AI tools have made building possible for non-developers. No, it's not magic. Yes, you can probably do it. No, it won't be easy.
The question that remains: even if you can build something, should you? What would you build? How would you know if it's worth building?
That's Part 3: the practical playbook for going from idea to actual traction.
Exploring Your Options While Figuring Things Out?
Whether you're building something or finding something, get clarity on where you stand.
FitCheck: 10 free checks/month • ReApply: Free to start
About the Author
John Coleman is the founder of ReApply and FitCheck - products built by a non-developer using AI tools. He's been building companies for 25 years across six startups and believes that the democratization of building is one of the most significant shifts in entrepreneurship in decades.