The Botty Shop Cometh: How CTOs Get Burned by Fake Dev Teams
It's 7:30 AM, and I'm already on a call with a CTO whose "world-class development team" just delivered a critical feature that's riddled with errors.
Everyone is confused. The feature was ostensibly written by experienced developers, the unit test reports are clean, the documentation is solid, but something feels off. After some digging, we discover the offshore consultancy didn't just use AI tools; for all practical purposes, they were the AI tools, with a thin veneer of human oversight.
Welcome to the era of "botty shops"—my term for the new breed of development consultancies that are fundamentally AI-driven but don't always disclose it. It's the modern evolution of the old "body shops" from the 90s, except now you're not just getting a warm body—you're getting an LLM pretending to be one.
Builder.ai: When a $1.5B Valuation Meets Reality
Let's talk about the elephant in the room. AI-generated code is getting there, but it's a very long way from being foolproof.
Using AI as primary development talent has already failed in spectacular fashion. Builder.ai, once a darling unicorn valued at nearly $1.5 billion, filed for bankruptcy in May 2025. While the media focused on the financial fraud (roughly $50M in actual revenue inflated to $220M on paper), the real story is more nuanced. Contrary to viral social media claims, Builder.ai didn't fake their AI entirely. They had a legitimate AI team of 15-30 engineers building real code generation tools. The problem wasn't fake AI; it was fake revenue and a fundamental misunderstanding of what AI could actually deliver at scale.
The takeaway is that even legitimate, well-backed AI development companies are struggling with the economics and execution. If a well-funded unicorn can't make it work, what does that say about the hundreds of smaller "botty shops" emerging in the market?
(Sidebar: this is also a cautionary tale for solopreneurs and embryonic startups planning to depend primarily on AI for development “talent”).
The Bigger Picture
That’s one extreme example, and an unsettling one. But how does it lead to concerns about a broad rise of shoddy, unscrupulous botty shops?
The answer lies in the market potential for AI-assisted development, plus greed. The global AI-driven development consultancy market is growing like crazy: the AI consulting services market is already above $11 billion and projected to reach $73 billion by 2033 (a 26.49% CAGR). More than $10 billion has already been invested in "virtual labor" technologies within the outsourced development industry. More than 35 major service providers now offer AI-driven development options, and AI-driven development services typically promise around 60% cost savings over traditional offshore development shops.
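If you want to sanity-check that headline projection, the compounding math roughly works out. Here's a quick sketch; the assumption that the roughly $11 billion baseline is the 2025 figure is mine, not the cited report's:

```python
# Quick sanity check on the projection: ~$11B growing at a 26.49% CAGR through 2033.
# Assumption (mine, not the cited report's): the ~$11B baseline refers to 2025,
# which gives 8 years of compounding.
baseline_billions = 11.0
cagr = 0.2649
years = 2033 - 2025

projected = baseline_billions * (1 + cagr) ** years
print(f"Projected 2033 market size: ${projected:.0f}B")  # ~$72B, in line with the cited $73B
```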
Those are market dynamics that are eye-popping even for venture capitalists. That’s on top of the fact that AI in general is just smoking-hot: AI startups captured $100.4 billion in global VC funding in 2024, representing 37% of all venture capital. That means there will be a lot of companies tapping into the AI-assisted development market legitimately—perhaps not doing it well, but at least under the legitimizing and watchful eye of a VC board.
Whenever an emerging market blows up like that, “knock-off” companies start working the angles to opportunistically grab a piece of the opportunity. The spectrum along which these knock-offs value ethics usually ranges from “questionable” to “openly fraudulent”. In every hot new market, many of these copy-cats are unscrupulous, just trying to extract money quickly, usually preying on the naive and the inattentive.
Speaking of the inattentive, here's something that should keep you up at night: less than 30% of even legitimate outsourced development services clearly disclose their AI usage in client contracts. If seven out of ten legit shops aren’t transparent, you can bet that botty shops will have granite-like opacity.
Besides the obvious issues around quality, there’s a bigger and potentially more deadly issue: you might be on the hook for violating evolving AI regulations—whether your use of a botty shop is deliberate or accidental.
The Regulatory Storm That's Brewing
The AI regulatory landscape is shifting quickly, and it's going to catch a lot of companies off guard. In the United States, more than forty states have introduced AI bills that include disclosure requirements.
The Utah AI Policy Act (May 2024) requires disclosure of generative AI use in advertising and other consumer communications. Next door, the Colorado AI Act (effective Feb 2026) mandates "reasonable care" to protect consumers from algorithmic discrimination. In California, CPPA regulators are pushing to allow consumers to opt out of AI-related decision processes entirely. But those are just consumer mandates; what about the even bigger picture?
The EU, true to form, is taking an even broader and more pronounced stance on AI. The EU AI Act (August 2024) demands strict transparency requirements, especially on products covered by EU laws in Annex I, and the list is extensive. Take, for example, a toy manufacturer that has a companion app written for a new toy: it faces strict requirements around AI disclosure (among other things), even if it uses offshore developers for that companion app. This stuff doesn’t just apply to EU-based companies, by the way; like GDPR, it applies to anyone selling to citizens of the EU.
This is all just starting. If you want to see where it goes from here, look at Internet privacy. The progression from the 1995 EU Data Protection Directive to the myriad data privacy regulations today indicates how broad and deep AI regulations will go.
The key takeaway: the longer you wait to get a handle on an AI compliance program, the longer it will take to clean up later, especially if you're using AI development services. Staying on top of AI use by your outside developers will directly impact how easy or hard future compliance will be.
Tread carefully—hic sunt dracones.
Red Flags: How to Spot a Botty Shop
When you're hiring an offshore development team, you're not just buying code; you're potentially buying into a regulatory compliance nightmare if they're not transparent about their use of AI. Here are some top warning signs of an AI-heavy botty shop trying to pass itself off as 100% all-natural humans:
1. Pricing That Doesn't Add Up
Rates 60-80% below market average
Fixed-price estimates for complex, ambiguous requirements
No questions about edge cases or technical specifications
2. Unrealistic Speed Claims
"We'll have your MVP ready in 2 weeks"
No mention of testing, QA, or iterative development
Delivery timelines that ignore fundamental software development realities
3. Generic Team Profiles
LinkedIn profiles that all look suspiciously similar
No specific technical depth in team member backgrounds
Reluctance to schedule technical interviews with actual developers
4. Communication Patterns
Responses that feel slightly "off" or overly polished
Inability to go deep on technical architecture questions
Consistent quality in written communication across all team members
5. Contract Language Red Flags
No mention of AI usage in service descriptions
Vague deliverable definitions
Unusual IP ownership clauses
6. Emoticons Everywhere
For some reason, LLMs love sprinkling emoticons and emoji everywhere
Code, documentation, and proposals arrive emoticon-laden (see the sketch below)
“I don’t know why those emoticons are in your unit tests…”
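To make that last red flag concrete, here's a minimal, entirely hypothetical sketch of the kind of emoji-sprinkled unit test that tends to fall out of an unsupervised LLM. The PaymentProcessor and ChargeResult names are invented for illustration, not taken from any real project:

```python
import unittest
from dataclasses import dataclass


# Trivial stand-in for the "real" code under test, so this sketch runs on its own.
@dataclass
class ChargeResult:
    success: bool


class PaymentProcessor:
    def charge(self, amount_cents: int, currency: str) -> ChargeResult:
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        return ChargeResult(success=True)


class TestPaymentProcessor(unittest.TestCase):
    """✨ Comprehensive test suite for PaymentProcessor! 🚀"""

    def test_charge_succeeds(self):
        # ✅ Happy path: a valid charge should succeed! 🎉
        result = PaymentProcessor().charge(amount_cents=1099, currency="USD")
        self.assertTrue(result.success)  # 💪 Always green!

    def test_charge_rejects_negative_amount(self):
        # 🚫 Negative amounts are not allowed! ❌
        with self.assertRaises(ValueError):
            PaymentProcessor().charge(amount_cents=-1, currency="USD")


if __name__ == "__main__":
    unittest.main()  # 🏁 Ship it! 🎯
```

Plenty of human developers like emoji too, so treat this as a smell rather than proof; but when every file, commit message, and status report reads like this, it's time to ask the direct questions in the checklist below.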
Watch out for the red flags. An impressive-looking development shop with a fee structure that seems too good to be true probably is exactly that. What is true is that the cheapest option is rarely the cheapest option (I’ll spare you the “open source ain’t free” rant). A botty shop might save you 60% upfront, but consider the hidden costs: technical debt, security vulnerabilities, regulatory compliance, intellectual property complications, and the list goes on.
And believe me, watching your newly hired permanent developers try to make sense of poorly-documented AI spaghetti code is not a joy. Get hustled by a botty shop and you can forget about your aggressive launch dates—you’ll be lucky if your team doesn’t have to start over from scratch.
A Due Diligence Checklist
Phase 1: Initial Screening
Direct questioning: "Do you use AI tools in your development process?"
Process documentation: Request detailed workflows and methodologies
Team verification: Insist on video calls with actual developers
Reference checks: Get specific technical references, not just testimonials
Phase 2: Technical Deep Dive
Code review: Examine samples of actual work, not demos
Architecture discussions: Can they explain technical decisions?
Problem-solving: Present real technical challenges during interviews
Tool transparency: What development tools and platforms do they use?
Phase 3: Contractual Protections
AI disclosure clauses: Require explicit disclosure of AI usage
Performance standards: Define quality metrics beyond just delivery
Audit rights: Reserve the right to review development processes
Liability allocation: Ensure appropriate responsibility for AI-generated issues
A Bit More On Contract Clauses
When you go to your legal team, it helps a lot to have a straw man or some kind of starter language to work with. In my experience, the only thing lawyers love more than billing hours is rewriting someone else’s language. Based on that experience and current regulatory trends, your legal team might consider language like the following in every development services contract.
AI Disclosure Requirements: “Contractor must disclose any use of artificial intelligence, machine learning, or automated code generation tools in the performance of services, including but not limited to: the specific AI tools used, the extent of AI involvement in deliverables, and any training data that may impact intellectual property rights.”
Quality Assurance Standards: “All AI-generated code must be reviewed by qualified human developers and tested according to industry standards. Contractor warrants that deliverables will not exhibit bias, discrimination, or other AI-related issues.”
Compliance Responsibility: “Contractor shall ensure compliance with all applicable AI regulations and disclosure requirements, including but not limited to the EU AI Act, US state AI legislation, and industry-specific requirements included in Appendix A.”
Important: I am not a lawyer and this is not legal advice—it’s a suggestion that you put this in front of your legal team. They need to make the final decisions and craft the right language for your situation.
Engage, But Engage Smart
Don't avoid AI-powered development services, but go in with your eyes open. Ask the hard questions, demand transparency, and build appropriate contractual protections.
Transparency is non-negotiable: Demand clear disclosure of AI usage in development services
Price isn't everything: Factor in hidden costs of AI-generated code and regulatory compliance
Due diligence is critical: Technical interviews, reference checks, and contract reviews are more important than ever
Regulatory compliance matters: AI disclosure requirements are coming whether you're ready or not
Human oversight is essential: AI tools are powerful but still require skilled human developers for quality and security
The companies that figure out how to leverage AI responsibly—with proper disclosure, quality controls, and human oversight—will have a significant competitive advantage. Those that try to over-leverage AI without transparency (which I consider fraud) will eventually face the same fate as Builder.ai.
And as the CTO, your company is depending on you to get it right. Be ready.
Bottom Line: Adapt or Get Burned
The AI revolution isn't going away—it's accelerating. I predict that by the end of 2026:
Regulation will catch up: Disclosure requirements—and much more—will become standard. Be ready.
Hybrid models: The most successful firms will blend AI and human expertise transparently.
New pricing models: We'll see more outcome-based pricing vs. traditional hourly rates (by the way, if someone is pricing per line of code—run).
AI-assisted development shops will mature. The model will become well understood, best practices will emerge, and AI-assisted development services will become the norm. Until then, beware the trend already unfolding: unscrupulous hustlers exploiting demand for cheap development services.
And thus unto the world shall rise a great horde—of botty shops.
Need help evaluating AI-powered development services? Want to discuss specific red flags you've encountered? I help clients navigate evaluation processes. Avoiding the pitfalls now is better than waiting for rescue later. Get in touch.