Software Isn’t a Competition: It’s a Map of Our Frustrations

Before I get into this, I want to be clear: I am not trying to market anything here. I’m writing this as a developer talking to other developers. I’m not wearing a "founder" hat; I’m wearing the hat of someone who spent way too many late nights staring at a terminal, frustrated that I couldn't get two pieces of software to talk to each other the way I wanted. I’m talking as me, because I think the way we talk about software "competition" is fundamentally broken.

If you spend any time in developer circles—Hacker News, Reddit, Twitter—you see this constant, exhausting need to rank everything. We want to know what the "best" framework is, what the "Selenium killer" of the week is, or why anyone would use Tool X when Tool Y exists. We treat software development like a high-stakes sports league where only one project can hold the trophy at the end of the season.

But after building Doppelganger, I’ve realized that "better" is a pretty useless word in engineering.

Software doesn’t actually evolve in a straight line from "bad" to "good." It doesn't move toward some objective perfection. It evolves horizontally. It branches out to fill specific, painful gaps that the previous generation of tools—no matter how powerful or polished—was never designed to bridge. No tool truly replaces another; it just offers a different set of trade-offs for a different set of problems.

The Architecture of the "Gap"

This isn't just about browser automation; it’s the fundamental law of how we build software.

Think about the history of the cloud. We didn't move from physical on-premise servers to Virtual Machines because VMs were "better" at processing bits; we moved because VMs filled the Scaling Gap. They allowed us to slice up hardware. Then we moved to Docker, not to replace VMs, but to fill the Portability Gap. Docker solved the "it works on my machine" problem.

In the frontend world, we didn't move from jQuery to React because jQuery was "broken." We moved because as web apps became massive, React filled the State Management Gap. jQuery was never meant to handle a complex, data-driven dashboard, just like a hammer isn't meant to turn a screw.

Every tool is just a crystallized set of opinions. It’s a map of what the creator cared about most, and—more importantly—what they were willing to sacrifice to get there.

A History of Specialization in Browser Tech

To see this in action, you just have to look at the "Browser Wars" of the last decade. Everyone argues about which library is superior, but if you look closer, they are all just solving different flavors of frustration:

  • The Compatibility Gap (Selenium): Selenium was born when the web was a fragmented, inconsistent mess. It wasn't built to be fast; it was built to be a universal translator so a site worked on IE6, Safari, and Firefox alike. It still handles legacy enterprise grids better than almost anything else because that was its primary "gap."
  • The Reliability Gap (Playwright/Puppeteer): As the web moved toward heavy Single Page Applications (SPAs), tests built on the old WebDriver protocol started to feel flaky. These tools didn't "kill" Selenium; they prioritized speed and direct engine communication via the Chrome DevTools Protocol (CDP) because developers were frustrated with timing issues and hand-rolled "wait-for-element" loops.
  • The Infrastructure Gap (Browserless): Writing the script is only half the battle. Anyone who has tried to run headless Chrome in a production Docker container knows the nightmare of memory leaks, zombie processes, and resource spikes. Browserless didn't try to rewrite the Playwright API; they just filled the gap between "it works in my IDE" and "it works at scale."
  • The Distribution Gap (Apify): They looked at the world and saw that while writing a scraper was hard, scaling it and making it usable for someone without a terminal was harder. They filled the gap of accessibility and monetization.

The AI Trade-offs: Accuracy vs. Efficiency

We’re seeing this repeat right now with the explosion of AI browser agents. It’s the same cycle, just faster.

For a while, Skyvern was the main player. They used computer vision to solve the Resilience Gap. By letting the AI "see" the screen instead of reading DOM selectors, they made automation that didn't break when a developer changed a CSS class. It was revolutionary, but vision is computationally heavy, expensive to run, and often slow.

Then Browser Use arrived. Is it "better" than Skyvern? Maybe not in terms of raw visual reasoning or high-end complexity. But it filled an Efficiency Gap. It realized that for 80% of tasks, you could trade a bit of that high-end vision for something cheaper, faster, and easier to implement.

It’s a fork in the road, not a replacement. You don't buy a Ferrari to haul lumber, and you don't buy a truck to win a drag race. Neither tool is "wrong"—they just cater to different priorities.

Why I Built Doppelganger: The "Integration Gap"

This brings me to the actual reason I started this project. I hit a very specific wall when I was trying to connect n8n to any website. I looked at the landscape and realized there was a massive hole in the middle of it.

If I wanted an agent to interact with a site, my options felt fundamentally broken for my use case. I basically had two choices:

  1. Hand over full control to a heavy AI browser agent that had to "reason" its way through the page every single time it loaded. It was like hiring a person to navigate a website for me—slow, expensive, and unpredictable.
  2. Manually deploy and manage entirely separate agents or microservices for every single task.

There was no middle ground. There was no way to just call an endpoint as if the browser were an unlimited, local API. I didn't want a "robot person" living inside my browser; I wanted a server I could talk to that happened to have a browser attached to it.

I wanted to fill the Integration Gap.

The core philosophy of Doppelganger is that agents shouldn't be navigating browsers; they should be calling functions. I wanted to predefine a task—like "fetch the last three invoices" or "check the status of this shipment"—and turn that into a self-hosted API endpoint.

Now, instead of an n8n workflow or a LangChain agent trying to "be a human" and fumbling through a UI, it just sends a JSON request to a URL. Doppelganger handles the browser mess in the background and sends back the data. It’s deterministic, it’s fast, and it’s self-hosted. It’s not trying to replace the AI’s brain; it’s trying to give the AI a better pair of hands.
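To make that concrete, here is a minimal sketch of what the interaction looks like from the workflow's side. The endpoint path, payload fields, and response shape here are hypothetical illustrations of the pattern, not Doppelganger's actual API:

```python
import json
from urllib import request

# Hypothetical endpoint for a predefined "fetch invoices" task.
# The URL and field names below are illustrative, not a real API.
DOPPELGANGER_URL = "http://localhost:3000/tasks/fetch-invoices"

def build_task_request(account_id: str, limit: int = 3) -> dict:
    """Build the JSON payload the workflow sends instead of driving a UI."""
    return {"account": account_id, "limit": limit}

def run_task(payload: dict) -> dict:
    """POST the payload to the self-hosted endpoint and return parsed JSON."""
    req = request.Request(
        DOPPELGANGER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# The agent never touches a browser: it builds structured input,
# calls a URL, and gets structured data back.
payload = build_task_request("acme-corp", limit=3)
```

The point of the design is that everything nondeterministic (selectors, waits, navigation) lives behind the endpoint, so the workflow side stays as dumb and predictable as any other HTTP integration.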

Final Thoughts

I think we’d all be better off if we stopped looking at software as a competition. Every tool in your stack exists because someone, somewhere, got frustrated by a very specific limitation and decided to build a bridge over it.

I’m not trying to win a "Browser War" or market the next "industry-standard" tool. I’m just trying to build the bridge I needed to make my own workflows work. If you’ve been frustrated trying to force a browser into a structured workflow without it costing a fortune or breaking every five minutes, maybe this bridge is for you, too. No marketing, no hype—just a different tool for a different gap.