Task Modes: Scrape vs Agent vs Headful
Learn the differences between Scrape, Agent, and Headful task modes in Doppelganger, including capabilities, use cases, and best practices.
Mnemosyne Doppelganger provides three task modes to cover a wide range of automation scenarios. Each mode is designed for different levels of interaction and complexity.
1. Scrape Tasks
Scrape tasks are designed for simple, fast data extraction from a single URL or a list of URLs.
Key Features
- Minimal inputs: Only requires a `URL` and an optional `wait` time.
- Fast execution: Runs quickly without additional blocks.
- Optional extraction scripts: JavaScript can be added to process DOM or Shadow DOM content (see the sketch after this list)
- Local data storage: Data is stored on your machine for privacy.
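A Scrape extraction script is ordinary page-side JavaScript. The sketch below is purely illustrative: it assumes the script runs in the page context and that its return value becomes the task's locally stored result, and the `.product-card` selectors are made up for the example.

```js
// Hypothetical Scrape extraction script (selectors are illustrative).
// Assumes it runs in the page and its return value is stored as the task result.
(() => {
  const cards = document.querySelectorAll(".product-card");
  return Array.from(cards).map((card) => ({
    name: card.querySelector(".product-name")?.textContent.trim(),
    price: card.querySelector(".product-price")?.textContent.trim(),
  }));
})();
```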
Ideal Use Cases
- Pulling product listings or prices
- Monitoring web pages for changes
- Extracting static content from sites
2. Agent Tasks
Agent tasks enable multi-step, programmatic automation using blocks and simulated human behaviors.
Key Features
- Blocks-based workflow: Navigate, Click, Type, Press Keys, Extract, Run JS (see the sketch after this list)
- Human-like behaviors: Typing delays, scroll patterns, optional typos
- Dynamic input: Supports variables for parameterization
- Optional extraction scripts: Clean or structure complex data into JSON
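The exact block format is specific to Doppelganger, but conceptually an Agent task is an ordered sequence of blocks with variables substituted at run time. The sketch below is an assumption-laden illustration: the object shape, lowercase block names, and `{{query}}` placeholder syntax are not the tool's actual schema.

```js
// Hypothetical Agent task outline: an ordered list of blocks plus variables.
// The shape and {{query}} placeholder syntax are illustrative only.
const searchTask = {
  variables: { query: "wireless headphones" },
  blocks: [
    { type: "navigate",  url: "https://example.com/search" },
    { type: "click",     selector: "#search-input" },
    { type: "type",      selector: "#search-input", text: "{{query}}" }, // human-like typing delays would apply here
    { type: "pressKeys", keys: ["Enter"] },
    { type: "extract",   selector: ".result-title" },
  ],
};
```

Swapping the `query` variable lets the same task run against different inputs, which is what the parameterization bullet above refers to.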
Ideal Use Cases
- Multi-step form filling and workflows
- Automated browsing with authentication
- Complex automation requiring interaction with multiple page elements
3. Headful Tasks
Headful tasks are manual, interactive browser sessions where a human operates the browser directly. They are not agentic and do not run automated blocks.
Key Features
- Visible browser: Tasks run in a fully visible browser window
- Human interaction only: No automation blocks executed
- Supports inspection and debugging: Run extraction scripts manually if needed (see the console sketch after this list)
- Optional extraction scripts: Can be triggered by user input during the session
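Because a Headful session gives you a live, visible browser, an extraction snippet can also be run by hand while you browse, for example from the browser's DevTools console. The snippet below is generic DOM JavaScript, not a Doppelganger API.

```js
// Generic snippet to run manually (e.g. in the DevTools console) during a
// Headful session to capture the links currently visible on the page.
const links = Array.from(document.querySelectorAll("a[href]")).map((a) => ({
  text: a.textContent.trim(),
  href: a.href,
}));
console.table(links);
```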
Ideal Use Cases
- Manually exploring websites while capturing data
- Debugging and testing automation scripts
- Interacting with sites that require direct human input
Comparison Table
| Mode | Inputs | Blocks | Human-like Behavior | Extraction Scripts | Visibility | Ideal For |
|---|---|---|---|---|---|---|
| Scrape | URL + wait | No | No | Optional | Headless | Fast, single-page data extraction |
| Agent | URL + wait + blocks | Yes | Yes | Optional | Headless | Multi-step automation, dynamic workflows |
| Headful | Manual | No | Human only | Optional | Visible | Debugging, human-operated interaction, exploration |
Best Practices
- Start with Scrape for simple extraction tasks
- Use Agent when multiple steps or simulated human behavior are needed
- Switch to Headful for manual exploration, debugging, or direct interaction
- Leverage extraction scripts to process dynamic or Shadow DOM content (see the sketch after this list)
- Parameterize with variables to make Agent tasks reusable across inputs
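On the Shadow DOM point: content inside an open shadow root is not reachable with a plain `document.querySelector` from the top document, so an extraction script has to go through the host element's `shadowRoot`. A minimal sketch, where the `my-widget` host and `.value` selector are hypothetical:

```js
// Illustrative read from inside an open Shadow DOM.
// Closed shadow roots (host.shadowRoot === null) cannot be read this way.
const host = document.querySelector("my-widget"); // hypothetical custom element hosting a shadow root
const value = host?.shadowRoot?.querySelector(".value")?.textContent.trim();
```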
Understanding these task modes ensures you choose the most efficient and reliable workflow for your automation needs with Mnemosyne Doppelganger.