
Overview

When you need to scrape multiple pages, fill multiple forms, or process a batch of URLs, firing requests concurrently can significantly speed up your workflow. This guide shows how to run multiple Mino runs in parallel using the sync API.

Basic Example

Fire multiple requests concurrently and gather results:
import asyncio
import aiohttp

async def run_mino(session, url, goal):
    """Run a single mino run and return the result"""
    async with session.post(
        "https://mino.ai/v1/automation/run",
        headers={
            "X-API-Key": "YOUR_API_KEY",
            "Content-Type": "application/json",
        },
        json={
            "url": url,
            "goal": goal,
        },
    ) as response:
        result = await response.json()
        return result.get("result")

async def main():
    # Define your batch of tasks - scraping multiple sites
    tasks_to_run = [
        {
            "url": "https://scrapeme.live/shop/",
            "goal": "Extract all available products on page two with their name, price, and review rating (if available)"
        },
        {
            "url": "https://books.toscrape.com/",
            "goal": "Extract all available books on page two with their title, price, and review rating (if available)"
        },
    ]

    # Create session and fire all requests concurrently
    async with aiohttp.ClientSession() as session:
        tasks = [
            run_mino(session, task["url"], task["goal"])
            for task in tasks_to_run
        ]

        # Wait for all tasks to complete
        results = await asyncio.gather(*tasks)

        # Process results
        for i, result in enumerate(results):
            print(f"Task {i + 1} result:", result)

# Run the async main function
asyncio.run(main())

The sync /run API is well suited to concurrent requests: you get clean, simple code without SSE stream handling, which makes it ideal for batch operations with asyncio.gather() or Promise.all().

Batch Multiple Forms

Fill multiple contact forms concurrently:
# Reuses asyncio, aiohttp, and the run_mino() helper from the basic example
async def main():
    companies = [
        {"name": "Acme Corp", "url": "https://acme.com/contact"},
        {"name": "TechStart", "url": "https://techstart.io/contact"},
        {"name": "BuildIt", "url": "https://buildit.com/contact"},
    ]

    async with aiohttp.ClientSession() as session:
        tasks = [
            run_mino(
                session,
                company["url"],
                f"""Fill in the contact form:
                    - Name field: "John Doe"
                    - Email field: "john@example.com"
                    - Message field: "Interested in partnership with {company['name']}"
                    Then click Submit and extract the success message.
                """
            )
            for company in companies
        ]

        results = await asyncio.gather(*tasks)

        for company, result in zip(companies, results):
            print(f"{company['name']}: {result}")

asyncio.run(main())

Gotchas and Caveats

Concurrency Limits: Each user account has a concurrency limit for simultaneous browser sessions. When you exceed this limit, additional requests will be queued automatically rather than returning a 429 error.

Queueing Behavior

When you hit your account’s concurrency cap:
  • No 429 errors: Unlike traditional rate-limited APIs, Mino won’t reject your request with a 429 status code
  • Automatic queueing: Your request will be accepted and queued until a browser session becomes available
  • Longer run times: The total run time will include both queue wait time and execution time
Example scenario: If your account allows 3 concurrent sessions and you fire 10 requests simultaneously:
  • Requests 1-3 start immediately
  • Requests 4-10 are queued
  • As each request completes, the next queued request begins
  • You won’t get errors, but later requests will take longer to complete
We’re actively working on improving the queueing experience, with better visibility into queue position and estimated wait times planned for an upcoming release.
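
Even though overflow requests are queued server-side, you can also cap the number of in-flight requests client-side so each batch roughly matches your plan's limit. A minimal sketch using asyncio.Semaphore — the limit of 3 and the fake_run() stand-in (used here instead of run_mino() so the sketch runs without network access) are illustrative assumptions, not values from your account:

```python
import asyncio

async def gather_with_limit(limit, coros):
    """Run coroutines concurrently, but never more than `limit` at once."""
    semaphore = asyncio.Semaphore(limit)

    async def bounded(coro):
        # Each task waits here until one of the `limit` slots frees up
        async with semaphore:
            return await coro

    # gather() preserves input order, so results line up with coros
    return await asyncio.gather(*(bounded(c) for c in coros))

async def main():
    # Stand-in for run_mino(session, url, goal); no network needed here
    async def fake_run(i):
        await asyncio.sleep(0.01)
        return f"result {i}"

    # 10 tasks, but at most 3 in flight at any moment
    results = await gather_with_limit(3, [fake_run(i) for i in range(10)])
    print(results)

asyncio.run(main())
```

With this pattern, requests beyond the cap wait locally instead of sitting in the server-side queue, which keeps their total run times more predictable.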

Best Practices

  • Know your limits: Check your plan’s concurrency limit in your dashboard
  • Batch sizing: Size your concurrent batches to match your concurrency limit for optimal performance
  • Progress tracking: Implement timing/logging to monitor which requests are queued vs executing
  • Error handling: Always handle potential timeouts for long-running or queued requests
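
To make the error-handling advice concrete, one possible pattern is to bound each run with asyncio.wait_for and collect failures via return_exceptions=True, so a single timed-out or failed request doesn't abort the whole batch. A sketch with stand-in coroutines — the fake_run() helper and the timeout values are illustrative assumptions; in practice the timeout should cover your longest expected run plus queue wait:

```python
import asyncio

async def run_safely(coro, timeout):
    """Bound a single run's total time; return an error dict instead of raising."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        return {"error": "timed out"}

async def main():
    # Stand-in for run_mino(session, url, goal)
    async def fake_run(i, delay):
        await asyncio.sleep(delay)
        return {"task": i, "result": "ok"}

    # Task 2's delay exceeds the timeout, so it resolves to an error dict
    # while the other tasks complete normally
    tasks = [
        run_safely(fake_run(0, 0.01), timeout=0.1),
        run_safely(fake_run(1, 0.01), timeout=0.1),
        run_safely(fake_run(2, 5.0), timeout=0.1),
    ]
    # return_exceptions=True keeps any unexpected exception as a result item
    # rather than cancelling the remaining tasks
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for r in results:
        print(r)

asyncio.run(main())
```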