Scrapeless MCP
Scrapeless MCP Server enables real-time web interaction and data extraction for AI models. It integrates browser automation, Google services, and dynamic scraping without getting blocked.
How to Install and Use Scrapeless MCP
Scrapeless MCP is a powerful tool that helps AI models like ChatGPT and Claude connect to the web and interact with websites in real-time. It allows you to search Google, scrape web pages, take screenshots, and even automate browser actions. Here’s a simple guide to installing and using Scrapeless MCP with practical examples.
Step 1: Get Your Scrapeless API Key
Before you start, you need an API key from Scrapeless to use their MCP server. Here’s how:
- Log in to the Scrapeless Dashboard at app.scrapeless.com. A free trial is available if you don’t have an account yet.
- Go to "Setting" on the left menu.
- Select "API Key Management".
- Click "Create API Key".
- Copy your API key by clicking on the one you created.
This key will let you connect Scrapeless MCP with your applications.
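If you plan to pass the key to local tools via the environment (as the Stdio config in Step 2 does with SCRAPELESS_KEY), a quick sanity check like the following can confirm it is set. This is just a sketch; the placeholder fallback is for illustration only.

```python
import os

# Read the key from the environment; fall back to a placeholder for illustration.
# The variable name SCRAPELESS_KEY matches the Stdio config in Step 2.
api_key = os.environ.get("SCRAPELESS_KEY", "YOUR_SCRAPELESS_KEY")

# Basic sanity check: the key must not be empty.
assert api_key, "SCRAPELESS_KEY must not be empty"
print("Using key:", api_key[:4] + "…")
```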
Step 2: Set Up Scrapeless MCP Client
Scrapeless MCP can run locally on your computer or as a hosted API. Choose the one that fits your needs.
Local Installation (using Stdio mode):
This method runs Scrapeless MCP right on your machine using Node.js.
Create a configuration file or add this to your MCP client setup:
{
  "mcpServers": {
    "Scrapeless MCP Server": {
      "command": "npx",
      "args": ["-y", "scrapeless-mcp-server"],
      "env": {
        "SCRAPELESS_KEY": "YOUR_SCRAPELESS_KEY"
      }
    }
  }
}
Replace "YOUR_SCRAPELESS_KEY" with the API key you copied before.
Hosted API (Streamable HTTP mode):
If you prefer using the Scrapeless cloud, set up like this:
{
  "mcpServers": {
    "Scrapeless MCP Server": {
      "type": "streamable-http",
      "url": "https://api.scrapeless.com/mcp",
      "headers": {
        "x-api-token": "YOUR_SCRAPELESS_KEY"
      },
      "disabled": false,
      "alwaysAllow": []
    }
  }
}
Again, replace "YOUR_SCRAPELESS_KEY" with your actual API key.
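Under the hood, an MCP client speaks JSON-RPC 2.0 over the streamable HTTP endpoint, sending your key in the x-api-token header from the config above. The sketch below shows roughly what the opening request looks like; your MCP client builds and sends this for you, and the protocol version and client info values are illustrative assumptions, not Scrapeless-specific requirements.

```python
import json

# Endpoint and auth header from the hosted config above.
url = "https://api.scrapeless.com/mcp"
headers = {
    "x-api-token": "YOUR_SCRAPELESS_KEY",  # replace with your real key
    "Content-Type": "application/json",
}

# Minimal JSON-RPC 2.0 envelope an MCP client sends to start a session
# (illustrative values; the client library normally handles this handshake).
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumption: a recent MCP protocol revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

body = json.dumps(payload)
print(body[:60], "...")
```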
Step 3: Try Basic Usage Examples
Once your client is configured, you can ask your AI or tool to use Scrapeless MCP to interact with webpages.
Here are some practical examples:
- Search Google for “scrapeless”: Your AI can send this command to Scrapeless MCP to get search results.
- Extract content from a Cloudflare-protected page: Scrapeless MCP can automatically bypass Cloudflare and return the page content in Markdown.
- Scrape JavaScript-rendered pages and save to a file: The server captures dynamically loaded content and exports it in Markdown, which you can then save as text.md.
- Automate Google Search Results scraping: Get the top 10 results for “web scraping” with titles, links, and summaries saved to a file named serp.text.
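Behind a prompt like the first example, the client turns your request into an MCP tools/call message. The sketch below shows the general shape of such a request; the tool name "google_search" and the "query" argument are hypothetical placeholders, not confirmed Scrapeless tool names (ask your client to list the server's tools to see the real ones).

```python
import json

# JSON-RPC 2.0 tools/call request an MCP client might send.
# "google_search" and "query" are hypothetical names used for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "google_search",
        "arguments": {"query": "scrapeless"},
    },
}
print(json.dumps(request, indent=2))
```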
Step 4: Customize Browser Sessions (Optional)
If you want more control over how the browser sessions behave, use these environment variables (for local Stdio mode) or HTTP headers (for hosted mode):
| Env Var | HTTP Header | Description |
|---|---|---|
| BROWSER_PROFILE_ID | x-browser-profile-id | Reuse a browser profile for session continuity. |
| BROWSER_PROFILE_PERSIST | x-browser-profile-persist | Enable persistent cookie and storage saving. |
| BROWSER_SESSION_TTL | x-browser-session-ttl | Set the maximum session timeout in seconds. |
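The table above maps one-to-one between the two modes: HTTP headers per request in hosted mode, environment variables in the local Stdio config. A minimal sketch of both (the profile ID and TTL values are placeholders):

```python
# Hosted (streamable HTTP) mode: session-control headers, per the table above.
# Values are placeholders.
session_headers = {
    "x-api-token": "YOUR_SCRAPELESS_KEY",
    "x-browser-profile-id": "my-profile-123",  # reuse a browser profile
    "x-browser-profile-persist": "true",       # persist cookies and storage
    "x-browser-session-ttl": "300",            # max session timeout, seconds
}

# Local Stdio mode: the same settings go in the "env" block of the Step 2 config.
session_env = {
    "SCRAPELESS_KEY": "YOUR_SCRAPELESS_KEY",
    "BROWSER_PROFILE_ID": "my-profile-123",
    "BROWSER_PROFILE_PERSIST": "true",
    "BROWSER_SESSION_TTL": "300",
}
print(sorted(session_headers))
```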
Step 5: Integration with Other Tools
You can connect Scrapeless MCP to other AI apps easily:
- Claude Desktop: Go to Settings → Tools → MCP Servers, add the configuration from Step 2, and enable it. Claude can then start querying the web.
- Cursor IDE: Press Cmd + Shift + P, search for "Configure MCP Servers", add your Scrapeless MCP config, then save and restart. Now you can ask Cursor to search websites or scrape HTML through Scrapeless.
With these steps, you can install Scrapeless MCP and give your AI tools real-time web access, scraping, and automation. Just remember to replace the placeholders with your actual API key and follow the example configurations for a smooth setup.