Chapter 9: HTTP Request Node and API Call Scenarios (Cat Facts, Weather, Web Scraping)¶
Video: Watch this chapter on YouTube (1:48:06)
Overview¶
This chapter explores the HTTP Request node in depth through three practical scenarios: calling a public API without authentication (Cat Facts), making authenticated API calls (OpenWeatherMap), and web scraping with AI extraction (Firecrawl). It also covers handling asynchronous API responses with loops.
Detailed Summary¶
The HTTP Request Node¶
The HTTP Request node is one of n8n's most versatile tools, enabling integration with virtually any API or web service. Key features:
- Supports all HTTP methods (GET, POST, PUT, DELETE, etc.)
- Can import cURL commands directly
- Handles various authentication methods
- Processes JSON, form data, and binary responses
Scenario 1: Public API Without Authentication (Cat Facts)¶
Setup¶
- Add Manual Trigger node (simplest trigger type)
- Add HTTP Request node
- Method: GET
- URL: https://catfact.ninja/fact
- No authentication required
Execution¶
Execute the step to receive a random cat fact:
Example output:
```json
{
  "fact": "Two members of the cat family are distinct from all others: the clouded leopard and the cheetah..."
}
```
This demonstrates the simplest API call—no authentication, no parameters, instant response.
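For comparison, the same no-auth GET can be reproduced outside n8n with a few lines of Python. This is a minimal sketch using only the standard library; the helper names are my own:

```python
import json
from urllib import request

CATFACT_URL = "https://catfact.ninja/fact"  # public endpoint, no auth needed

def extract_fact(payload: dict) -> str:
    """Pull the fact string out of the API's JSON response."""
    return payload["fact"]

def fetch_cat_fact(url: str = CATFACT_URL) -> str:
    """Perform the same GET the HTTP Request node makes."""
    with request.urlopen(url, timeout=10) as resp:
        return extract_fact(json.load(resp))
```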
Scenario 2: Authenticated API Call (OpenWeatherMap)¶
Getting the API Endpoint¶
- Go to openweathermap.org/current
- Find the API request format for city-based queries
- Copy the endpoint URL template
Basic Setup¶
- Create new HTTP Request node
- Method: GET
- URL: https://api.openweathermap.org/data/2.5/weather?q={city}&appid={API_KEY}
- Replace {city} with the desired city (e.g., "London")
- Replace {API_KEY} with your OpenWeatherMap API key
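Substituting values into that URL template by hand is error-prone when city names contain spaces or special characters. A sketch of assembling it programmatically (the helper name is hypothetical):

```python
from urllib.parse import urlencode

BASE_URL = "https://api.openweathermap.org/data/2.5/weather"

def build_weather_url(city: str, api_key: str) -> str:
    # urlencode percent-escapes spaces and special characters in city names
    return f"{BASE_URL}?{urlencode({'q': city, 'appid': api_key})}"
```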
Getting an API Key¶
- Sign up at openweathermap.org (free)
- Complete onboarding questionnaire
- Go to My API Keys
- Copy existing key or generate new one
Setting Up Proper Authentication¶
Instead of hardcoding the API key in the URL:
- Go to Authentication section
- Select Generic Credential Type
- Choose Header Auth
- Create new credential:
- Name: "OpenWeatherMap Demo"
- Header Name: x-api-key
- Value: your API key
- Save
Now remove the API key from the URL—authentication is handled by the credential.
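Header Auth sends the key in a request header rather than as a query parameter. A rough Python equivalent of the pattern (the function name is mine; this illustrates the header-auth mechanism described above, not OpenWeatherMap's only accepted scheme):

```python
from urllib import request

def authed_request(url: str, api_key: str) -> request.Request:
    # The key travels in a header, mirroring n8n's Header Auth credential,
    # so it never appears in the URL (or in logs that record URLs)
    return request.Request(url, headers={"x-api-key": api_key})
```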
Response Data¶
The API returns comprehensive weather data:
- Temperature (current, min, max)
- Cloud coverage
- Humidity
- Wind information
- Location data
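Note that OpenWeatherMap reports temperatures in Kelvin unless you request other units via the `units` parameter. A small converter for the default response (helper name is mine):

```python
def kelvin_to_celsius(kelvin: float) -> float:
    # OpenWeatherMap's default unit for temperatures is Kelvin
    return round(kelvin - 273.15, 2)
```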
Scenario 3: Web Scraping with Firecrawl¶
Understanding Firecrawl¶
Firecrawl is an AI-powered web scraping platform that can:
- Scrape single pages or entire websites
- Extract structured data using AI
- Handle JavaScript-rendered content
- Return markdown-formatted content
Getting Started with Firecrawl¶
- Sign up at firecrawl.dev
- Get an API key from the dashboard
- Note: Free tier available for testing
Basic Scrape Setup¶
- Go to Firecrawl API documentation
- Find the Scrape endpoint
- Copy the cURL command
- In n8n, add HTTP Request node
- Click Import cURL
- Paste the entire cURL command
Authentication Configuration¶
- Select Generic Credential Type → Header Auth
- Create credential:
- Name: "Firecrawl CodeCloud Demo"
- Header Name: Authorization
- Value: Bearer [YOUR_API_KEY]
- Toggle off the auto-populated headers section
Configuring the Request¶
In the JSON body, set the URL to scrape:
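A minimal request body for this step might look like the following (field names are based on Firecrawl's public Scrape API and may change; verify against the current API reference):

```json
{
  "url": "https://example.com/page-to-scrape",
  "formats": ["markdown"]
}
```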
Response Format¶
Firecrawl returns:
- Markdown: formatted content of the page
- Metadata: page title, description, etc.
- Links: URLs found on the page
Scenario 4: Handling Async API Calls (Firecrawl Extract)¶
Some APIs process requests asynchronously, requiring a polling pattern.
Understanding Extract vs Scrape¶
- Scrape: returns raw page content immediately
- Extract: uses AI to extract specific structured data (takes time)
The Polling Pattern¶
- POST request initiates the extraction
- API returns a request ID
- Wait for processing
- GET request retrieves results using the request ID
- Check status—if not complete, wait and retry
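The five steps above can be sketched as a single loop. Here `fetch_status` stands in for the GET request, and the field names mirror the chapter's status check rather than Firecrawl's exact response schema:

```python
import time
from typing import Callable, Dict

def poll_until_complete(
    fetch_status: Callable[[], Dict],
    interval_s: float = 30,
    max_attempts: int = 10,
) -> Dict:
    """Repeatedly check the result endpoint until the job reports 'completed'."""
    for _ in range(max_attempts):
        result = fetch_status()
        if result.get("status") == "completed":
            return result
        time.sleep(interval_s)  # mirrors the Wait node between polls
    raise TimeoutError("extraction did not complete within max_attempts")
```

Capping attempts (rather than looping forever) is the safety net a production workflow needs if the external job never finishes.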
Building the Workflow¶
Step 1: POST Request (Extract)¶
- Import cURL from Firecrawl Extract documentation
- Configure JSON body with:
- URLs to extract from
- Prompt describing what to extract
- Execute to get request ID
Step 2: Wait Node¶
- Add Wait node
- Set duration: 30 seconds
- Unit: Seconds
Important: Pin the POST node data to avoid repeated API calls during testing.
Step 3: GET Request (Poll)¶
- Import GET cURL from documentation
- Configure URL with dynamic request ID:
- Base URL + / + {{$node['POST_NODE'].json.id}}
- Use the same authentication credential
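Joining the base URL and the request ID can be sketched as follows (the endpoint shown is a placeholder, not Firecrawl's actual path):

```python
def poll_url(base: str, request_id: str) -> str:
    # Equivalent to the n8n expression: base URL + / + {{$node['POST_NODE'].json.id}}
    return f"{base.rstrip('/')}/{request_id}"
```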
Step 4: If Loop for Status Check¶
- Add If node after GET request
- Condition: status equals completed
- True branch: continue to output
- False branch: Wait again and retry
Step 5: Loop Configuration¶
- Add another Wait node (30 seconds) on false branch
- Connect back to GET request node
- Creates a loop until extraction completes
Complete Workflow Structure¶
Manual Trigger
↓
POST (Firecrawl Extract)
↓
Wait 30 seconds
↓
GET (Poll for result)
↓
If (status == completed)
├── True → Gmail (send results)
└── False → Wait 30s → Loop back to GET
Output Node¶
Connect the true branch to your desired output:
- Gmail for email delivery
- Slack for messaging
- Google Sheets for storage
- Any other action node
Best Practices for HTTP Requests¶
- Pin data during development: Avoid repeated API calls
- Use credential types: Never hardcode API keys
- Handle async responses: Implement polling for long operations
- Set appropriate timeouts: Prevent workflow hangs
- Include error handling: Use If nodes to check response status
Key Takeaways¶
- HTTP Request is universal: Can connect to virtually any API with proper configuration.
- Import cURL saves time: Paste cURL commands to auto-configure nodes.
- Three auth patterns: No auth, header auth, and OAuth cover most APIs.
- Credential types are essential: Use them for security and reusability.
- Async APIs need polling: POST to start, GET to retrieve, loop until complete.
- If loops prevent errors: Check status before proceeding to avoid workflow failures.
- Pin data saves resources: Essential during development to avoid API costs.
- Wait nodes control timing: Give external systems time to process.
- Dynamic variables in URLs: Use expressions to insert request IDs and parameters.
- Output flexibility: Any node can receive the final data—Gmail, Slack, Sheets, etc.
Conclusion¶
The HTTP Request node unlocks n8n's full potential by enabling connections to any API-based service. From simple public APIs like Cat Facts to complex AI-powered extraction with Firecrawl, the patterns learned in this chapter apply universally. The polling pattern for async APIs is particularly important, as many modern services process requests asynchronously. Understanding how to chain POST requests with wait nodes and status-checking loops is essential for building robust production workflows. These skills prepare learners for the image and video generation workflows in subsequent chapters, which also rely on async API patterns.