
Chapter 9: HTTP Request Node and API Call Scenarios (Cat Facts, Weather, Web Scraping)

Video: Watch this chapter on YouTube (1:48:06)

Overview

This chapter explores the HTTP Request node in depth through three practical scenarios: calling a public API without authentication (Cat Facts), making authenticated API calls (OpenWeatherMap), and web scraping with AI extraction (Firecrawl). It also covers handling asynchronous API responses with loops.

Detailed Summary

The HTTP Request Node

The HTTP Request node is one of n8n's most versatile tools, enabling integration with virtually any API or web service. Key features:

  • Supports all HTTP methods (GET, POST, PUT, DELETE, etc.)
  • Can import cURL commands directly
  • Handles various authentication methods
  • Processes JSON, form data, and binary responses

Scenario 1: Public API Without Authentication (Cat Facts)

Setup

  1. Add Manual Trigger node (simplest trigger type)
  2. Add HTTP Request node
  3. Method: GET
  4. URL: https://catfact.ninja/fact
  5. No authentication required

Execution

Execute the step to receive a random cat fact:

Example output:

{
  "fact": "Two members of the cat family are distinct from all others: the clouded leopard and the cheetah..."
}

This demonstrates the simplest API call—no authentication, no parameters, instant response.
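Outside n8n, the same node configuration corresponds to a plain GET request. A minimal Python sketch of the equivalent call (assumes network access to catfact.ninja):

```python
import json
import urllib.request

def get_cat_fact(url: str = "https://catfact.ninja/fact") -> dict:
    """GET a random cat fact and parse the JSON response."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

fact = get_cat_fact()
print(fact["fact"])  # a random cat fact string
```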

Scenario 2: Authenticated API Call (OpenWeatherMap)

Getting the API Endpoint

  1. Go to openweathermap.org/current
  2. Find the API request format for city-based queries
  3. Copy the endpoint URL template

Basic Setup

  1. Create new HTTP Request node
  2. Method: GET
  3. URL: https://api.openweathermap.org/data/2.5/weather?q={city}&appid={API_KEY}
  4. Replace {city} with desired city (e.g., "London")
  5. Replace {API_KEY} with your OpenWeatherMap API key
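Filling the {city} and {API_KEY} placeholders is safer with proper URL encoding than with raw string substitution (city names can contain spaces). A sketch of building the request URL; the key value below is a placeholder, and the chapter later moves the real key into a header credential instead:

```python
from urllib.parse import urlencode

def build_weather_url(city: str, api_key: str) -> str:
    """Build the OpenWeatherMap current-weather URL with encoded query params."""
    base = "https://api.openweathermap.org/data/2.5/weather"
    return f"{base}?{urlencode({'q': city, 'appid': api_key})}"

print(build_weather_url("London", "YOUR_API_KEY"))
print(build_weather_url("New York", "YOUR_API_KEY"))  # space is encoded
```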

Getting an API Key

  1. Sign up at openweathermap.org (free)
  2. Complete onboarding questionnaire
  3. Go to My API Keys
  4. Copy existing key or generate new one

Setting Up Proper Authentication

Instead of hardcoding the API key in the URL:

  1. Go to Authentication section
  2. Select Generic Credential Type
  3. Choose Header Auth
  4. Create a new credential:
     • Name: "OpenWeatherMap Demo"
     • Header Name: x-api-key
     • Value: Your API key
  5. Save

Now remove the API key from the URL—authentication is handled by the credential.

Response Data

The API returns comprehensive weather data:

  • Temperature (current, min, max)
  • Cloud coverage
  • Humidity
  • Wind information
  • Location data
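By default, OpenWeatherMap reports temperatures in Kelvin. A sketch of pulling the useful fields out of the response and converting to Celsius; the sample payload below is abridged and illustrative, not a real API response:

```python
def summarize_weather(data: dict) -> dict:
    """Extract key fields from an OpenWeatherMap response; convert Kelvin to Celsius."""
    return {
        "city": data["name"],
        "temp_c": round(data["main"]["temp"] - 273.15, 1),
        "humidity_pct": data["main"]["humidity"],
        "clouds_pct": data["clouds"]["all"],
        "wind_mps": data["wind"]["speed"],
    }

# Abridged sample payload for illustration.
sample = {
    "name": "London",
    "main": {"temp": 288.15, "humidity": 72},
    "clouds": {"all": 40},
    "wind": {"speed": 4.1},
}
print(summarize_weather(sample))  # {'city': 'London', 'temp_c': 15.0, ...}
```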

Scenario 3: Web Scraping with Firecrawl

Understanding Firecrawl

Firecrawl is an AI-powered web scraping platform that can:

  • Scrape single pages or entire websites
  • Extract structured data using AI
  • Handle JavaScript-rendered content
  • Return markdown-formatted content

Getting Started with Firecrawl

  1. Sign up at firecrawl.dev
  2. Get API key from dashboard
  3. Note: Free tier available for testing

Basic Scrape Setup

  1. Go to Firecrawl API documentation
  2. Find the Scrape endpoint
  3. Copy the cURL command
  4. In n8n, add HTTP Request node
  5. Click Import cURL
  6. Paste the entire cURL command

Authentication Configuration

  1. Select Generic Credential Type, then Header Auth
  2. Create a new credential:
     • Name: "Firecrawl CodeCloud Demo"
     • Header Name: Authorization
     • Value: Bearer [YOUR_API_KEY]
  3. Toggle off the auto-populated headers section

Configuring the Request

In the JSON body, set the URL to scrape:

{
  "url": "https://techcrunch.com/category/artificial-intelligence/"
}
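What the node assembles from the imported cURL command can be sketched as a raw HTTP POST. The /v1/scrape path and Bearer header follow Firecrawl's documented cURL example, and the key below is a placeholder; the request is built but deliberately not sent here:

```python
import json
import urllib.request

def build_scrape_request(api_key: str, target_url: str) -> urllib.request.Request:
    """Build (but don't send) a Firecrawl scrape POST with Bearer auth."""
    body = json.dumps({"url": target_url}).encode("utf-8")
    return urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request(
    "YOUR_API_KEY",
    "https://techcrunch.com/category/artificial-intelligence/",
)
print(req.get_method(), req.full_url)
```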

Response Format

Firecrawl returns:

  • Markdown: Formatted content of the page
  • Metadata: Page title, description, etc.
  • Links: URLs found on the page

Scenario 4: Handling Async API Calls (Firecrawl Extract)

Some APIs process requests asynchronously, requiring a polling pattern.

Understanding Extract vs Scrape

  • Scrape: Returns raw page content immediately
  • Extract: Uses AI to extract specific structured data (takes time)

The Polling Pattern

  1. POST request initiates the extraction
  2. API returns a request ID
  3. Wait for processing
  4. GET request retrieves results using the request ID
  5. Check status—if not complete, wait and retry
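The polling pattern above can be sketched in code, with a stubbed status check standing in for the GET request. In the real workflow, poll() would hit the Firecrawl results endpoint; the stub here just pretends the job finishes on the third check:

```python
import time

def poll_until_complete(poll, wait_seconds: int = 30, max_attempts: int = 10) -> dict:
    """Repeatedly check job status until it reports 'completed' or attempts run out."""
    for _ in range(max_attempts):
        result = poll()
        if result.get("status") == "completed":
            return result
        time.sleep(wait_seconds)  # mirrors the n8n Wait node between checks
    raise TimeoutError("extraction did not complete in time")

# Stub: simulates an async job that finishes on the third poll.
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "completed", "data": {"headline": "example"}},
])
final = poll_until_complete(lambda: next(responses), wait_seconds=0)
print(final["status"])  # completed
```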

Building the Workflow

Step 1: POST Request (Extract)
  1. Import cURL from Firecrawl Extract documentation
  2. Configure the JSON body with:
     • URLs to extract from
     • A prompt describing what to extract
  3. Execute to get a request ID

Step 2: Wait Node
  1. Add a Wait node
  2. Set duration: 30
  3. Unit: Seconds

Important: Pin the POST node data to avoid repeated API calls during testing.

Step 3: GET Request (Poll)
  1. Import the GET cURL from the documentation
  2. Configure the URL with a dynamic request ID:
     Base URL + "/" + {{ $node["POST_NODE"].json.id }}
  3. Use the same authentication credential

Step 4: If Node for Status Check
  1. Add an If node after the GET request
  2. Condition: status equals "completed"
  3. True branch: continue to output
  4. False branch: wait again and retry

Step 5: Loop Configuration
  1. Add another Wait node (30 seconds) on the false branch
  2. Connect it back to the GET request node
  3. This creates a loop until the extraction completes

Complete Workflow Structure

Manual Trigger
  ↓
POST (Firecrawl Extract)
  ↓
Wait 30 seconds
  ↓
GET (Poll for result)
  ↓
If (status == completed)
   ├── True → Gmail (send results)
   └── False → Wait 30s → Loop back to GET

Output Node

Connect the true branch to your desired output:

  • Gmail for email delivery
  • Slack for messaging
  • Google Sheets for storage
  • Any other action node

Best Practices for HTTP Requests

  1. Pin data during development: Avoid repeated API calls
  2. Use credential types: Never hardcode API keys
  3. Handle async responses: Implement polling for long operations
  4. Set appropriate timeouts: Prevent workflow hangs
  5. Include error handling: Use If nodes to check response status

Key Takeaways

  1. HTTP Request is universal: Can connect to virtually any API with proper configuration.

  2. Import cURL saves time: Paste cURL commands to auto-configure nodes.

  3. Three auth patterns: No auth, header auth, and OAuth cover most APIs.

  4. Credential types are essential: Use them for security and reusability.

  5. Async APIs need polling: POST to start, GET to retrieve, loop until complete.

  6. If loops prevent errors: Check status before proceeding to avoid workflow failures.

  7. Pin data saves resources: Essential during development to avoid API costs.

  8. Wait nodes control timing: Give external systems time to process.

  9. Dynamic variables in URLs: Use expressions to insert request IDs and parameters.

  10. Output flexibility: Any node can receive the final data—Gmail, Slack, Sheets, etc.

Conclusion

The HTTP Request node unlocks n8n's full potential by enabling connections to any API-based service. From simple public APIs like Cat Facts to complex AI-powered extraction with Firecrawl, the patterns learned in this chapter apply universally. The polling pattern for async APIs is particularly important, as many modern services process requests asynchronously. Understanding how to chain POST requests with wait nodes and status-checking loops is essential for building robust production workflows. These skills prepare learners for the image and video generation workflows in subsequent chapters, which also rely on async API patterns.