N8N Integration

Step-by-step guide to using the Tuberalytics API with N8N's HTTP Request node. Works with both N8N Cloud and self-hosted instances.

Prerequisites

  • A Tuberalytics account with an API key (generate one at API Keys)
  • An N8N instance (cloud or self-hosted)

Creating Reusable Credentials

Set up Header Auth credentials so you don't have to configure authentication on every node.

  1. Go to Settings > Credentials in N8N
  2. Click Add Credential and search for Header Auth
  3. Configure:
    • Name: Tuberalytics API
    • Header Name: Authorization
    • Header Value: Bearer sk_live_your_api_key
  4. Click Save

Now select this credential in any HTTP Request node's Authentication dropdown.
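Under the hood, the credential simply attaches an `Authorization` header to every request. A minimal sketch of the equivalent request construction (the key value is the same placeholder as above, not a real key):

```javascript
// What the Header Auth credential adds to each request, as plain code.
// API_KEY is a placeholder — use your real key from the API Keys page.
const API_KEY = "sk_live_your_api_key";

function buildRequest(url, params = {}) {
  const query = new URLSearchParams(params).toString();
  return {
    url: query ? `${url}?${query}` : url,
    headers: { Authorization: `Bearer ${API_KEY}` },
  };
}

const req = buildRequest("https://tuberalytics.com/api/v1/channels/search", {
  q: "ai automation",
  per_page: 25,
});
console.log(req.headers.Authorization); // Bearer sk_live_your_api_key
```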


Example 1: Search for Channels

Search for YouTube channels by keyword and process the results.

Step 1: Add an HTTP Request Node

  1. Add an HTTP Request node to your workflow
  2. Configure it:
    • Method: GET
    • URL: https://tuberalytics.com/api/v1/channels/search
    • Authentication: Header Auth > Tuberalytics API
    • Query Parameters:
      • q = ai automation
      • per_page = 25
  3. Click Test step to execute

Step 2: Access the Response

N8N automatically parses the JSON response. Access fields using expressions:

  • {{ $json.data }} — array of channel objects
  • {{ $json.data[0].title }} — first channel's name
  • {{ $json.meta.total_count }} — total matching channels

Example Response

{
  "data": [
    {
      "id": 170,
      "title": "Nick Saraev",
      "youtube_id": "UCxxxx",
      "subscriber_count": 250000,
      "video_count": 180
    }
  ],
  "meta": {
    "current_page": 1,
    "per_page": 25,
    "total_pages": 3,
    "total_count": 62
  }
}
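The N8N expressions above map directly onto plain property access. Shown here on the example response as ordinary JavaScript:

```javascript
// The same fields the N8N expressions reference, on the example response.
const response = {
  data: [
    { id: 170, title: "Nick Saraev", youtube_id: "UCxxxx",
      subscriber_count: 250000, video_count: 180 },
  ],
  meta: { current_page: 1, per_page: 25, total_pages: 3, total_count: 62 },
};

const firstTitle = response.data[0].title;      // {{ $json.data[0].title }}
const totalCount = response.meta.total_count;   // {{ $json.meta.total_count }}
const hasMorePages = response.meta.current_page < response.meta.total_pages;
```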

Example 2: Trigger Analysis and Wait for Completion

Run AI analysis on a channel. Since analysis is async, you need to wait for the result.

The channel ID is passed in the URL path (e.g., /channels/170/analyze). Use the id field from a previous search result or from POST /api/v1/me/channels response. No request body is needed.

Option A: Fire and Forget with Webhook

  1. HTTP Request — POST https://tuberalytics.com/api/v1/channels/{{ $json.data[0].id }}/analyze

    • Authentication: Header Auth > Tuberalytics API
    • No body required
    • Replace {{ $json.data[0].id }} with an expression referencing the channel ID from a previous node
  2. Create a separate workflow triggered by a Webhook node:

    • Set up a Webhook endpoint in Tuberalytics pointing to your N8N webhook URL
    • Subscribe to channel.analysis.completed events
    • Process the analysis result when the webhook fires

Option B: Polling with Wait Node

  1. HTTP Request — POST https://tuberalytics.com/api/v1/channels/{{ $json.data[0].id }}/analyze
  2. Wait — Pause for 60 seconds
  3. HTTP Request — GET https://tuberalytics.com/api/v1/channels/{{ $json.data[0].id }}/profile
  4. IF — Check if {{ $json.data }} exists. If not, loop back to the Wait node.

POST Response (HTTP 202)

{
  "data": {
    "status": "pending",
    "analysis_id": 456,
    "message": "Channel analysis queued."
  }
}

Recommended: Use the webhook approach. It's more reliable and doesn't waste workflow executions on polling.
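If you do use Option B, the Wait/IF loop amounts to the following polling logic. This is a sketch: `getProfile` is a stand-in for the GET profile request (for instance an HTTP call in a Code node), and the interval mirrors the 60-second Wait node:

```javascript
// Option B's polling loop as plain code. `getProfile` stands in for
// GET /api/v1/channels/:id/profile and should return the parsed JSON.
async function waitForAnalysis(getProfile, { intervalMs = 60000, maxAttempts = 10 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const profile = await getProfile();
    if (profile && profile.data) return profile.data; // analysis finished
    // Not ready yet — pause before the next check, like the Wait node.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Analysis did not complete in time");
}
```

Capping attempts matters: without `maxAttempts`, a failed analysis would poll forever.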


Example 3: Niche Research to Slack Summary

Build a workflow that researches a niche and sends a summary to Slack.

Workflow Nodes

  1. Schedule Trigger — Run every Monday at 9am
  2. HTTP Request — Search niches: GET /api/v1/niches/search?q=ai+automation
  3. HTTP Request — Get niche channels: GET /api/v1/niches/{{ $json.data[0].id }}/channels?per_page=10
  4. HTTP Request — Get leaderboard: GET /api/v1/leaderboards/videos?niche_id={{ $json.data[0].id }}&period=week
  5. Code — Format the data into a Slack message
  6. Slack — Send message to a channel

Code Node (Format Message)

// Reference the two earlier HTTP Request nodes by name — in a linear
// workflow, $input only carries the immediately previous node's output,
// so pull each dataset directly. Adjust the node names to match yours.
const channels = $('Get niche channels').first().json.data;
const videos = $('Get leaderboard').first().json.data;

let message = "*Weekly Niche Report: AI Automation*\n\n";
message += "*Top Channels:*\n";
channels.slice(0, 5).forEach((ch, i) => {
  message += `${i + 1}. ${ch.title} — ${ch.subscriber_count.toLocaleString()} subs\n`;
});

message += "\n*Top Videos This Week:*\n";
videos.slice(0, 5).forEach((v, i) => {
  message += `${i + 1}. ${v.title} — ${v.view_count.toLocaleString()} views\n`;
});

return [{ json: { message } }];

Working with Pagination

Process all pages of results using N8N's Loop Over Items node.

Pattern: Pagination Loop

  1. Set — Initialize page = 1 and hasMore = true
  2. Loop Over Items — Loop while hasMore is true
  3. HTTP Request — GET /api/v1/channels/search?q=ai+automation&page={{ $json.page }}&per_page=100
  4. Your processing nodes — Process data[] items
  5. IF — Check {{ $json.meta.current_page < $json.meta.total_pages }}
    • True: Set page = {{ $json.page + 1 }}, continue loop
    • False: Set hasMore = false, exit loop

Tip: Use per_page=100 (the maximum) to reduce API calls. Check Rate Limits for your tier's limits.
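The loop above can be sketched as plain code. `fetchPage` is a stand-in for the HTTP Request node; it should return the parsed JSON for a given page number:

```javascript
// The pagination pattern as plain code: keep requesting pages until
// meta.current_page reaches meta.total_pages, collecting data[] items.
async function fetchAllPages(fetchPage) {
  const items = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const res = await fetchPage(page);
    items.push(...res.data);
    hasMore = res.meta.current_page < res.meta.total_pages;
    page += 1;
  }
  return items;
}
```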


Error Handling

Retry on Failure

Configure automatic retries on each HTTP Request node:

  1. Open the node's Settings tab
  2. Enable Retry On Fail
  3. Set:
    • Max Retries: 3
    • Wait Between Retries: 60000 ms (60 seconds — allows rate limits to reset)

Error Workflow

Create a dedicated error workflow for alerting:

  1. Go to Settings > Error Workflow in your workflow settings
  2. Select or create an error-handling workflow
  3. In the error workflow, add a Slack or Email node to notify you of failures

Common Error Responses

  • 401 — Invalid API key. Verify your Header Auth credential.
  • 404 — Resource not found. Check that the channel/niche ID exists.
  • 422 — Validation error. Read the error field for details.
  • 429 — Rate limit exceeded. Retry after 60 seconds (auto-retry handles this).
  • 500 — Server error. Retry after a short delay.
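The table boils down to a simple rule: retry on 429 and 5xx, fix the request on everything else. A sketch of that decision, useful if you branch on status codes in a Code node:

```javascript
// Which statuses from the table are worth retrying automatically.
function shouldRetry(status) {
  if (status === 429) return true; // rate limit: wait, then retry
  if (status >= 500) return true;  // server error: retry after a short delay
  return false;                    // 401/404/422: fix the credential/ID/params
}
```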

Tips and Best Practices

  • Use the Schedule Trigger node for recurring research. Weekly or daily runs are ideal for niche monitoring without burning through API quota.
  • Use expressions for dynamic URLs. Reference data from previous nodes: https://tuberalytics.com/api/v1/channels/{{ $json.data[0].id }}/videos
  • Split large batches. When processing many channels, use the Split In Batches node to process 10 at a time. This prevents rate limit errors and keeps workflows manageable.
  • Cache with a database. Connect a Postgres, MySQL, or Google Sheets node to store results. Only fetch data that's changed since your last run by comparing timestamps.
  • Pin test data. Use N8N's pin data feature to save example API responses while building your workflow. This avoids making real API calls during development.
  • Use sub-workflows for reusable API patterns. Create a sub-workflow for "get channel with profile" that you can call from any parent workflow.