From Git Push to Social Post: Building a Registry-Driven SEO and Automation Pipeline

How a TypeScript registry pattern powers SEO, routing, and fully automated social media publishing for a Nuxt static site, using GitHub Actions, GitHub Secrets, and a self-hosted n8n instance on a VPS.


✍️ Gianluca


When you run a content platform that spans tools, articles, games, and resource directories, keeping SEO metadata, routing, and social announcements consistent across all of it becomes a real engineering problem. The naive solution is to manage everything manually: update the sitemap by hand, write the Open Graph tags per page, post to social channels after each publish. It works, until it does not. Entries get forgotten, metadata drifts out of sync, and the friction of manual publishing slows everything down.

This article documents the architecture behind CodeHelper, a developer platform built with Nuxt 4 and deployed as a fully static site. The system uses a small set of TypeScript registry files as the single source of truth for every piece of content. A git push to the main branch is enough to trigger a static site build, an FTP deploy, and a fully automated social media announcement pipeline spanning LinkedIn, Discord, and Telegram. No manual steps. No forgotten posts. No metadata inconsistencies.

The Registry Pattern: One File to Rule All Metadata

The foundation of the system is a single TypeScript object that maps a slug key to everything the platform needs to know about a piece of content. For tools, articles, and games, this means the title, description, the lazily imported Vue component, and a full SEO block containing the meta description, Open Graph image, keywords, JSON-LD structured data, and an FAQ schema for Google's rich results.

Every dynamic route reads from exactly one registry file. The page at /tools/json-formatter/ does not have its own metadata hardcoded. It resolves the slug from the URL, looks it up in toolRegistry.ts, and injects the SEO head tags from the registry entry. Adding new content means editing one registry file and creating the Vue component. Nothing else needs to change.

// utils/articlesRegistry.ts — simplified shape
export const articles: Record<string, Article> = {
  'registry-driven-automation': {
    title: 'From Git Push to Social Post',
    description: 'How registry files drive SEO and automation.',
    category: 'tutorials',
    component: () =>
      import('~/components/articles/tutorials/RegistryDrivenAutomation.vue'),
    seo: {
      title: 'Registry-Driven SEO and Automation – CodeHelper',
      description: 'How a TypeScript registry pattern powers SEO, routing, and social automation.',
      keywords: 'nuxt seo automation, github actions, n8n webhook, registry pattern',
      structuredData: {
        '@type': 'TechArticle',
        headline: 'From Git Push to Social Post',
        datePublished: '2026-03-20',
        // ...
      },
      faqSchema: { /* FAQ rich results for Google */ }
    }
  }
}
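The dynamic-route lookup described above reduces to a plain registry lookup. The following is an illustrative sketch, not the actual CodeHelper code: the entry shape is trimmed down and the helper name resolveSeo is an assumption.

```typescript
// Illustrative sketch: resolve a URL slug against a registry and return its SEO block.
// The registry shape loosely mirrors the simplified Article type above.
interface SeoBlock {
  title: string;
  description: string;
}

interface RegistryEntry {
  title: string;
  seo: SeoBlock;
}

const toolRegistry: Record<string, RegistryEntry> = {
  'json-formatter': {
    title: 'JSON Formatter',
    seo: {
      title: 'JSON Formatter – CodeHelper',
      description: 'Format and validate JSON in the browser.',
    },
  },
};

// What a dynamic route handler would do: look up the slug, fail loudly on a miss
// so a typo in a link surfaces at build time rather than shipping an empty page.
function resolveSeo(slug: string): SeoBlock {
  const entry = toolRegistry[slug];
  if (!entry) throw new Error(`No registry entry for slug: ${slug}`);
  return entry.seo;
}
```

Because every page goes through the same lookup, a missing or malformed entry fails in one predictable place instead of silently rendering a page without head tags.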

Why the Registry Is the Contract

The Nuxt configuration reads all registry keys at build time to generate the full list of prerendered routes. The sitemap module picks up the same list automatically. A single composable, usePageSeo(key), reads the registry entry and injects the complete head block in one call: useSeoMeta(), canonical link, breadcrumb JSON-LD, and Organization structured data. Individual page components contain no SEO logic at all.
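The route derivation is the simplest part of that contract. A minimal sketch of how registry keys could become the prerender list, with placeholder registry contents and assumed route prefixes:

```typescript
// Sketch: derive prerendered routes from registry keys at build time.
// Registry contents are placeholders; the /articles/ and /tools/ prefixes are assumptions.
const articles: Record<string, unknown> = {
  'registry-driven-automation': {},
  'nuxt-seo-basics': {},
};
const tools: Record<string, unknown> = {
  'json-formatter': {},
};

function registryRoutes(prefix: string, registry: Record<string, unknown>): string[] {
  return Object.keys(registry).map((slug) => `/${prefix}/${slug}/`);
}

// In nuxt.config.ts this list would feed nitro.prerender.routes and the
// sitemap module, so both always agree with the registries by construction.
const prerenderRoutes = [
  ...registryRoutes('articles', articles),
  ...registryRoutes('tools', tools),
];
```

Since the sitemap and the prerender list are computed from the same keys, there is no way for a published page to be missing from the sitemap.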

The Deploy Pipeline: GitHub Actions and Static Site Generation

The site is built and deployed entirely through GitHub Actions. When a push lands on the main branch, a workflow named "Build and Deploy via FTP" triggers automatically. It installs dependencies, runs nuxi generate to produce a fully static output in .output/public/, injects any runtime secrets using GitHub Secrets, and transfers the static build to the production server via FTPS.

# .github/workflows/deploy.yml
name: Build and Deploy via FTP

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm

      - name: Install dependencies
        run: npm ci

      - name: Generate static site
        run: npx nuxi generate

      - name: Deploy via FTP
        uses: SamKirkland/FTP-Deploy-Action@v4.3.5
        with:
          server: ${{ secrets.FTP_SERVER }}
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          local-dir: .output/public/
          server-dir: ${{ secrets.FTP_SERVER_DIR }}
          protocol: ftps

Sensitive values never appear in the repository. They are stored as GitHub Secrets and injected at runtime. The FTP deploy action uses delta sync, transferring only files that changed since the last deployment, which keeps deploy times short as the site grows.

Content Detection: Watching the Registry Diff

The social automation pipeline begins with a second GitHub Actions workflow that runs in parallel with the deploy: "Social Media Notify". This workflow uses a path filter so it only activates when one of the three content registry files changes. If a commit touches only a Vue component or a CSS file, the social workflow does not run.

# .github/workflows/social-notify.yml
on:
  push:
    branches: [main]
    paths:
      - 'utils/toolRegistry.ts'
      - 'utils/articlesRegistry.ts'
      - 'utils/gamesRegistry.ts'
  workflow_dispatch:  # manual trigger for testing and recovery

When the workflow activates, the first job runs a Node.js script called detect-new-content.js. This script uses git diff HEAD~1 HEAD to compare the current commit with the previous one and extracts any new slug entries from the registry files. For each new entry it finds, it builds a structured JSON object with the content type, slug, title, description, and canonical URL.

// Payload sent per new content item
{
  "type": "articles",       // "tools" or "games"
  "slug": "registry-driven-automation",
  "title": "From Git Push to Social Post",
  "description": "How registry files drive SEO and automation.",
  "url": "https://codehelper.me/articles/registry-driven-automation/"
}
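The slug extraction in detect-new-content.js can be sketched as a small diff parser. This is a hedged approximation, not the actual script: the added-line regex assumes slugs appear as quoted top-level keys in the style shown in the registry example earlier.

```typescript
// Sketch: extract newly added slug keys from a unified diff of a registry file,
// e.g. the output of `git diff HEAD~1 HEAD -- utils/articlesRegistry.ts`.
// Assumes new entries look like:  'registry-driven-automation': {
function extractNewSlugs(diff: string): string[] {
  const slugs: string[] = [];
  for (const line of diff.split('\n')) {
    // Only consider lines added in this commit; skip the +++ file header.
    if (!line.startsWith('+') || line.startsWith('+++')) continue;
    const match = line.match(/^\+\s*'([a-z0-9-]+)':\s*\{/);
    if (match) slugs.push(match[1]);
  }
  return slugs;
}
```

Each extracted slug would then be joined with the registry entry's title and description to build the webhook payload shown above.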

The script outputs the detected items as JSON arrays into the GitHub Actions step outputs. The second job reads these arrays and sends one authenticated HTTP POST per item to the n8n webhook. If multiple pieces of content are published in a single commit, each generates its own webhook call, with a one-second delay between calls to avoid rate limits.

# notify-n8n job: send each detected item to n8n
- name: Send to n8n webhook
  env:
    # Pass the step output through an env var instead of interpolating it
    # directly into the script: this avoids quoting breakage (and script
    # injection) if a title contains quotes or shell metacharacters.
    NEW_ARTICLES: ${{ needs.detect.outputs.new_articles }}
  run: |
    echo "$NEW_ARTICLES" | jq -c '.[]' | while read -r item; do
      curl -sf -X POST "${{ secrets.N8N_WEBHOOK_URL }}" \
        -H "Content-Type: application/json" \
        -H "X-Webhook-Secret: ${{ secrets.N8N_WEBHOOK_SECRET }}" \
        -d "$item"
      sleep 1
    done

Webhook Security

The webhook is protected with a shared secret passed as a custom header (X-Webhook-Secret). The secret is stored both as a GitHub Secret and as an n8n credential. Requests without the correct header are rejected with a 403 response before they reach the workflow logic. The webhook URL itself should also be treated as a secret, stored in GitHub Secrets rather than hardcoded in workflow files.
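On the receiving side, the header check could be expressed in an n8n Code node along these lines. This is a generic sketch under stated assumptions, not the exact node configuration: the function name is invented, and only the X-Webhook-Secret header name comes from the setup above.

```typescript
import { timingSafeEqual } from 'node:crypto';

// Sketch: verify a shared-secret header using a constant-time comparison,
// so the check does not leak how many leading characters matched.
function verifyWebhookSecret(
  headers: Record<string, string | undefined>,
  expected: string,
): boolean {
  const received = headers['x-webhook-secret'] ?? '';
  const a = Buffer.from(received);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard first.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

A plain string comparison would work too, but the constant-time variant is the conventional choice for secret checks exposed to the internet.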

n8n on a VPS: Self-Hosted Automation on the Same Server

Rather than using a third-party automation platform with usage limits, CodeHelper runs n8n self-hosted inside a Docker container on the same VPS that serves the site. Everything lives in one place: the Nginx reverse proxy, the site files, and the automation engine all share the same machine. Managing the stack means managing one server.

n8n is exposed through a dedicated subdomain (n8n.yourdomain.com) with SSL handled by the same Nginx setup that manages the main site. The n8n container listens on port 5678 internally and is never exposed directly to the internet. All external traffic reaches it through the Nginx reverse proxy, which terminates SSL and forwards requests to the container.

# Run n8n in Docker — bound to localhost only
docker run -d \
  --name n8n \
  --restart unless-stopped \
  -p 127.0.0.1:5678:5678 \
  -e N8N_HOST=n8n.yourdomain.com \
  -e N8N_PORT=5678 \
  -e WEBHOOK_URL=https://n8n.yourdomain.com/ \
  -e N8N_PROXY_HOPS=1 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n

Nginx Reverse Proxy for the n8n Subdomain

server {
    listen 443 ssl;
    server_name n8n.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;

    location / {
        proxy_pass         http://127.0.0.1:5678;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection 'upgrade';
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
    }
}

Binding the Docker port to 127.0.0.1:5678 instead of 0.0.0.0:5678 ensures the container port is only reachable from localhost, not from the public internet. There are no port conflicts with Nginx because Nginx occupies ports 80 and 443 while n8n only listens internally.

The n8n Workflow: Routing by Content Type

Inside n8n, the workflow receives the webhook payload and uses an If node to check whether the environment is set to production or maintenance mode. In production mode, the flow continues to the platform posting nodes. In maintenance mode, it sends a private alert to the admin and stops, which is useful during deploys or outages.

The LinkedIn node creates a post with the article or tool link set as the Original URL field, causing LinkedIn to fetch the Open Graph tags and render a preview card. The Discord node sends a formatted message to the announcements channel. The Telegram nodes post to the public channel and send a private confirmation to the admin. The content type field in the payload drives the formatting through inline ternary expressions, without any additional branching nodes.

// n8n expression: content-type-aware formatting
{{ $json.body.type === 'tools' ? '🛠️' : $json.body.type === 'games' ? '🎮' : '📖' }} New on CodeHelper:

{{ $json.body.title }}
{{ $json.body.description }}

Check it out: {{ $json.body.url }}

{{ $json.body.type === 'tools'
  ? '#DevTools #WebDev #JavaScript #Programming'
  : '#WebDevelopment #JavaScript #Programming #DevTools' }}

The Full Pipeline at a Glance

Developer adds entry to toolRegistry.ts or articlesRegistry.ts
  └── git push to main

        ├── deploy.yml
        │     ├── npm ci
        │     ├── nuxi generate (all routes derived from registry keys)
        │     └── FTP deploy to VPS (.output/public/)
        │
        └── social-notify.yml
              ├── detect-new-content.js
              │     └── git diff → extract new slugs, titles, URLs
              └── notify-n8n job
                    └── curl POST to https://n8n.yourdomain.com/webhook/codehelper
                          Header: X-Webhook-Secret: ***
                          Body: { type, slug, title, description, url }

                                └── n8n workflow
                                      ├── LinkedIn node → post with OG preview
                                      ├── Discord node → #announcements channel
                                      ├── Telegram node → @channel public post
                                      └── Telegram node → private admin confirmation

GitHub Secrets: What Goes Where

The system uses a clean separation between credentials that belong to GitHub and credentials that belong to n8n. GitHub only needs N8N_WEBHOOK_URL and N8N_WEBHOOK_SECRET to call the automation layer, plus the FTP credentials for the deploy. All platform-specific credentials (LinkedIn OAuth tokens, Discord webhook URLs, Telegram bot tokens) are stored inside n8n, which handles OAuth flows and token refresh independently. Adding a new social platform means adding a node in n8n. The GitHub Actions side does not change.

Why This Architecture Scales

The reason this system works well as the platform grows is that every new piece of content follows exactly the same path. There are no special cases. A new tool, a new article, and a new game all go through the same registry entry, the same deploy workflow, and the same social notification pipeline. The developer adding the content does not need to think about SEO tags, sitemap updates, feed entries, or social announcements. All of that is handled by the pipeline as a direct consequence of editing the registry.

The best publishing workflows are the ones where the developer only has to think about the content. Infrastructure, SEO, and distribution should all follow automatically from a single, well-structured source of truth. The registry is that source of truth. Everything else reads from it.

Sources and Further Reading

Official Nuxt documentation on SEO and meta injection: nuxt.com/docs/getting-started/seo-meta
GitHub Actions secrets management: GitHub Docs — Using Secrets in GitHub Actions
n8n self-hosted Docker installation: docs.n8n.io/hosting/installation/docker
Google structured data specification for articles: Google Search Central — Article structured data

Published March 2026. This is a technical walkthrough based on the actual CodeHelper infrastructure. CodeHelper has no commercial relationship with GitHub, n8n, or any tool mentioned in this article.