Using Resend as a lightweight CRM

I spent the last few weeks integrating Resend into IdeaTwister, an idea mutation engine that turns one concept into 50+ ranked variations. The integration covers lead capture from a handful of forms, contact properties acting as a lightweight CRM, recovery drips for abandoned checkouts, and an inbound webhook that catches replies and pauses every active drip the moment a human says something back.

The official Resend docs are good, but a few decisions I had to make weren't covered in detail, and one or two gotchas cost me real debugging time. This post is the version of those docs I wish I'd had before I started. It walks through every piece in the order I built them, with the question I was trying to answer at each step and the solution I landed on.

Published 2026-05-07 · 12 minute read

What's in this post

  • Capturing leads from your website
  • Designing contact properties as a lightweight CRM
  • Adding new properties without breaking your rate limit
  • A reusable rate-limit guard for the rest of your app
  • Recovering checkouts that never finished
  • Stopping two automations from emailing the same person at once
  • Receiving and forwarding email replies
  • Verifying webhook signatures (and three things people miss)
  • What I'd do differently next time

Capturing leads from your website

The first thing I needed was a single backend endpoint that all the lead-capture surfaces on the website could submit to. Lead magnets, exit-intent popups, "notify me" buttons. Each one is a different React component, but they all do the same backend work: validate the email, create a contact in Resend, attach whatever context the page knows about the visitor.

In a Next.js App Router project that's a single file:

// app/api/intake/route.ts
import { NextRequest, NextResponse } from 'next/server'
import disposableDomains from 'disposable-email-domains'

const DISPOSABLE = new Set(disposableDomains)
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]{2,}$/

export async function POST(request: NextRequest) {
  const { email, name, elapsedMs } = await request.json()
  // ... validation, contact create, property update
}

Four things happen on every submission:

  1. Cheap bot check
  2. Email validation
  3. Contact create in Resend
  4. Property update with attribution context

The bot check turned out to be the most interesting one. I started with a hidden honeypot field, the classic approach where a bot-only input gets filled and any submission with it set is rejected. Browser autofill kept tripping it for real users, so I switched to a timing check. The form records when it first opened, the submit handler computes elapsed milliseconds, and the server rejects anything under 1.5 seconds. Real humans take longer than that to type and click. Bots that scrape and POST instantly fail. To them I return a 200 with a generic success message so they think it worked and stop retrying.

if (typeof elapsedMs === 'number' && elapsedMs < 1500) {
  return NextResponse.json(
    { success: true, message: 'Submission received' },
    { status: 200 },
  )
}
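
On the client side, the form just records when it first rendered and ships the difference. A minimal sketch, assuming a plain React client component; only elapsedMs and the /api/intake path come from the route above, the rest is illustrative:

// components/LeadForm.tsx (illustrative sketch)
'use client'

import { useRef, useState, type FormEvent } from 'react'

export function LeadForm() {
  // Captured once on first render; the ref keeps the original value across re-renders.
  const openedAt = useRef(Date.now())
  const [email, setEmail] = useState('')

  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault()
    await fetch('/api/intake', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        email,
        // Humans take seconds to type; bots that POST instantly fail the server's 1.5s check.
        elapsedMs: Date.now() - openedAt.current,
      }),
    })
  }

  return (
    <form onSubmit={handleSubmit}>
      <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} required />
      <button type="submit">Notify me</button>
    </form>
  )
}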

For email validation I use a deliberately strict regex and a disposable-domain block list. The disposable-email-domains npm package is well maintained and covers thousands of throwaway providers. The point isn't to validate every edge of RFC 5322, just to reject obvious junk before it reaches Resend.
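
Those two checks slot into the handler right after the bot check. Something like this, with the 422 status and error copy being arbitrary choices:

if (typeof email !== 'string' || !EMAIL_RE.test(email)) {
  return NextResponse.json({ error: 'Please enter a valid email.' }, { status: 422 })
}

const domain = email.split('@')[1]?.toLowerCase()
if (domain && DISPOSABLE.has(domain)) {
  return NextResponse.json({ error: 'Please use a permanent email address.' }, { status: 422 })
}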

Splitting contact create from property update turned out to be a pattern worth recommending. The Resend contacts.create call is one shot, but properties get attached over time as you learn more about the contact. I have a single helper called recordContactContext that takes any subset of properties and updates only what's set. The same function gets called from the lead form, the checkout endpoint, the order-completion webhook, and the inbound-reply handler. One write path, many entry points.
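
I won't paste the real helper, but stripped down it looks roughly like this. One loud assumption: I'm treating custom property values as something you can pass alongside the built-in fields on contacts.update, so check the current SDK docs for the exact payload shape before copying it.

// lib/recordContactContext.ts (sketch of the shared write path)
import { Resend } from 'resend'

const resend = new Resend(process.env.RESEND_API_KEY)

interface ContactContext {
  email: string
  funnelStatus?: string
  funnelStatusAt?: string
  checkoutStartedAt?: string
  repliedAt?: string
  lastUtmSource?: string
  // ...one optional field per property you track
}

export async function recordContactContext(ctx: ContactContext) {
  // Only keys that were actually passed in survive, so every caller can
  // supply its own subset without clobbering values set elsewhere.
  const properties = Object.fromEntries(
    Object.entries({
      funnel_status: ctx.funnelStatus,
      funnel_status_at: ctx.funnelStatusAt,
      checkout_started_at: ctx.checkoutStartedAt,
      replied_at: ctx.repliedAt,
      last_utm_source: ctx.lastUtmSource,
    }).filter(([, value]) => value !== undefined),
  )

  if (Object.keys(properties).length === 0) return

  // Assumption: the update call accepts email as the identifier and the custom
  // property values alongside the built-in fields; verify against the SDK docs.
  await resend.contacts.update({
    audienceId: process.env.RESEND_AUDIENCE_ID!,
    email: ctx.email,
    properties,
  })
}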

One small thing on the URL itself. I'd avoid naming this endpoint after what it does in user-facing terms. /api/waitlist or /api/subscribe tells anyone who opens DevTools exactly what your funnel is doing. Pick something neutral like /api/intake or /api/capture. Descriptive enough for log debugging, generic enough that it doesn't broadcast intent.

So now contacts were arriving in Resend. The next question was where to put everything I knew about them.

Designing contact properties as a lightweight CRM

Resend has a feature called Custom Contact Properties. Each contact in your audience can carry arbitrary structured data (strings, numbers, dates), and both Broadcasts and Automations can read those values for personalization and conditional logic. Used well, you don't need a separate CRM for the first while.

I structure properties into three buckets.

Funnel state

Where in the lifecycle is this contact?

  • funnel_status (string): one of lead, checkout_started, customer, refunded
  • funnel_status_at (ISO timestamp)
  • checkout_started_at (ISO timestamp)

This is the single source of truth for what you should be doing for this person right now. Almost every Automation conditions on it.

Attribution

Where did they come from?

  • first_referrer, first_landing_page, first_visit_at (sticky, set on first visit)
  • first_utm_source, first_utm_medium, etc. (also sticky)
  • last_utm_source, last_utm_medium, etc. (overwrite, set on every UTM-tagged visit)

The first-touch versus last-touch split matters more than I expected. First-touch tells you which channel originally brought them in, useful for marketing budget decisions. Last-touch tells you which specific email or campaign drove their most recent click, useful for attributing checkouts to specific drips.

I capture both client-side in localStorage and ship them with every form submit. First-touch fields are written if-empty so the original value never gets overwritten. Last-touch fields get plain overwrites whenever the URL carries UTM params. Visits without UTMs leave the previous values intact, otherwise a direct visit between two campaigns would erase the original campaign's attribution.
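
The capture utility is small. A sketch, with localStorage key names of my own choosing:

// lib/attribution.ts (client-side sketch)
const FIRST_TOUCH_FLAG = 'first_visit_at'

export function captureAttribution() {
  if (typeof window === 'undefined') return

  const params = new URLSearchParams(window.location.search)

  // First-touch: write-if-empty, so the original values survive every later visit.
  if (!localStorage.getItem(FIRST_TOUCH_FLAG)) {
    localStorage.setItem(FIRST_TOUCH_FLAG, new Date().toISOString())
    localStorage.setItem('first_referrer', document.referrer)
    localStorage.setItem('first_landing_page', window.location.pathname)
    localStorage.setItem('first_utm_source', params.get('utm_source') ?? '')
    localStorage.setItem('first_utm_medium', params.get('utm_medium') ?? '')
  }

  // Last-touch: plain overwrite, but only when the URL actually carries UTMs,
  // so an untagged direct visit doesn't erase the previous campaign.
  if (params.get('utm_source')) {
    localStorage.setItem('last_utm_source', params.get('utm_source') ?? '')
    localStorage.setItem('last_utm_medium', params.get('utm_medium') ?? '')
  }
}

The form submit handler reads these keys back out of localStorage, includes them in the POST body, and the server passes them through to recordContactContext.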

Engagement and lifecycle locks

Bookkeeping for the automations themselves.

  • last_automation (string): name of the currently-running drip workflow, empty when none
  • last_automation_started_at (ISO timestamp)
  • replied_at (ISO timestamp, set the moment a human reply hits the inbound webhook)

That last property is critical. If someone replies to your transactional email, you should never drip-mail them again until you respond. Every Automation gates on replied_at IS_EMPTY as its first condition.

With the property design figured out, I needed to actually create them on the Resend audience. That's where I hit my first real production issue.

Adding new properties without breaking your rate limit

Resend has a hard rate limit of 5 requests per second on the API. Most of your application code never gets near it. But contact properties have to exist on the audience before any contact can carry them, and the natural place to put that creation is right next to where you first use the property. If a property doesn't exist when you try to set it, create it. Right?

Wrong, and the failure mode is brutal. On a serverless cold start, the lazy creation can fire 20 or more create-if-missing calls in the first quarter second. Some succeed. Some get rate-limited and silently fail. Some succeed on retry but in a different order. The end state of your audience drifts depending on which lambda happened to win the race that day.

The fix is to take property creation out of the runtime entirely and turn it into a build-time bootstrap script. Idempotent, so creating a property that already exists is a no-op. Spaced, so 250ms between calls keeps you at 4 req/s, comfortably under the limit. Run it manually when you add new properties, and as part of every deployment.

Here's the script I ended up with, generalized:

// scripts/sync-resend-properties.mjs
//
// Idempotent bootstrap for Resend custom Contact Properties. Run when
// you add or rename a property. Spaces calls at 250ms (4 req/s) to
// stay well under Resend's 5 req/s ceiling.

import { Resend } from 'resend'

const PROPERTIES = [
  { key: 'funnel_status', type: 'string', fallbackValue: '' },
  { key: 'funnel_status_at', type: 'string', fallbackValue: '' },
  { key: 'checkout_started_at', type: 'string', fallbackValue: '' },
  { key: 'first_utm_source', type: 'string', fallbackValue: '' },
  { key: 'first_utm_medium', type: 'string', fallbackValue: '' },
  { key: 'last_utm_source', type: 'string', fallbackValue: '' },
  { key: 'last_utm_medium', type: 'string', fallbackValue: '' },
  { key: 'last_automation', type: 'string', fallbackValue: '' },
  { key: 'replied_at', type: 'string', fallbackValue: '' },
  // ...add yours
]

const sleep = (ms) => new Promise((r) => setTimeout(r, ms))

async function main() {
  const apiKey = process.env.RESEND_API_KEY
  if (!apiKey) {
    console.error('RESEND_API_KEY not set.')
    process.exit(1)
  }

  const resend = new Resend(apiKey)
  let created = 0
  let existed = 0
  let failed = 0

  for (const def of PROPERTIES) {
    const result = await resend.contactProperties.create(def)
    const msg = result.error?.message ?? ''

    if (!result.error) {
      created++
      console.log(`  + ${def.key}`)
    } else if (/already exist|duplicate/i.test(msg)) {
      existed++
      console.log(`  = ${def.key} (already exists)`)
    } else {
      failed++
      console.warn(`  ! ${def.key}: ${msg}`)
    }

    await sleep(250)
  }

  console.log(`\nDone. created=${created} existed=${existed} failed=${failed}`)
  if (failed > 0) process.exit(1)
}

main().catch((err) => {
  console.error('Sync failed:', err)
  process.exit(1)
})

Two small things to flag.

First, the /already exist|duplicate/i check is how you make this idempotent without the SDK giving you a clean "already exists" error code. You parse the message string. Not pretty, but it works.

Second, the 250ms sleep gives you 4 req/s. The 1 req/s of headroom is enough to absorb a single retry on a transient network blip without tripping the limit.

Wiring it into your build

Manual sync is fine, but it's easy to forget. Wire it into your build pipeline:

// package.json
{
  "scripts": {
    "sync:resend-properties": "node --env-file=.env.local scripts/sync-resend-properties.mjs",
    "prebuild": "node scripts/sync-resend-properties.mjs",
    "build": "next build"
  }
}

npm runs the prebuild script automatically before build, so every deployment ensures your Resend audience schema matches your code. If a teammate adds a new property in the script and you pull their branch, your next build syncs it without any extra step.

The split between prebuild and sync:resend-properties is intentional. Local dev uses the npm script, which loads .env.local via the --env-file flag. Production uses the prebuild hook, which inherits environment variables from your hosting platform (Vercel, Render, Fly, and so on) and doesn't need the file.

Once your properties are in place, the next thing you'll want is something to keep your application code from running into the same rate limit.

A reusable rate-limit guard for the rest of your app

The bootstrap script handles property creation. But the moment you write a cron job that loops over contacts, or a batch operation that fires multiple events, or anything else that talks to Resend in a tight loop, you're back in rate-limit territory.

A small wrapper handles this everywhere:

// lib/resendClient.ts
import { Resend } from 'resend'

const RATE_LIMIT_PER_SEC = 4 // 1 req under the 5/s ceiling
const MIN_INTERVAL_MS = Math.ceil(1000 / RATE_LIMIT_PER_SEC)

let lastCallAt = 0

export async function callResend<T>(fn: () => Promise<T>): Promise<T> {
  const now = Date.now()
  const wait = lastCallAt + MIN_INTERVAL_MS - now
  if (wait > 0) await new Promise((r) => setTimeout(r, wait))
  lastCallAt = Date.now()

  try {
    return await fn()
  } catch (err) {
    const msg = err instanceof Error ? err.message : ''
    if (/429|rate limit/i.test(msg)) {
      await new Promise((r) => setTimeout(r, 1000))
      return await fn()
    }
    throw err
  }
}

export const resend = process.env.RESEND_API_KEY
  ? new Resend(process.env.RESEND_API_KEY)
  : null

Every call site uses await callResend(() => resend.contacts.create(...)) instead of calling the SDK directly. The wrapper is a couple dozen lines and prevents an entire category of "works locally, breaks in production under load" bugs.
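
For example, the contact create in the intake route ends up as (the audience ID env var name is my own convention):

if (resend) {
  await callResend(() =>
    resend.contacts.create({
      audienceId: process.env.RESEND_AUDIENCE_ID!,
      email,
      firstName: name,
    }),
  )
}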

In a serverless environment with multiple cold starts in flight, this isn't a perfect rate limiter (each instance has its own counter), but it's enough for the typical SaaS workload where you're talking to Resend a few times per request.

With the boring infrastructure handled, I could finally move on to the part I actually cared about: the lifecycle emails themselves.

Recovering checkouts that never finished

The first lifecycle flow I needed was a recovery drip. When someone starts checkout but doesn't pay, send them a nudge an hour later, another one a day after that, and a final one three days in.

My first instinct, coming from another email tool, was to write a cron job. I did. It ran every 4 hours, listed every contact in the segment, filtered for funnel_status === 'checkout_started', computed the age of checkout_started_at, and decided which step to fire. About 200 lines of code.

It worked. I wasn't happy with it. The 4-hour cron interval meant the +1 hour email might actually go out 5 hours after checkout started. Listing every contact every 4 hours felt wasteful. And the logic for "which step is this contact owed next" was easy to get subtly wrong.

Resend Automations let you replace the cron with an event-driven workflow. Instead of polling for state, your application fires custom events when interesting things happen, and the workflow editor lets you build a tree of conditions, waits, and emails that responds to those events.

The pattern is:

  1. Your application calls resend.events.send({ event: 'checkout.started', email }) at the moment checkout begins.
  2. In the Resend dashboard, you build a workflow triggered by that event. Conditions reference contact properties. Wait-for-Event steps pause the workflow until either a specific event arrives (like checkout.completed) or a timeout expires.
  3. The workflow handles all the timing. Your code just fires events at the right moments (a sketch of that call follows below).
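
On the application side that's only a few lines in the checkout endpoint. A sketch, reusing the callResend wrapper and the recordContactContext helper from earlier; the route path and response shape are illustrative:

// app/api/checkout/route.ts (illustrative excerpt)
import { NextRequest, NextResponse } from 'next/server'
import { callResend, resend } from '@/lib/resendClient'
import { recordContactContext } from '@/lib/recordContactContext'

export async function POST(request: NextRequest) {
  const { email } = await request.json()

  if (resend) {
    // Fires the custom event that triggers the recovery workflow.
    await callResend(() => resend.events.send({ event: 'checkout.started', email }))
  }

  // Keep contact properties in sync so the workflow conditions stay accurate.
  await recordContactContext({
    email,
    funnelStatus: 'checkout_started',
    checkoutStartedAt: new Date().toISOString(),
  })

  // ...create the payment session and return its URL
  return NextResponse.json({ ok: true })
}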

For the recovery flow, the workflow looks like this:

TRIGGER: checkout.started
├─ Condition: replied_at IS_EMPTY → false → END
├─ Condition: funnel_status == 'checkout_started' → false → END
├─ Send Email: recovery_1h
├─ Wait for Event: checkout.completed (timeout 23h)
│  ├─ event_received → END (cancelled, they paid)
│  └─ timeout →
│     ├─ Send Email: recovery_24h
│     └─ Wait for Event: checkout.completed (timeout 48h)
│        ├─ event_received → END
│        └─ timeout → Send Email: recovery_72h → END

Three things to notice. First, no cron. The timing is precise to within a few seconds. Second, the Wait-for-Event step naturally handles cancellation. If someone pays during the wait window, the workflow exits cleanly without you having to track and skip. Third, conditions reference contact properties directly. As long as your application keeps funnel_status accurate, the workflow logic is declarative.

This worked beautifully for the recovery flow. Then I added a second workflow for new-customer onboarding, and discovered a problem I hadn't anticipated.

Stopping two automations from emailing the same person at once

Imagine someone starts checkout (recovery drip starts), pays 30 minutes later (onboarding drip starts), and is now receiving emails from both flows in parallel. Not a great experience.

Resend's workflow editor doesn't have a built-in concept of "this contact is busy, skip them". Each workflow runs independently. So I had to build that concept myself, using a contact property as a per-contact lock.

The pattern is small but has to be applied consistently to every workflow.

  1. The very first condition (alongside the replied_at and funnel_status gates) checks that last_automation is either empty or already equal to this workflow's own name. The "or own name" carve-out lets the workflow re-evaluate during its own run without self-blocking.
  2. Right after the gate conditions pass, the workflow writes its own name to the last_automation property.
  3. Every terminal node in the workflow (happy path, timeout branch, cancelled-via-event branch) clears last_automation back to empty.

In ASCII:

TRIGGER: <event_name>
├─ Condition: replied_at IS_EMPTY  → false → END
├─ Condition: funnel_status NOT IN [past states]  → false → END
├─ Condition: last_automation IS_EMPTY OR == "<self>"  → false → END
├─ Contact Update: last_automation = "<self>"
│
│  ... workflow steps ...
│
└─ Contact Update: last_automation = ""  →  END

The terminal-node cleanup is the part I keep almost forgetting. If a workflow exits without clearing the lock, that contact silently misses every future automation forever. Audit your terminal nodes carefully before you publish a workflow.

There's one more thing every automation gates on that I mentioned earlier: replied_at. The whole point of that property is that the moment someone replies to one of your emails, every running drip pauses. To make that work, I needed a way to detect inbound replies.

Receiving and forwarding email replies

Resend Inbound is a webhook that fires when an email arrives at an address you've configured. Set up MX records on a subdomain (say mail.example.com), point them at Resend, and the webhook starts firing on every inbound message.

I needed two things from this webhook. First, set replied_at on the contact so all my drip workflows pause. Second, forward the reply to my actual human inbox so I never miss it while I'm wiring up more sophisticated reply handling.

I built the handler, deployed, sent a test reply, and got a forwarded email with an empty body.

Here's the gotcha that cost me an evening. The webhook payload does not include the email body. Not the HTML, not the text, not the headers. Only metadata: email_id, from, to, subject, cc, bcc, attachments (just metadata for those too). The docs mention this in passing, but my eyes had glazed over the line and I assumed the standard webhook pattern of "everything in the payload" applied.

The fix is a separate API call. Take the email_id from the webhook, then call resend.emails.receiving.get(emailId). The response includes the full HTML, text, headers, and an optional raw URL for the original MIME.

const fetched = await client.emails.receiving.get(emailId)
const html = fetched.data?.html ?? ''
const text = fetched.data?.text ?? ''

With the body in hand, forwarding to a human inbox is a regular client.emails.send call:

await client.emails.send({
  from: 'noreply@mail.example.com',  // your Resend-verified domain
  to: 'hello@example.com',           // human inbox (different domain!)
  replyTo: originalSender,           // so reply-from-inbox routes back
  subject: `Fwd: ${originalSubject}`,
  html: `<banner>Forwarded from ${originalSender}</banner>${html}`,
  text: `--- Forwarded from ${originalSender} ---\n\n${text}`,
})

Two details to get right.

First, your forward destination needs to be on a different domain (or subdomain without Resend MX records) than the inbound address. If both are on mail.example.com, the forward arrives back at the inbound webhook and you have an infinite loop. I use mail.example.com for inbound and forward to hello@example.com, where the apex domain has Cloudflare Email Routing handling delivery to a Gmail inbox.

Second, set replyTo to the original sender. When a human in your team hits reply in their inbox, the response goes back to the original person, not to your own forwarding address.

Loop guard, just in case

Even with separate domains, add a defensive check. If anything from your inbound subdomain hits the webhook, drop it:

const INBOUND_DOMAIN = 'mail.example.com'
if (fromEmail.endsWith(`@${INBOUND_DOMAIN}`)) {
  return NextResponse.json({ ok: true, ignored: 'self-loop' }, { status: 200 })
}

Handling unknown senders

When someone replies from an email that isn't in your Resend audience (a different alias, or a customer who never went through your funnel), recordContactContext will throw "Contact not found" and events.send will fail similarly. These are expected, not real errors. Catch them and continue:

try {
  await recordContactContext({
    email: fromEmail,
    repliedAt: new Date().toISOString(),
  })
} catch (err) {
  const msg = err instanceof Error ? err.message : String(err)
  if (!/contact not found/i.test(msg)) {
    console.warn('[Inbound] recordContactContext failed:', err)
  }
}

The forward block runs independently, so unknown senders still get their reply forwarded to your human inbox. The Resend-side tracking is a bonus, not a precondition.

There's one more thing to do before this is production-safe: prove the webhook actually came from Resend.

Verifying webhook signatures (and three things people miss)

Resend webhooks are signed with Svix. Without verification, anyone who knows your webhook URL can fire fake email.received events at it, including events claiming to be replies from your own customers.

The verification logic is short:

import { createHmac, timingSafeEqual } from 'node:crypto'

function verifySvix(rawBody, svixId, svixTimestamp, svixSignature, secret) {
  const tsNum = Number(svixTimestamp)
  if (Math.abs(Date.now() / 1000 - tsNum) > 5 * 60) return false  // replay window

  const base64Secret = secret.startsWith('whsec_') ? secret.slice(6) : secret
  const secretBytes = Buffer.from(base64Secret, 'base64')
  const signedPayload = `${svixId}.${svixTimestamp}.${rawBody}`
  const expected = createHmac('sha256', secretBytes)
    .update(signedPayload)
    .digest('base64')

  for (const versioned of svixSignature.split(' ')) {
    const [, sig] = versioned.split(',')
    if (!sig) continue
    const sigBuf = Buffer.from(sig, 'utf8')
    const expectedBuf = Buffer.from(expected, 'utf8')
    if (sigBuf.length !== expectedBuf.length) continue
    if (timingSafeEqual(sigBuf, expectedBuf)) return true
  }
  return false
}

Three things people consistently miss.

The signed string is id.timestamp.body, with literal periods between the parts, not just the body. Forget that prefix and every signature you compute will fail to match, with nothing in the error to tell you why.

The secret needs the whsec_ prefix stripped before you base64-decode it. The prefix is just a tag in the dashboard, not part of the actual secret bytes.

The comparison must use timingSafeEqual, not ===. String equality reveals the position of the first differing byte through CPU timing, and a determined attacker can use that to forge a valid signature byte by byte.

There's also one related framework gotcha. You need to read the body as raw text before parsing it as JSON. If you let your framework parse the JSON for you, the byte representation might differ from what was signed (whitespace, key ordering), and verification will silently fail.

const rawBody = await request.text()
// verify here, against rawBody
const payload = JSON.parse(rawBody)  // only after verification
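
Wiring that into the handler looks something like this. The three header names are the standard Svix ones; the secret env var name is whatever you pick:

const svixId = request.headers.get('svix-id') ?? ''
const svixTimestamp = request.headers.get('svix-timestamp') ?? ''
const svixSignature = request.headers.get('svix-signature') ?? ''

if (!verifySvix(rawBody, svixId, svixTimestamp, svixSignature, process.env.RESEND_WEBHOOK_SECRET ?? '')) {
  // Reject before parsing or logging anything; unsigned payloads get no footprint.
  return NextResponse.json({ error: 'Invalid signature' }, { status: 401 })
}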

What I'd do differently next time

A few takeaways from the whole thing.

Bootstrap properties from day one. Even if you only have two properties, set up the sync script and the prebuild hook. The marginal cost is 20 lines of code and you avoid every "did this property exist in production yet" question forever.

Don't poll if you can subscribe. The recovery cron job worked, but the equivalent Automation is shorter, more precise, and easier to reason about. Default to event-driven from the start.

Treat replied_at as sacred. Set it on every inbound reply and check it as the first condition of every automation. There's no faster way to lose trust than to drip-mail someone who's actively trying to talk to you.

Forward inbound replies to a human inbox. Even if you have grand plans for an LLM-powered reply triage system, get a basic forward in place first. Two API calls, and it saves you from missing the one critical reply you really should have responded to.

Verify webhook signatures before doing anything. Including before logging the payload. A malicious unsigned payload should not even leave a footprint.

The whole setup took maybe a week of focused work to get right. Most of that time was spent learning the gotchas above. Hopefully this saves you some.

About IdeaTwister

The Resend setup above powers the lifecycle emails for IdeaTwister, an AI idea mutation engine that runs locally on your machine. 15 specialised agents apply 15 strategic angles to one raw idea and return 50+ scored variations in about 30 minutes. One-time payment, no subscription.

See pricing - $39 once

Frequently asked questions

Can you use Resend as a CRM?

Yes, for early-stage SaaS, Resend's Custom Contact Properties give you enough structure to track lifecycle state, attribution, and engagement per contact without bolting on a separate CRM. The pattern that works is to organize properties into three buckets: funnel state (where in the lifecycle the contact is), attribution (where they came from), and engagement locks (bookkeeping for your automations). Both Broadcasts and Automations can read these values for personalization and conditional logic. Once you outgrow it, typically when sales cycles get long enough that a human needs to manage relationships actively, graduate to a real CRM. For the first while, Resend handles it.

How do you avoid hitting Resend rate limits?

Resend's API caps at 5 requests per second. Two patterns help. First, never create contact properties lazily at runtime. That pattern can fire 20 or more create-if-missing calls in a quarter second on a serverless cold start and corrupt your audience schema. Move property creation to a one-shot bootstrap script (idempotent, spaced at 250ms between calls) and run it as part of your build pipeline via the npm prebuild hook. Second, wrap every SDK call in a small throttling helper that enforces a per-process minimum interval and retries once on 429 errors. Together these prevent the entire category of "works locally, breaks under load" bugs.

Why is the Resend email.received webhook body empty?

It's by design. The email.received webhook payload only includes metadata: email_id, from, to, subject, cc, bcc, and attachment metadata. The body, headers, and attachment contents are not included. To get them, take the email_id from the webhook and call resend.emails.receiving.get(emailId). The response includes the full HTML, text, headers, and an optional URL to the raw MIME. This is documented in the Resend webhook reference, but it's an easy detail to miss the first time.

How do you stop two Resend Automations from emailing the same person at once?

Resend Automations don't have built-in cross-workflow deduplication. Each workflow runs independently, so two can happily start drips on the same contact in parallel. The fix is to use a contact property as a per-contact lock. Every workflow's first step writes its own name to a "last_automation" property, every workflow's first condition checks that the lock is empty or matches its own name, and every terminal node clears it back to empty. Forgetting to clear the lock means the contact silently misses every future automation, so audit your terminal nodes carefully before publishing.