Key Takeaways
- OpenAI function calling forces the model to return structured JSON instead of free-form text, making it safe to feed directly into downstream API calls without string parsing.
- Zendesk webhooks push ticket data to Make.com the moment a ticket is created, keeping end-to-end routing latency to a few seconds.
- A single PATCH to /api/v2/tickets/{id} can simultaneously set priority, group_id, and tags — one HTTP module handles the entire write-back.
- Prompt engineering for triage should include explicit fallback instructions: when confidence is low, default to a human-review tag rather than silently mis-routing.
- Spam, non-English tickets, and ambiguous cases need dedicated enum values so the model always returns a deterministic, actionable output.
What You Will Build
This guide covers the Zendesk-native workflow using OpenAI function calling — distinct from the Make + Claude approach covered in a separate post. Every routing decision goes through OpenAI's Chat Completions API, and every write-back goes directly to the Zendesk REST API. Data flow: Zendesk webhook → Make.com webhook trigger → HTTP module to OpenAI (function calling) → parse tool_calls response → HTTP module to Zendesk PATCH. Five modules, no code and no servers required.
Steps 1-2: Zendesk Webhook Setup
In Make.com, create a Custom webhook trigger — copy the generated URL. In Zendesk Admin Center, go to Apps and Integrations > Webhooks > Create webhook. Set the endpoint to your Make URL, POST, JSON, no auth. Create a Trigger: condition Ticket is Created, action Notify active webhook with this body:
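A minimal body sketch using standard Zendesk placeholders — ticket_id is the field the Step 6 PATCH maps as {{1.ticket_id}}; the other field names are illustrative choices, not requirements:

```json
{
  "ticket_id": "{{ticket.id}}",
  "subject": "{{ticket.title}}",
  "description": "{{ticket.description}}",
  "requester_email": "{{ticket.requester.email}}"
}
```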
Step 3: The OpenAI Function Definition
Function calling forces the model to populate a schema rather than returning prose. spam and non_english are first-class enum values — not afterthoughts:
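A sketch of the tool definition, using the field names referenced in Step 5 (department, priority, tags, confidence, summary). The department enum values other than spam, non_english, and needs_human_review are placeholders — substitute your own groups:

```json
{
  "type": "function",
  "function": {
    "name": "triage_ticket",
    "description": "Classify a support ticket for routing.",
    "parameters": {
      "type": "object",
      "properties": {
        "department": {
          "type": "string",
          "enum": ["billing", "technical", "sales", "spam", "non_english", "needs_human_review"]
        },
        "priority": { "type": "string", "enum": ["low", "normal", "high", "urgent"] },
        "tags": { "type": "array", "items": { "type": "string" } },
        "confidence": { "type": "number", "description": "Classification confidence from 0 to 1" },
        "summary": { "type": "string", "description": "One-sentence ticket summary" }
      },
      "required": ["department", "priority", "tags", "confidence", "summary"]
    }
  }
}
```

The priority enum mirrors Zendesk's four native priority values, so the model's output can be written back without translation.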
Step 4: OpenAI HTTP Module in Make
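The HTTP module POSTs to https://api.openai.com/v1/chat/completions with an Authorization: Bearer header. A sketch of the request body — the {{1.subject}} and {{1.description}} mappings assume those fields exist in your webhook payload, and tool_choice forces the model to call the function on every ticket:

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "You triage support tickets. If you are not confident, choose needs_human_review." },
    { "role": "user", "content": "Subject: {{1.subject}}\n\nBody: {{1.description}}" }
  ],
  "tools": [ { "...": "triage_ticket definition from Step 3 goes here" } ],
  "tool_choice": { "type": "function", "function": { "name": "triage_ticket" } }
}
```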
Step 5: Parse the Function Call Response
The triage data lives in choices[0].message.tool_calls[0].function.arguments as a JSON string. Add JSON > Parse JSON in Make and map: {{2.choices[].message.tool_calls[].function.arguments}}. After parsing, you have individual fields: department, priority, tags, confidence, summary.
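For reference, the relevant slice of the response — note that arguments arrives as a JSON string, not an object, which is why the Parse JSON module is required (the values shown are illustrative):

```json
{
  "choices": [
    {
      "message": {
        "tool_calls": [
          {
            "function": {
              "name": "triage_ticket",
              "arguments": "{\"department\":\"billing\",\"priority\":\"high\",\"tags\":[\"refund\"],\"confidence\":0.92,\"summary\":\"Customer requests a refund.\"}"
            }
          }
        ]
      }
    }
  ]
}
```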
Step 6: Write Back to Zendesk via PATCH
URL: https://YOUR_SUBDOMAIN.zendesk.com/api/v2/tickets/{{1.ticket_id}}.json — Method: PATCH — Auth: Basic base64(email/token:API_TOKEN)
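A sketch of the request body, assuming module 3 is the Parse JSON module from Step 5. The group_id is a placeholder — look up your real group IDs in Admin Center, and map department → group_id with a router or a switch function in Make:

```json
{
  "ticket": {
    "priority": "{{3.priority}}",
    "group_id": 123456789,
    "tags": "{{3.tags}}"
  }
}
```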
Edge Cases and Error Handling
- Spam: route to the spam group; optionally add a second PATCH to set status to solved.
- Non-English: route to the multilingual group; extend the schema with detected_language (ISO 639-1) for more granular routing.
- Low confidence: when confidence < 0.7, the prompt routes to needs_human_review. If this exceeds 10% of volume, your enum options or prompt need refinement.
- OpenAI errors: wrap the HTTP module in a Make error handler — on error, default to needs_human_review so no ticket goes unrouted.
Cost and Latency
A typical ticket (subject + 200-word body) consumes ~300-500 input tokens and returns ~80-120 output tokens with gpt-4o — approximately $0.002-$0.004 per ticket. At 500 tickets/day the monthly OpenAI cost is under $70. End-to-end latency from ticket creation to Zendesk update is typically 1.5-3 seconds.
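The per-ticket math can be sanity-checked with a short script. The prices below are assumptions for illustration ($2.50 per 1M input tokens, $10 per 1M output tokens for gpt-4o) — verify against OpenAI's current pricing page before budgeting:

```python
# Rough cost estimate for gpt-4o triage.
# Prices are ASSUMPTIONS -- check OpenAI's pricing page for current rates.
INPUT_PRICE_PER_M = 2.50    # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 10.00  # USD per 1M output tokens (assumed)

def ticket_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single triage call."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Worst case from the figures above: 500 input tokens, 120 output tokens.
per_ticket = ticket_cost(500, 120)
monthly = per_ticket * 500 * 30  # 500 tickets/day for 30 days
print(f"${per_ticket:.4f} per ticket, ${monthly:.2f}/month")
```

Even at the upper end of the token ranges, 500 tickets/day stays well under the $70/month figure quoted above.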
Frequently Asked Questions
Can I use this workflow with platforms other than Zendesk?
Yes. While this guide uses Zendesk, the Make.com webhook and OpenAI parsing logic work identically for HubSpot, Intercom, or any CRM that supports outbound webhooks.
Will OpenAI hallucinate departments that don't exist?
Not if you constrain the schema. Define the exact list of allowed departments as an enum in the function definition — the model can then only select from your list, and strict structured outputs (where supported) guarantee schema conformance. A cheap validation check before the PATCH catches the rare malformed response.
Kyto
AI & Automation Firm
We design and build AI automations and business operating systems. Agency results + Academy sovereignty.

