No Code Integration Using Power Automate

No-Code Integration: Use Microsoft Power Automate to call any DocuPipe API endpoint and wire up Outlook-to-DocuPipe flows without writing code.

Overview

DocuPipe exposes a full API so you can orchestrate document flows from any programming language. If you live in the Microsoft ecosystem, Power Automate is the fastest way to wire DocuPipe into Outlook, SharePoint, OneDrive, Teams, or Dataverse without writing code. This guide shows how to connect Power Automate to your DocuPipe account, build an end-to-end Outlook-attachment-to-extraction flow, and avoid the gotchas that trip up most first-time integrators.

Before you start, make sure you already have at least one Schema configured. Follow the Document Extraction Quick Start to define the fields Power Automate will extract on each upload.

📘

Power Automate does not have a native DocuPipe connector. You'll drive DocuPipe through Power Automate's generic HTTP action, which can call any REST endpoint. Every DocuPipe endpoint is reachable this way - the patterns in this guide apply equally to /document, /standardize/batch, /workflow/{workflowId}/run, and anything else in the API reference.

Prerequisites

  1. A Power Automate account with access to the HTTP premium action (or an equivalent licensed environment).
  2. A DocuPipe API key. Retrieve it from Settings → General in the DocuPipe dashboard.
  3. A schema ID for the documents you want to extract. You can copy it from the Schemas page in the DocuPipe dashboard.

Connect Power Automate to DocuPipe

Every DocuPipe call uses the same two pieces of connection configuration:

  • Base URL: https://app.docupipe.ai
  • Auth header: X-API-Key: <your DocuPipe API key>
🚧

DocuPipe uses X-API-Key, not Authorization: Bearer. If you copied an HTTP action template from another service, remove any Authorization header and replace it with X-API-Key. The endpoint will respond with 401 if both are sent.

We recommend saving the API key as an environment variable or secure string in Power Automate (Settings → Environment variables) so it isn't hard-coded in every flow.
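The connection boilerplate is small enough to sanity-check outside Power Automate. A minimal Python sketch (the environment-variable name `DOCUPIPE_API_KEY` is our illustrative choice, not a DocuPipe convention):

```python
import os

# Read the key from an environment variable rather than hard-coding it.
# "DOCUPIPE_API_KEY" is an illustrative name, not a DocuPipe requirement.
api_key = os.environ.get("DOCUPIPE_API_KEY", "<your DocuPipe API key>")

def docupipe_headers(api_key: str) -> dict:
    """Headers for any DocuPipe call: X-API-Key, never Authorization: Bearer."""
    return {
        "Content-Type": "application/json",
        # strip() guards against the stray space/newline that causes 401s
        "X-API-Key": api_key.strip(),
    }
```

The same two headers apply to every endpoint in this guide.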

Example: Outlook attachment → DocuPipe Workflow

This example watches a shared Outlook mailbox and uploads each attachment into a DocuPipe Workflow that parses the file and runs Standardization automatically. The flow has five steps:

  1. Trigger: Outlook 365 → When a new email arrives (V3)
  2. Apply to each attachment in Attachments
  3. Get Attachment (V2) to pull the raw file bytes
  4. Compose action to safely pass the base64 string to the HTTP body (see The Compose-for-base64 pattern below)
  5. HTTP action that POSTs to https://app.docupipe.ai/document with a workflowId
📘

We strongly recommend using a workflow rather than calling POST /document and POST /v3/standardize as separate HTTP actions. A workflow chains parse → standardize (optionally plus classification or splitting) server-side, so Power Automate only makes one API call per document. Fewer moving parts, no need to thread documentId between actions, and no risk of the second action being skipped if the flow errors out mid-way. Create the workflow in the DocuPipe dashboard first - see Workflows Dashboard for the point-and-click builder - then paste its workflowId into the Power Automate body below.

👍

Simpler path: skip Get Attachment (V2) entirely. If you open the trigger's advanced parameters and set Include Attachments: Yes, each attachment's name and contentBytes come through on the loop item directly. Inside Apply to each you can reference them as items('Apply_to_each')?['name'] and items('Apply_to_each')?['contentBytes'] - no separate Get Attachment (V2) action needed. One fewer step in the flow, and everything else in this guide (the Compose pattern, the HTTP body) works the same way - just swap the outputs('Get_Attachment_(V2)')?['body/...'] references for items('Apply_to_each')?['...'].

The Compose-for-base64 pattern

This is the single most common thing that trips people up, so it's worth understanding before you build the flow.

Outlook's Get Attachment (V2) returns contentBytes as a base64-encoded string, which is exactly what DocuPipe expects in document.file.contents. The catch: Power Automate internally types the contentBytes field as binary. How it serializes into your HTTP body depends on how you reference it:

  • Drop the dynamic content inline (@{outputs('Get_Attachment_(V2)')?['body/contentBytes']} inside a JSON string): Power Automate auto-decodes the binary back to raw bytes and writes them as text, so your HTTP body ends up containing the literal PDF bytes (%PDF-1.4...) instead of a base64 string. DocuPipe will reject it with "Could not determine file type from contents".
  • Wrap it in base64(...): Power Automate treats the value as a string (the already-base64 content) and encodes it a second time. Your body now contains JVBERi0x... re-encoded as SlZCRVJp.... DocuPipe decodes once, sees ASCII instead of PDF bytes, and rejects it with the same error.
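Both failure modes are easy to reproduce outside Power Automate. A quick Python sketch of the three serializations (the sample bytes are illustrative):

```python
import base64

raw = b"%PDF-1.4 sample"                  # raw file bytes
content_bytes = base64.b64encode(raw)     # what Get Attachment (V2) returns

# Correct: the base64 string passes through untouched (the Compose pattern).
single = content_bytes.decode("ascii")    # "JVBERi0x..."

# Failure 1: the binary-typed field is auto-decoded back to raw bytes.
decoded = base64.b64decode(content_bytes)  # b"%PDF-1.4..."

# Failure 2: base64(...) re-encodes the already-encoded string.
double = base64.b64encode(content_bytes).decode("ascii")  # "SlZCRVJp..."
```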

The clean fix: route contentBytes through a Compose action. Compose outputs don't carry the binary type tag, so the base64 string passes through to your HTTP body untouched.

  1. Add a Compose action between Get Attachment (V2) and the HTTP action. Name it FileContents.
  2. In Inputs, use a bare reference to the base64 field:
@{outputs('Get_Attachment_(V2)')?['body/contentBytes']}
  3. In the HTTP action body, reference the Compose output as @{outputs('FileContents')}.
📘

Wrapping the reference in a string function inside the Compose (e.g., concat('', outputs('Get_Attachment_(V2)')?['body/contentBytes']) or string(outputs('Get_Attachment_(V2)')?['body/contentBytes'])) is an equivalent pattern and works for the same reason - any string function forces Power Automate to coerce the binary-typed field to a plain string, which is all the Compose is doing on its own. Use whichever the designer lets you paste without auto-rewriting.

👍

Quick verification: after a test run, open the HTTP step in Run History. The contents value should start with JVBERi0xLjQ... (single-encoded base64 of a PDF). If you see %PDF-1.4... it's decoded; if you see SlZCRVJp... it's double-encoded. Either way, revisit the Compose step.
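If you check Run History often, the prefix test is mechanical enough to capture as a tiny helper (a sketch for the PDF case only; other file types have different magic prefixes):

```python
def diagnose_pdf_contents(contents: str) -> str:
    """Classify the 'contents' value shown in HTTP Run History for a PDF upload."""
    if contents.startswith("JVBERi0x"):
        return "ok"              # single-encoded base64, what DocuPipe expects
    if contents.startswith("%PDF"):
        return "decoded"         # raw bytes leaked into the body
    if contents.startswith("SlZCRVJp"):
        return "double-encoded"  # base64 was applied a second time
    return "unknown"
```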

Configure the HTTP action

With the Compose step in place, wire up the HTTP action as follows:

  • Method: POST
  • URI: https://app.docupipe.ai/document
  • Headers:
    • Content-Type: application/json
    • X-API-Key: <your DocuPipe API key>
  • Body:
{
  "dataset": "Inbox Invoices",
  "workflowId": "<your workflow ID>",
  "metadata": {
    "branch": "NY",
    "receivedFrom": "@{triggerOutputs()?['body/from']}"
  },
  "document": {
    "file": {
      "filename": "@{outputs('Get_Attachment_(V2)')?['body/name']}",
      "contents": "@{outputs('FileContents')}"
    }
  }
}

A few notes on that body:

  • workflowId is what chains standardization (and optionally classification/splitting) to the upload. Copy it from the Workflows page in the DocuPipe dashboard after you build your parse → standardize or classify → standardize workflow. Once set, no second HTTP action is needed - DocuPipe runs the whole pipeline and fires a webhook when the structured result is ready.
  • dataset is optional but useful for filtering documents in the dashboard later - set it per flow (e.g., "Inbox Invoices", "AP Archive").
  • metadata is a free-form JSON object up to 10 KB. Use it to pass branch, office, vendor-type, or any other routing hint you want available during extraction (see Using metadata during extraction).
  • filename is optional. If omitted, DocuPipe still detects the file type from the bytes, but setting it improves traceability in the dashboard.
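The same request is easy to replay outside Power Automate when debugging. A Python sketch using only the standard library (the key, workflow ID, and metadata values are placeholders to fill in):

```python
import base64
import json
import urllib.request

API_KEY = "<your DocuPipe API key>"  # placeholder
WORKFLOW_ID = "<your workflow ID>"   # placeholder

def build_upload_body(filename: str, file_bytes: bytes, workflow_id: str,
                      dataset: str = "Inbox Invoices", metadata=None) -> dict:
    # Mirrors the Power Automate HTTP body; 'contents' must be single-encoded base64.
    return {
        "dataset": dataset,
        "workflowId": workflow_id,
        "metadata": metadata or {},
        "document": {
            "file": {
                "filename": filename,
                "contents": base64.b64encode(file_bytes).decode("ascii"),
            }
        },
    }

def upload(body: dict) -> dict:
    req = urllib.request.Request(
        "https://app.docupipe.ai/document",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the file bytes are base64-encoded exactly once here, the resulting `contents` value starts with the expected `JVBERi0x...` prefix for a PDF.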
🚧

If your schema guidelines reference metadata (e.g., "format dates using metadata.dateFormat"), make sure the workflow's standardize step has Use Metadata enabled when you build it in the dashboard. Without that flag, the metadata you attach on upload is stored on the document but ignored during extraction. This is one of the most common "my guidelines aren't being followed" causes, and it's a one-click fix in the workflow editor.

Advanced: skipping the workflow

If you genuinely need to drive parse and standardize as two separate steps (for example, conditional routing logic that lives in Power Automate rather than in a DocuPipe workflow), omit workflowId from the upload body and chain a second HTTP action against POST /v3/standardize with the returned documentId:

  • Method: POST
  • URI: https://app.docupipe.ai/v3/standardize
  • Headers: same as the upload action (Content-Type: application/json, X-API-Key: <your DocuPipe API key>).
  • Body:
{
  "documentId": "@{body('HTTP')?['documentId']}",
  "schemaId": "<your schema ID>",
  "useMetadata": true
}

This path works, but you own the wiring: handle errors between the two actions, thread the documentId correctly, and remember useMetadata: true, since you're calling /v3/standardize directly rather than going through a workflow. /v3/standardize processes one document per call; if you need to standardize many documents against the same schema, loop over them in an Apply to each or (much better) move the logic into a workflow. For almost every real-world use case, a workflow is the better choice.
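For completeness, a Python sketch of the second call in the two-step path (placeholder key; the body mirrors the action above):

```python
import json
import urllib.request

API_KEY = "<your DocuPipe API key>"  # placeholder

def build_standardize_body(document_id: str, schema_id: str) -> dict:
    # useMetadata is easy to forget when bypassing a workflow; set it explicitly.
    return {"documentId": document_id, "schemaId": schema_id, "useMetadata": True}

def standardize(document_id: str, schema_id: str) -> dict:
    """Second call in the two-step path, after POST /document returns documentId."""
    req = urllib.request.Request(
        "https://app.docupipe.ai/v3/standardize",
        data=json.dumps(build_standardize_body(document_id, schema_id)).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```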

Using metadata during extraction

A common pattern for multi-office or multi-branch setups is one schema plus per-flow metadata. For example, if each branch has its own Outlook mailbox, each Power Automate flow can hard-code its own metadata block:

"metadata": {
  "branch": "LHR",
  "dateFormat": "DD/MM/YYYY",
  "baseCurrency": "GBP"
}

Then in your schema guidelines you can write things like "return dates using the format in metadata.dateFormat" or "if the invoice currency is missing, assume metadata.baseCurrency". This keeps one schema in play across every branch instead of maintaining near-duplicates.

If the branches need meaningfully different fields (not just formatting), create one workflow per branch (each bound to its own schema) and hard-code the corresponding workflowId in each Power Automate flow. For a deep dive on multi-schema routing via classification inside a single workflow, see Workflow: Split, Classify, Extract.

Capturing extraction output

You have two ways to get extraction results back into Power Automate after standardization finishes:

  1. Webhooks (recommended). Add a When a HTTP request is received trigger in a separate flow, copy the generated URL, register it in DocuPipe's Webhooks Portal (Settings → Go to Webhooks Portal), and subscribe to standardization.processed.success. Power Automate runs the downstream flow as soon as DocuPipe finishes extraction - no polling.
  2. Polling. Periodically call GET /standardization/{standardizationId} until status is completed. This is simpler to set up but wastes runs and introduces latency. Prefer webhooks whenever possible.
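If you do go the polling route, keep the loop bounded. A Python sketch with the status fetch injected as a callable (the "completed" status value comes from the endpoint described above; the interval and attempt limits are our choices, not DocuPipe defaults):

```python
import time

def poll_until_complete(fetch_status, interval_s=5.0, max_attempts=60):
    """Poll GET /standardization/{standardizationId} until status is 'completed'.

    fetch_status: zero-argument callable returning the parsed JSON response.
    """
    for _ in range(max_attempts):
        result = fetch_status()
        if result.get("status") == "completed":
            return result
        time.sleep(interval_s)
    raise TimeoutError("standardization did not complete within the polling budget")
```

Injecting the fetch as a callable keeps the retry logic testable and separate from the HTTP plumbing.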

Once the result reaches Power Automate, map the structured JSON into any downstream system - SharePoint lists, Dataverse tables, Excel rows, Teams notifications, ERP connectors, or a second HTTP call to your internal API.

Common gotchas

A quick checklist to run through when an HTTP action is misbehaving:

  • "Could not determine file type from contents": your contents value isn't a clean base64 string. Route contentBytes through a Compose action as described above and verify the HTTP Run History shows JVBERi0xLjQ... (for PDFs).
  • 401 Unauthorized: you're using Authorization instead of X-API-Key, or the key has a stray space/newline. Regenerate and paste it directly into the header field.
  • Metadata shows up on the document but the extraction ignores it: the workflow's standardize step doesn't have Use Metadata enabled (toggle it in the workflow editor). If you're calling /v3/standardize directly, you forgot useMetadata: true in the body.
  • Filenames look garbled in the dashboard: you forgot to map body/name into filename. This is cosmetic only; extraction still works.
  • Power Automate designer auto-rewrites your expression: if the designer replaces outputs('FileContents') with something else after you save, open the HTTP action in code view and set the body there directly.

Where to go next

This example covers the most common Power Automate → DocuPipe flow, but the HTTP action gives you access to every DocuPipe endpoint. Extend the blueprint by swapping the Outlook trigger for SharePoint, OneDrive, Teams, or Dataverse, and route the extracted JSON back into your Microsoft stack once the Schema has been applied. For an alternative visual-builder approach using the dedicated DocuPipe app in Make, see No Code Integration Using Make.com.