Last updated: April 21, 2026
An AI task executes a large language model (LLM) prompt during playbook runtime and returns the response for downstream processing. This powers dynamic data structuring, classification, parameter generation, decision-making, and summarization within playbooks.
Configuring an AI Task
- Select an AI provider from the dropdown.
- Define system-level instructions to enforce consistent rules, tone, and constraints.
- Define user prompts to capture and output the target information.
- Select a response format (text or JSON object).
- (Optional) Configure advanced settings:
  - Temperature: Lower values produce more deterministic outputs; higher values increase creativity.
  - Max Output Tokens: Limits the length of the generated response.
  - Timeout: Limits the time allowed for the model to return a response.
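Assembled into a request payload, the settings above might look like the following sketch. All field names, the message schema, and the helper itself are illustrative assumptions, not Morpheus's actual API:

```python
import json

def build_ai_task_request(system_instructions, user_prompt,
                          response_format="text", temperature=0.2,
                          max_output_tokens=512, timeout_seconds=30):
    """Assemble an LLM request from AI task settings.

    Hypothetical schema for illustration only; the real playbook
    engine defines its own field names.
    """
    if response_format not in ("text", "json_object"):
        raise ValueError("response format must be 'text' or 'json_object'")
    return {
        "messages": [
            # System message: consistent rules, tone, and constraints.
            {"role": "system", "content": system_instructions},
            # User message: the prompt that captures the target information.
            {"role": "user", "content": user_prompt},
        ],
        "response_format": response_format,
        "temperature": temperature,              # lower = more deterministic
        "max_output_tokens": max_output_tokens,  # caps response length
        "timeout": timeout_seconds,              # seconds to wait for a reply
    }

request = build_ai_task_request(
    system_instructions="You are a phishing triage assistant. Reply in JSON.",
    user_prompt="Classify this email subject: 'Urgent: verify your account'",
    response_format="json_object",
    temperature=0.0,
)
print(json.dumps(request, indent=2))
```

Keeping the system instructions separate from the user prompt lets the same rules and tone apply across every run of the task while the user prompt varies per alert.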
AI Providers
The Morpheus Built-in AI option requires no connection setup. External LLM providers are also supported; each requires a configured connection with valid API credentials, and you manage your own API keys and usage. Select "Other" in the dropdown to configure a custom external provider.
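The provider distinction above can be sketched as a small resolver: the built-in option needs no connection, while any external choice (including "Other") must supply credentials. The function and its return shape are hypothetical, for illustration only:

```python
def resolve_provider(name, api_key=None):
    """Pick a provider configuration for an AI task (illustrative sketch).

    'Morpheus Built-in AI' needs no connection setup; any external
    provider requires a configured API key.
    """
    if name == "Morpheus Built-in AI":
        return {"provider": name, "requires_connection": False}
    if not api_key:
        raise ValueError(f"provider '{name}' requires a configured API key")
    return {"provider": name, "requires_connection": True, "api_key": api_key}

print(resolve_provider("Morpheus Built-in AI"))
print(resolve_provider("Other", api_key="sk-example"))
```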
Scenario Walkthroughs
Best Practices
- Use the JSON Object response format when downstream tasks depend on structured output.
- Set temperature to 0.0 for decision-making tasks.
- Keep inputs concise and focused.
- Place a conditional task after the AI task before executing actions.
- Define explicit system instructions.
- Design for failure handling.
- Use D3 Morpheus when available.
- Test on a cloned playbook before deployment.
- Apply conservative automation thresholds initially.
- Monitor performance and iterate accordingly.
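Several of these practices combine naturally in the conditional task that gates automated actions: parse the JSON output, validate it, and route anything malformed or below a conservative confidence threshold to manual review. The `verdict`/`confidence` schema here is a hypothetical example, not a Morpheus field layout:

```python
import json

def gate_ai_verdict(raw_response, threshold=0.9):
    """Conditional check on an AI task's JSON output before acting.

    Expects hypothetical fields 'verdict' and 'confidence'. Anything
    malformed or low-confidence is routed to manual review instead of
    an automated action (designing for failure handling).
    """
    try:
        data = json.loads(raw_response)
        verdict = data["verdict"]
        confidence = float(data["confidence"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return "manual_review"      # malformed output never triggers actions
    if verdict == "malicious" and confidence >= threshold:
        return "quarantine"        # high-confidence automated action
    return "manual_review"          # conservative threshold initially

print(gate_ai_verdict('{"verdict": "malicious", "confidence": 0.97}'))  # quarantine
print(gate_ai_verdict("not json"))                                      # manual_review
```

Starting with a high threshold and relaxing it as monitoring builds confidence follows the last two practices above.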