AI Tasks

Last updated: April 21, 2026

An AI task executes a large language model (LLM) prompt during playbook runtime and returns the response for downstream processing. This powers dynamic data structuring, classification, parameter generation, decision-making, and summarization within playbooks.


Configuring an AI Task

  1. Select an AI provider from the dropdown.

  2. Define system-level instructions to enforce consistent rules, tone, and constraints.

  3. Define user prompts that specify the information to capture and how it should be output.

  4. Select a response format (text or JSON object).

  5. (Optional) Configure advanced settings.

    • Temperature: Lower values produce more deterministic outputs. Higher values increase creativity.

    • Max Output Tokens: Limits the length of the generated response.

    • Timeout: Limits the time allowed for the model to return a response.
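The settings above can be sketched as a configuration object. Note that the field names below are illustrative, not the product's actual schema:

```python
# Illustrative sketch of an AI task configuration; field names are
# hypothetical and do not reflect the product's real schema.
ai_task_config = {
    "provider": "OpenAI",                # selected from the provider dropdown
    "system_instructions": (
        "You are a SOC analyst assistant. Respond only with the requested "
        "fields and never speculate beyond the provided evidence."
    ),
    "user_prompt": "Classify this incident: {{alert_summary}}",
    "response_format": "json_object",    # "text" or "json_object"
    # Advanced settings
    "temperature": 0.0,                  # lower = more deterministic output
    "max_output_tokens": 512,            # caps the length of the response
    "timeout_seconds": 30,               # abort if no response in time
}

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems with the configuration, if any."""
    problems = []
    if cfg.get("response_format") not in ("text", "json_object"):
        problems.append("response_format must be 'text' or 'json_object'")
    if not 0.0 <= cfg.get("temperature", 0.0) <= 2.0:
        problems.append("temperature should be between 0.0 and 2.0")
    if cfg.get("max_output_tokens", 1) <= 0:
        problems.append("max_output_tokens must be positive")
    return problems
```

A validation helper like `validate_config` mirrors the checks the configuration form performs before the task can be saved.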


AI Providers

The Morpheus Built-in AI option requires no connection setup. External LLM providers are also supported; they require a configured connection with valid API credentials, and you are responsible for API key and usage management. Select "Other" in the dropdown to configure a custom external provider.

Examples - Supported AI Providers and Models

  Provider                          Example Models
  --------------------------------  ----------------------------------------------------------------------
  OpenAI                            gpt-5, gpt-4o, gpt-4.1, gpt-4.1-mini
  Anthropic                         claude-opus-4-6, claude-sonnet-4-5, claude-3-haiku
  Google                            gemini-2.0-flash
  Microsoft (Azure OpenAI models)   gpt-4, gpt-4o
  xAI                               xai/grok-4-1-fast-reasoning, xai/grok-4-1-fast-non-reasoning, xai/grok-4
  Other                             See providers

Scenario Walkthroughs

Scenario 1 - Automated Phishing Triage

Objective: Automatically classify phishing incidents as Malicious, Suspicious, or Clean using email headers, URLs, and attachment hashes.

The playbook workflow:

  1. Retrieve email headers, URLs, and attachment hashes.

  2. Run the AI task to classify the phishing incident (Malicious, Suspicious, or Clean).

  3. Determine the next action by using a conditional task based on the classification.

  4. Execute the corresponding action to block, escalate, or close the incident.
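The workflow above can be sketched as follows. The classifier is stubbed with a trivial rule; in a real playbook that call is the AI task itself, and the action names are hypothetical:

```python
# Sketch of the phishing-triage flow. `classify_with_ai_task` stands in for
# the AI task; the rule-based stub below is only for illustration.
def classify_with_ai_task(headers: dict, urls: list[str], hashes: list[str]) -> str:
    """Stub: a real playbook would send this evidence to the LLM."""
    if any(u.startswith("http://") for u in urls):
        return "Suspicious"
    return "Clean"

def next_action(classification: str) -> str:
    """Conditional task: map the AI classification to a playbook action."""
    actions = {
        "Malicious": "block_sender_and_quarantine",
        "Suspicious": "escalate_to_analyst",
        "Clean": "close_incident",
    }
    # Fail safe: treat any unexpected label as needing human review.
    return actions.get(classification, "escalate_to_analyst")
```

The fallback in `next_action` reflects the best practice of routing anything outside the expected label set to an analyst rather than acting on it automatically.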

Scenario 2 - Extracting Structured Data from Unstructured Alerts

Objective: Extract query parameters from alert descriptions for use in Microsoft Sentinel searches.

The playbook workflow:

  1. Retrieve free-form alert descriptions and relevant context.

  2. Run the AI task to extract key parameters (IPs, usernames, timestamps, or indicators).

  3. Validate extracted values by using a conditional task.

  4. Execute a Microsoft Sentinel search using the extracted parameters.

Best Practices

  • Use the JSON Object response format when downstream tasks depend on structured output.

  • Set the temperature to 0.0 for decision-making tasks.

  • Keep inputs concise and focused.

  • Place a conditional task between the AI task and any executed actions.

  • Define explicit system instructions.

  • Design for failure handling (timeouts, empty or invalid responses).

  • Use the Morpheus Built-in AI provider when available.

  • Test on a cloned playbook before deployment.

  • Apply conservative automation thresholds initially.

  • Monitor performance and iterate accordingly.
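A conservative automation threshold can be sketched as a simple gate. The confidence score here is a hypothetical field you would ask the model to include in its structured output; it is not something the AI task emits on its own:

```python
def should_auto_act(classification: str, confidence: float,
                    threshold: float = 0.9) -> bool:
    """Only automate when the model's self-reported confidence clears the
    bar; everything else is routed to a human analyst. The confidence field
    is an assumed part of the prompt's requested JSON output."""
    return classification in ("Malicious", "Clean") and confidence >= threshold
```

Starting with a high threshold and lowering it as you build trust in the task's accuracy matches the "monitor and iterate" practice above.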

FAQs

How does the AI Task handle errors and retries?

The AI task automatically retries execution up to two times when transient errors occur.
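The retry behavior resembles the generic pattern below. The backoff delays are illustrative; the source only states that up to two retries occur:

```python
import time

def run_with_retries(call, max_retries: int = 2, base_delay: float = 1.0):
    """Retry a call that may fail transiently, up to `max_retries` times,
    mirroring the AI task's two automatic retries. The exponential backoff
    is an illustrative choice, not documented product behavior."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except (ConnectionError, TimeoutError):
            if attempt == max_retries:
                raise  # transient errors exhausted; surface the failure
            time.sleep(base_delay * (2 ** attempt))
```

After the final retry fails, the error is surfaced so a downstream conditional task can handle it.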

What does this error mean?

Error types include:

  • Connection error: The LLM service could not be reached.

  • Timeout: The LLM did not respond within the allowed time.

  • Invalid response: The output does not match the expected format.

  • Empty response: The LLM returned no content.
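The two payload-level cases (invalid and empty responses) can be detected as sketched below; connection errors and timeouts surface as exceptions before any payload exists, so they are not handled here:

```python
import json

def classify_llm_output(raw: "str | None",
                        expected_format: str = "json_object") -> str:
    """Illustrative mapping of a returned payload onto the 'Empty response'
    and 'Invalid response' error types described above."""
    if raw is None or raw.strip() == "":
        return "Empty response"
    if expected_format == "json_object":
        try:
            json.loads(raw)
        except json.JSONDecodeError:
            return "Invalid response"
    return "OK"
```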