ML Inference

Category: AI/ML · Standards: HIPAA · FDA (for clinical models)

Run custom ML models for prediction and classification

What this node does

Runs a custom ML model against incoming feature data and returns a prediction with a confidence score. Capabilities:

  • Custom models
  • Batch inference
  • Model registry
  • Feature engineering

How to use

  1. In the Agentic Studio, open or create a workflow
  2. In the node palette on the left, find ML Inference under the AI/ML category (or use the search bar)
  3. Drag the node onto the canvas
  4. Double-click the node to open its configuration dialog
  5. Fill in the required parameters (see Configuration below)
  6. Connect an upstream node's output to the Input Features input port
  7. Optionally, connect the Model ID input port
  8. Connect the Prediction and Confidence Score outputs to the next nodes downstream
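The wiring above amounts to: features in, prediction and confidence out. A minimal sketch of that round trip in Python, where the payload shape, the `modelId` field, and the sample response are illustrative assumptions rather than the node's documented wire format:

```python
import json

def build_inference_request(features, model_id=None):
    """Assemble the payload carried by the Input Features port.

    `model_id` mirrors the optional Model ID input port; when omitted,
    the node presumably falls back to the modelId in its configuration.
    """
    payload = {"features": features}
    if model_id is not None:
        payload["modelId"] = model_id
    return payload

def parse_inference_response(raw):
    """Split a raw JSON response into the node's two output ports:
    Prediction (json) and Confidence Score (number)."""
    body = json.loads(raw)
    return body["prediction"], float(body["confidence"])

# Example round trip with a hypothetical sepsis model
req = build_inference_request({"heart_rate": 118, "wbc_count": 14.2},
                              model_id="sepsis-v2")
pred, score = parse_inference_response(
    '{"prediction": {"label": "high_risk"}, "confidence": 0.91}')
```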

Inputs

Port | Type | Required | Description
Input Features | json | Yes | JSON data object
Model ID | text | Optional | Plain text string

Outputs

Port | Type | Description
Prediction | json | JSON data object
Confidence Score | number | Numeric value
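For batch inference (listed under capabilities), each record yields one Prediction/Confidence pair. A hedged sketch of fanning a batch of results out to the two output ports, assuming the node represents each result as a dict with `prediction` and `confidence` keys (an illustrative shape, not a documented one):

```python
def split_outputs(results):
    """Fan a batch of per-record results out into the node's two
    output ports: Prediction (json) and Confidence Score (number)."""
    predictions = [r["prediction"] for r in results]
    confidences = [float(r["confidence"]) for r in results]
    return predictions, confidences
```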

Configuration

Open the configuration dialog by double-clicking the ML Inference node on the canvas.

Parameter | What to enter
modelId | Identifier of the model to run (from the model registry)
endpoint | URL of the inference endpoint serving the model
preprocessing | Feature preprocessing to apply before inference
threshold | Confidence threshold for classification decisions
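To make the four parameters concrete, here is an illustrative configuration and a sketch of how the threshold interacts with the Confidence Score output. Every value below is an assumption for demonstration, not a product default:

```python
# Hypothetical node configuration; field names come from the table above,
# values are illustrative only.
CONFIG = {
    "modelId": "readmission-risk-v3",
    "endpoint": "https://ml.example.internal/predict",
    "preprocessing": ["impute_missing", "standard_scale"],
    "threshold": 0.75,
}

def apply_threshold(confidence, threshold=CONFIG["threshold"]):
    """Treat a prediction as positive only when the Confidence Score
    meets or exceeds the configured threshold."""
    return confidence >= threshold
```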

When to use this node

  • Readmission risk
  • Sepsis prediction
  • Diagnosis classification

Need help configuring this node?

Go to Settings → Connectors to set up the connection this node depends on, then reference the connector ID in the node configuration dialog.