# ML Inference

Category: AI/ML · Standards: HIPAA · FDA (for clinical models)

Runs custom ML models for prediction and classification.
## What this node does

- Runs custom models for prediction and classification
- Supports batch inference
- Loads models by ID from a model registry
- Applies feature engineering before inference
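The batch-inference capability above can be sketched in miniature. This is a minimal illustration only, with a hypothetical `predict` function standing in for the deployed model:

```python
# Minimal batch-inference sketch. `predict` is a stand-in for the real
# model call; feature names and scores are illustrative.
def predict(features):
    """Return a (label, confidence) pair for one record."""
    score = 0.9 if features.get("flag") else 0.2
    return ("positive" if score >= 0.5 else "negative", score)

# Batch inference is just the same call applied to each record.
batch = [{"flag": True}, {"flag": False}, {"flag": True}]
results = [predict(f) for f in batch]
# results → [("positive", 0.9), ("negative", 0.2), ("positive", 0.9)]
```

The node applies the same pattern internally when the Input Features port carries multiple records.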
## How to use
- In the Agentic Studio, open or create a workflow
- In the node palette on the left, find ML Inference under the AI/ML category (or use the search bar)
- Drag the node onto the canvas
- Double-click the node to open its configuration dialog
- Fill in the required parameters (see Configuration below)
- Connect the Input Features input port to the output of an upstream node
- Optionally connect the Model ID port
- Connect the Prediction and Confidence Score outputs to downstream nodes
## Inputs
| Port | Type | Required | Description |
|---|---|---|---|
| Input Features | json | ✓ | JSON object of features for the model |
| Model ID | text | Optional | Plain-text identifier of the model to run |
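As an illustration, the two input ports might carry values like the following. The field names are hypothetical, not a schema this node requires:

```python
# Hypothetical Input Features payload: a flat JSON object of model features.
input_features = {
    "age": 67,
    "heart_rate": 92,
    "wbc_count": 13.2,       # white blood cell count, 10^9 cells/L
    "prior_admissions": 3,
}

# Optional Model ID port value: a plain-text model identifier.
model_id = "readmission-risk-v2"
```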
## Outputs
| Port | Type | Description |
|---|---|---|
| Prediction | json | Model prediction, e.g. class label and class probabilities, as a JSON object |
| Confidence Score | number | Confidence of the prediction, typically in the range 0–1 |
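Downstream nodes receive both outputs together. A sketch of how a downstream step might gate on the confidence score, with illustrative values and names:

```python
# Illustrative outputs from the node.
prediction = {
    "label": "high_risk",
    "probabilities": {"high_risk": 0.87, "low_risk": 0.13},
}
confidence_score = 0.87

def is_actionable(confidence, threshold=0.8):
    """Gate downstream actions on the model's confidence."""
    return confidence >= threshold

actionable = is_actionable(confidence_score)  # 0.87 >= 0.8, so True
```

The node's own `threshold` parameter plays the same role inside the node.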
## Configuration
Open the configuration dialog by double-clicking the ML Inference node on the canvas.
| Parameter | What to enter |
|---|---|
| modelId | Identifier of the model to run, as registered in your model registry |
| endpoint | URL of the inference endpoint that serves the model |
| preprocessing | Preprocessing to apply to Input Features before inference |
| threshold | Minimum confidence score required to accept a prediction |
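A filled-in configuration might look like the following. All values are illustrative placeholders; substitute your own registry IDs and endpoint:

```python
# Illustrative node configuration; values are placeholders, not defaults.
config = {
    "modelId": "sepsis-predictor-v3",            # model registered in your registry
    "endpoint": "https://ml.example.com/infer",  # inference endpoint URL
    "preprocessing": "standard_scaler",          # applied to Input Features first
    "threshold": 0.75,                           # minimum confidence to accept a prediction
}
```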
## When to use this node
- Readmission risk
- Sepsis prediction
- Diagnosis classification
## Need help configuring this node?
Go to Settings → Connectors to set up the connection this node depends on, then reference the connector ID in the node configuration dialog.