RLM Query
Category: AI/ML · Standards: HIPAA (with BAA) · Clinical data processing
Execute Recursive Language Model queries for long-context healthcare data analysis. RLM allows LLMs to programmatically examine, decompose, and recursively call themselves over portions of large inputs.
What this node does
- Long-context processing (500K+ chars)
- Multi-hop reasoning
- Recursive sub-LLM calls
- REPL-based code execution
- A/B testing prompt variants
- Multi-provider support (OpenAI, Anthropic, vLLM)
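The recursive pattern listed above can be sketched roughly as follows. This is a minimal illustration, not the node's actual implementation: `call_llm` is a hypothetical placeholder for a real provider client (OpenAI, Anthropic, vLLM), and the chunking strategy and prompts are assumptions.

```python
# Minimal sketch of the RLM pattern: split an oversized context into
# chunks, answer the query over each chunk with a "sub" model call,
# then aggregate the partial answers with a "root" call.
# NOTE: call_llm is a hypothetical stand-in; replace with a real client.

def chunk(text: str, size: int = 100_000) -> list[str]:
    """Split a long context into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def call_llm(prompt: str) -> str:
    """Placeholder for a provider call (OpenAI/Anthropic/vLLM/local)."""
    return f"[answer for: {prompt[:40]}...]"

def rlm_query(query: str, context: str) -> str:
    # Sub-calls: answer the query against each chunk independently.
    partials = [call_llm(f"{query}\n---\n{part}") for part in chunk(context)]
    # Root call: aggregate the partial answers into a final answer.
    return call_llm(f"Combine these partial answers to '{query}':\n"
                    + "\n".join(partials))
```

In the real node, the recursion can go deeper than one level (a sub-call may itself decompose its chunk), and the root model drives the process through REPL-based code execution rather than a fixed loop.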
How to use
- In the Hydra Builder, open or create a workflow
- In the node palette on the left, find RLM Query under the AI/ML category (or use the search bar)
- Drag the node onto the canvas
- Double-click the node to open its configuration dialog
- Fill in the required parameters (see Configuration below)
- Connect the Query/Question and Context Data (FHIR bundle, clinical notes, HL7, etc.) input ports to upstream nodes
- Optionally connect the Context Type Description port if needed
- Connect the Final Answer, Structured Result, Execution Trace, and Error outputs to downstream nodes as needed
Inputs
| Port | Type | Required | Description |
|---|---|---|---|
| Query/Question | text | ✓ | The question to answer over the context |
| Context Data (FHIR bundle, clinical notes, HL7, etc.) | any | ✓ | The long-context data to analyze; any format |
| Context Type Description | text | Optional | Optional description of what the context data contains |
Outputs
| Port | Type | Description |
|---|---|---|
| Final Answer | text | The final answer to the query |
| Structured Result | json | Structured (machine-readable) form of the result |
| Execution Trace | json | Trace of the recursive calls and REPL steps taken |
| Error | error | Error information if the operation failed |
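For orientation, the two JSON outputs might carry shapes like the sketch below. Every field name and value here is purely illustrative, not the node's documented schema.

```python
# Illustrative shapes for the Structured Result and Execution Trace
# outputs. All field names are hypothetical examples, not a schema.
structured_result = {
    "answer": "Patient has 3 documented encounters in 2023.",  # example
    "sources": ["note_12", "encounter_7"],                     # hypothetical
}

execution_trace = {
    "turns": 4,  # hypothetical: recursive turns used out of max_turns
    "sub_calls": [
        {"model": "sub_model", "chunk": 0, "tokens": 1843},    # hypothetical
    ],
}
```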
Configuration
Open the configuration dialog by double-clicking the RLM Query node on the canvas.
| Parameter | What to enter |
|---|---|
| provider | LLM provider to use: `openai`, `anthropic`, `vllm`, or `local` |
| prompt_type | Prompt strategy: `baseline`, `variant_a` through `variant_f`, or `healthcare` |
| root_model | Model used for the top-level (root) call that orchestrates the query |
| sub_model | Model used for recursive sub-calls over portions of the context |
| max_turns | Maximum number of recursive turns before the query stops |
| temperature | Creativity of the output: 0.0 for deterministic, 1.0 for creative (default: 0.3) |
| enable_healthcare_helpers | Whether to enable healthcare-specific helpers for FHIR, HL7, and clinical notes |
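Taken together, a configuration might look like the sketch below. Only the temperature default (0.3) comes from the table above; the model names and other values are example choices, not recommended settings.

```python
# Illustrative RLM Query configuration. Only temperature's default (0.3)
# is documented; every other value is an example, and the model names
# are hypothetical.
rlm_config = {
    "provider": "anthropic",          # openai | anthropic | vllm | local
    "prompt_type": "healthcare",      # baseline | variant_a..variant_f | healthcare
    "root_model": "example-root-model",   # hypothetical model name
    "sub_model": "example-sub-model",     # hypothetical model name
    "max_turns": 8,                   # example cap on recursive turns
    "temperature": 0.3,               # documented default
    "enable_healthcare_helpers": True,
}
```

A lower temperature (near 0.0) is generally preferable for clinical extraction tasks, where reproducibility matters more than creativity.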
When to use this node
- Multi-hop QA over patient records
- FHIR bundle deep analysis
- Clinical note information extraction
- Cross-document reasoning
- Complex aggregation queries
Need help configuring this node?
Go to Settings → Connectors to set up the connection this node depends on, then reference the connector ID in the node configuration dialog.