
Set up Houdini for cloud rendering


Houdini on our farm is a multi-renderer environment built around the modern USD-first pipeline. Karma XPU is the SideFX-recommended path for new projects on Houdini 20.5 and is the primary CTA on our landing page. Karma CPU and Mantra remain available for legacy work; third-party renderers — Redshift, Arnold, V-Ray for Houdini, Octane — are supported for studios with established pipelines in those engines. This page covers project packaging (which in Houdini means HDAs and simulation caches, not just textures), the Solaris/USD asset workflow, per-renderer notes, the submission flow, and the Houdini-specific troubleshooting we see most often in support tickets.

A note on licensing before we start: we operate Houdini installations on the farm under render-only utilization, which permits running Houdini on render workers for offline rendering of customer projects. Super Renders Farm is not a SideFX partner — render-only utilization is the legal framework that allows farm rendering of Houdini scenes without occupying a SideFX seat from your studio's license pool. You do not transfer your Houdini license to us, and the project tier metadata on the artist side (Indie, Core, FX) does not determine the worker's license arrangement. The worker draws from the farm's licensing on its end.

For high-level positioning — supported Houdini versions, hardware fit, pricing examples — the dedicated Houdini landing page (linked in the cross-references at the end of this page) is the canonical reference. The page you are reading is the workflow doc: what to do in Houdini before clicking Submit.

Supported versions

Houdini 19.5, 20.0, and 20.5 are pre-installed on every worker on the farm. We track SideFX's release schedule and provision new major builds within four weeks of public availability. Point releases (e.g., Houdini 20.5.410 vs. 20.5.487) are tracked continuously; if your project is locked to a specific build number due to a node compatibility issue, mention the build in the job notes at submission and we match the worker to it.

Both Houdini FX (the production tier) and Houdini Indie scene files (.hip and .hipnc) load on the worker and render under the farm's render-only utilization arrangement. The artist-side tier (Indie vs. Core/FX) does not propagate to the worker's license seat — the worker uses whichever tier the farm provisions for rendering. Houdini Apprentice files render but produce watermarked output per SideFX's non-commercial license terms; for paid production work, save the scene from a non-Apprentice license before submission. Education licenses follow the same rule.

A note on Houdini's release rhythm: SideFX ships major versions every 12–18 months and point updates more frequently. Karma XPU in particular has improved substantially between 19.5, 20.0, and 20.5 — features that were CPU-fallback in 19.5 (heavy volumes, certain shader networks) are XPU-native in 20.5. If your project depends on a Karma XPU feature that shipped in a specific build, lock the build in the job notes rather than letting the worker pick the latest available.

Packaging your Houdini project

A Houdini project is more than the .hip (or .hipnc) scene file. It typically also includes: HDAs (Houdini Digital Assets — .hda or the older .otl format), simulation cache files (.bgeo, .bgeo.sc, .vdb, .abc), USD asset layers and references (.usd, .usda, .usdc), texture maps, and any external geometry imported from File or Alembic SOPs. Cloud rendering succeeds when every dependency the scene references is present on the worker; it fails when something resolves locally via a workstation-only path but has nowhere to resolve on the farm.

Houdini's path conventions are built on environment variables — most commonly $HIP (resolves to the directory containing the .hip file), $HIPNAME (the scene file basename), and $JOB (the project root, set via environment variable). For cloud rendering, the reliable convention is $HIP-relative paths everywhere.
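A quick way to confirm how these variables resolve in your current session (a minimal Python-shell check; hou.text.expandString is the stock expansion call):

python
import hou
# Print how Houdini resolves the common path variables in this session.
for var in ('$HIP', '$HIPNAME', '$JOB'):
    print(var, '->', hou.text.expandString(var))

The packaging steps below apply that convention end-to-end: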

  1. Set the project folder. When you save a new project, Houdini sets $HIP to the directory containing the .hip file. Verify in the Python shell with hou.text.expandString('$HIP') — the path should match where your scene file lives.
  2. Use the standard subfolder structure. $HIP/cache/ for simulations, $HIP/geo/ for Alembic and external geometry, $HIP/tex/ for textures, $HIP/hda/ for digital assets, $HIP/usd/ for USD layers and references, $HIP/render/ for output. All paths in your scene's File SOPs, File COPs, ROP outputs, and texture references should use $HIP/... rather than absolute drive-letter paths.
  3. Verify paths resolve. File → Refresh All. Houdini reports any unresolved file references in the Console. From the Python shell, hou.fileReferences() returns the full list of external references — scan for any that start with D:\, Y:\, a UNC share like \\server\, or any path the worker cannot reach (a scripted version of this check appears after the list).
  4. Bake simulations to disk. The farm does not run simulations as part of the render job — simulations are workstation work, and the farm renders against pre-baked cache files. Bake all DOP networks (FLIP fluids, Pyro, Vellum cloth, RBD Bullet, grains), particle solvers, and any other simulation outputs to .bgeo.sc or .vdb files in $HIP/cache/ before submission. The File Cache SOP with "Save to Disk" is the standard workflow.
  5. Embed or include HDAs. If your scene uses custom HDAs from a studio library, either embed them in the .hip (Asset menu → Save Operator Type → "Embedded") or include the .hda/.otl files in $HIP/hda/ so the worker can load them from the project folder. Studio-shared HDA libraries on network drives are not reachable from the worker.
  6. Flatten or bundle USD layers. If your scene uses Solaris/LOPs, either bake the USD stage to a single composed USD file via the USD ROP before submitting, or include the entire $HIP/usd/ directory tree so every layer resolves on the worker. USD asset resolution rules are covered in detail in the next section.
  7. Archive the entire $HIP folder. Use .tar, .tar.gz, or .7z. We do not accept .zip uploads for Houdini projects (Houdini's filename conventions sometimes contain characters that break inside Windows .zip archives on Linux workers).
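To automate the path scan in step 3, here is a minimal Python-shell sketch. It flags raw references written with Windows drive letters or UNC shares; the prefix whitelist ($HIP, $JOB, op:) is an assumption, so extend it to match your pipeline's conventions.

python
import hou
# Flag external references that will not resolve on a farm worker.
# References already written against $HIP/$JOB, or internal op: paths, are fine.
for parm, path in hou.fileReferences():
    if path.startswith(('$HIP', '$JOB', 'op:')):
        continue
    if path.startswith('\\\\') or (len(path) > 1 and path[1] == ':'):
        where = parm.path() if parm is not None else '(scene-level)'
        print('NOT PORTABLE:', where, '->', path)

Anything the script prints needs to be re-pathed into the project folder before you archive.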

A common Houdini-specific pitfall: the "Pre-Render Script" and "Post-Render Script" fields on ROP nodes sometimes reference workstation-specific Python scripts — your studio's pipeline tools, a local Houdini config path, a hou.ui.displayMessage call that opens a dialog the worker has no display for. The cloud render either fails silently or hangs waiting for input that will never arrive. Audit any pre-render Python or HScript callback before submission; disable, or rewrite for portability, any code that touches local-only paths, makes UI calls, or shells out to workstation binaries. Prefer print() logging over interactive callbacks.
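A quick audit sketch from the Python shell, assuming the common script parm names prerender, preframe, postframe, and postrender; some ROP types name these differently, so treat the list as a starting point rather than exhaustive:

python
import hou
# Print every non-empty pre/post render or per-frame script on ROPs under /out.
for rop in hou.node('/out').allSubChildren():
    for name in ('prerender', 'preframe', 'postframe', 'postrender'):
        parm = rop.parm(name)
        if parm is not None and parm.eval().strip():
            print(rop.path(), name, '->', parm.eval())

Anything this prints deserves a second look: if the script touches a local path, a UI call, or a shell-out, disable it or make it portable before submission.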

Solaris, USD layers, and asset resolution

If your scene is authored in Solaris (the /stage LOPs network), the USD asset resolution layer adds an extra dimension to cloud submission that is not present in OBJ/SOP-only scenes. Houdini's USD resolver follows the standard USD asset resolution rules: references in a layer are resolved against the layer's identifier path, search paths configured via houdini.env or the asset resolver plugin, and any composition arcs the stage uses (references, sublayers, payloads).

For cloud submission, two patterns work reliably:

  • Flatten the stage. Use the USD ROP node with "Save to Disk" and the "Flatten Stage" option enabled. The result is a single composed .usd (or .usdc for binary) file that contains the entire stage with all references resolved. This is the simplest pattern — the worker reads one file, no resolver indirection — but you lose the layered structure that makes USD valuable for collaboration.
  • Bundle the full asset tree. Place all USD layers under $HIP/usd/ and use $HIP-relative references in your sublayers, references, and payloads. The worker resolves $HIP to the upload root, so layer files in the same relative position load correctly.

A subtlety: Solaris's "Asset Reference" LOP and the Reference SOP in /obj contexts both serialize the reference path as written. If you wrote D:\studio_assets\char_robot.usd into a Reference LOP, the worker has no D:\ and the reference fails. Re-author the reference as $HIP/usd/char_robot.usd (or ${SRF_ASSETS}/char_robot.usd with a documented environment variable mapping that the farm honors). The simpler the path, the more reliably it travels.
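To audit a stage's external references before upload, the pxr Python modules bundled with Houdini expose a stock dependency walker. A minimal sketch (the file name usd/shot_010.usda is a hypothetical example; UsdUtils.ComputeAllDependencies is the standard Pixar call):

python
from pxr import Sdf, UsdUtils
# Walk every layer, asset, and payload pulled in by the root layer.
# 'unresolved' lists asset paths the resolver could not find locally;
# those will fail on the worker too.
layers, assets, unresolved = UsdUtils.ComputeAllDependencies(
    Sdf.AssetPath('usd/shot_010.usda'))
for layer in layers:
    print('layer:', layer.identifier)
for asset in assets:
    print('asset:', asset)
for missing in unresolved:
    print('UNRESOLVED:', missing)

An empty unresolved list on your workstation is necessary but not sufficient: absolute paths resolve locally and still break on the worker, so scan the printed layer and asset paths for drive letters as well.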

A second subtlety: USD asset libraries can carry their own versioning indirection. A Solaris stage that references USD assets compiled against USD 23.x may not load cleanly on a worker with USD 22.x bundled in an older Houdini build. The Houdini-and-USD version matrix matters — if your asset library was authored for Houdini 20.5's USD version, render on Houdini 20.5 workers.
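To record the exact pairing for your job notes, print both versions from the Python shell (a minimal sketch; Usd.GetVersion is available in recent USD builds and may be absent from older ones):

python
import hou
from pxr import Usd
# Note both versions in the job notes so the farm matches the worker build.
print('Houdini build:', hou.applicationVersionString())
print('Bundled USD  :', Usd.GetVersion())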

HDA management

Houdini Digital Assets (HDAs, sometimes still seen with the older .otl extension) are reusable node networks packaged as standalone asset files. They are common in production pipelines, particularly for procedural assets — buildings, vegetation, crowd systems, custom solvers — that are authored separately from individual shots and shared across scenes.

Three patterns for HDA handling on the farm:

  • Embed the HDA in the .hip file. Asset menu → Operator Type Manager → right-click the HDA → "Save to Embedded." The HDA is now stored inside the .hip and travels with the scene. This is the safest pattern for one-off jobs or for HDAs that you author specifically for a single shot.
  • Bundle HDAs in $HIP/hda/. Place all .hda/.otl files in a subfolder of your project, then in Houdini → Edit → Preferences → File Locations, ensure $HIP/hda/ is part of the OTL search path (alternatively, set HOUDINI_OTLSCAN_PATH to include $HIP/hda/ in your houdini.env). The worker reads HDAs from this location when loading the scene.
  • Reference HDAs from a studio shared library. If your studio uses a shared HDA library on a network drive (e.g., \\studio-fs\houdini\hda\), that library is not accessible from the worker. Either copy the relevant HDAs into $HIP/hda/ before submission, or embed them in the .hip.

Before submission, list the loaded HDAs in the scene from the Python shell:

python
for hda in hou.hda.loadedFiles():
    print(hda)

Every path in the output must either resolve under $HIP or be a stock SideFX HDA shipped with Houdini itself (those are pre-installed on every worker). Any third-party HDA that lives outside $HIP will not be found.
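The same loop extended into a pass/fail check. A sketch under one assumption: stock SideFX assets live under $HFS, the Houdini install directory, which is where shipped HDAs reside.

python
import hou
hip = hou.text.expandString('$HIP')
hfs = hou.text.expandString('$HFS')  # Houdini install dir; stock HDAs live here
for hda in hou.hda.loadedFiles():
    status = 'ok' if hda.startswith((hip, hfs)) else 'NOT PORTABLE'
    print(status, hda)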

Cache file management

Cache files are typically the largest single category in a Houdini project upload — FLIP simulations, Pyro caches, Vellum cloth bakes, Alembic exports, and VDB volumes can each run into tens or hundreds of gigabytes. Two patterns reduce upload time without compromising the render:

  • Compress caches at bake time. .bgeo.sc (compressed bgeo, blosc-compressed) is significantly smaller than .bgeo for the same geometry and is the modern default for File Cache SOPs. For VDB files, the volume is already compressed inside the OpenVDB container, but .tar.gz archives compress the surrounding directory metadata well.
  • Use $HIP/cache/ consistently. Houdini's File Cache SOP defaults to $HIP/cache/{node_name}/$F4.bgeo.sc, which is the right pattern for farm-portable scenes. Avoid absolute cache paths like D:\sim_cache\ — the worker has no D:\ and the render will start, log "cannot find cache file" warnings, and produce empty geometry where the simulation should be.

For very large simulations — multi-terabyte FLIP or Pyro caches that overrun a browser upload — use SFTP rather than the web upload form. The SFTP guide in the cross-references covers the workflow, archive resumability, and the practical thresholds for switching from web upload to SFTP.

A workflow note for studios that cache on the workstation but render on the farm: if your cache directory is on a fast local SSD and your project file uses $HIP/cache/, the cache moves with the project on upload — no manual remapping needed. If your workstation pattern is to cache to a shared network drive and your .hip references that drive directly, you'll need to either copy the caches into $HIP/cache/ and update the File Cache SOP paths, or set a $JOB environment variable on the worker that mirrors the workstation's network share (less reliable; the relative-path approach is preferred).
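A pre-upload sketch that walks every File Cache SOP and flags paths that are absolute or not yet baked. It assumes the node type name filecache and an output parm named file; both vary across File Cache versions, so adjust for your build:

python
import hou, os
hip = hou.text.expandString('$HIP')
cache_type = hou.nodeType(hou.sopNodeTypeCategory(), 'filecache')
for node in (cache_type.instances() if cache_type else ()):
    parm = node.parm('file')
    if parm is None:
        continue
    raw, resolved = parm.rawValue(), parm.eval()
    if not resolved.startswith(hip):
        print('ABSOLUTE PATH:', node.path(), '->', raw)
    elif not os.path.isdir(os.path.dirname(resolved)):
        print('NOT BAKED:', node.path(), '->', resolved)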

What to verify before submission

A short pre-flight checklist for any Houdini submission:

  • Active ROP node is set correctly. Output context → Render. The ROP you select at submission time determines which renderer the worker invokes. Mismatched ROPs (e.g., selecting a Karma ROP for a scene whose lighting was authored for Mantra) are the most common cause of "the render looks completely different" tickets.
  • Frame range matches ROP settings. The frame range stored on the ROP (f1, f2, f3 parameters) is what the worker uses, not the timeline's playback range or the viewport's current frame. Confirm the ROP's frame range is what you intend to render (a scripted check appears after this checklist).
  • Output path uses $HIP-relative tokens. $HIP/render/$F4.exr is the safe default for multilayer EXR with four-digit padding. Avoid absolute drive-letter paths in the ROP output expression.
  • All File SOPs and texture references resolve. File → Refresh All. Fix any "Unable to read" errors in the Console before submission — the worker will report them too, but at the cost of a wasted render frame.
  • HDAs are either embedded or in $HIP/hda/. Verify by closing the scene completely and reopening it from a different Houdini session; if HDAs fail to load locally, the worker will fail to load them too.
  • Caches are baked. Run a manual cache bake on each File Cache SOP via Render → Save to Disk before submission. Don't rely on "Auto-Bake on Frame Change" — bake explicitly and confirm the cache files exist at the expected $HIP/cache/... paths.
  • USD layers (if Solaris) are bundled or flattened. Either include the full $HIP/usd/ tree or write a flattened composed USD via the USD ROP.
  • No interactive pre-render or post-render scripts. Audit ROP Python callbacks and Pre-Frame Scripts for any UI calls, shell-outs, or workstation-specific paths.
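For the first three checklist items, a sketch that dumps each ROP's frame range and output path for a quick visual pass. It assumes the standard f parm tuple; the output parm name varies by ROP type, so the sketch tries a few common ones (vm_picture for Mantra, picture for Karma, sopoutput/lopoutput for geometry and USD ROPs):

python
import hou
OUTPUT_PARMS = ('vm_picture', 'picture', 'sopoutput', 'lopoutput')
for rop in hou.node('/out').children():
    f = rop.parmTuple('f')
    frange = [p.eval() for p in f] if f is not None else 'n/a'
    out = next((rop.parm(n).eval() for n in OUTPUT_PARMS if rop.parm(n)), 'n/a')
    print(rop.path(), 'frames:', frange, 'output:', out)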

Renderer-specific notes

Karma XPU is SideFX's hybrid CPU+GPU renderer, declared production-ready in the Houdini 20 release cycle and the default forward-looking path for new Houdini projects. It is the primary renderer on our Houdini landing page and the path most new clients on the farm adopt.

Configuration notes:

  • Worker tier: Runs on our RTX 5090 GPU worker tier (32 GB VRAM per card) for the GPU portion of the render, with CPU fallback for any feature the XPU code path does not yet support.
  • VRAM constraints: 32 GB VRAM per worker. Karma XPU is more VRAM-efficient than pure GPU renderers because it can offload portions of the render (volumetrics in particular) to CPU memory when VRAM is constrained — but very dense USD scenes with high-resolution volumes still benefit from staying within the 32 GB envelope.
  • USD pipeline integration. Karma is the renderer designed for the Solaris USD-based pipeline. If your project uses /stage (Solaris LOPs context), Karma is the natural renderer choice and the worker resolves USD asset references the same way it resolves File SOP references — $HIP-relative paths win.
  • AOVs. Configured per-render-product in the Render Settings prim on the USD stage. Multichannel EXR is the default output format and is what we recommend for VFX pipelines (preserves all AOVs in a single file per frame).
  • Sampling. Karma's path-tracing samples are configured per Render Settings prim. Calibrate locally on a single frame before submitting a sequence — XPU sample convergence is different from CPU, and the calibration translates directly to the worker.
  • Motion blur. Karma XPU supports geometry motion blur and shutter-window blur. Confirm that your motion-blur shutter setting on the USD camera prim matches what the Render Settings prim expects — Solaris shutter handling and Karma motion-blur sampling do not always agree by default, and the symptom is "the render looks fine but motion blur is missing or doubled."

Karma CPU

Karma CPU is the pure-CPU variant of Karma. It has been feature-complete and stable since Houdini 19 and is the natural fallback for scenes that exceed GPU VRAM or rely on features not yet implemented in the XPU code path.

Configuration notes:

  • Worker tier: CPU worker tier (Dual Intel Xeon E5-2699 V4 nodes, 96–256 GB RAM per node, 20,000+ aggregate CPU cores across the fleet).
  • When to use over Karma XPU: very heavy geometry (>50M polygons), dense volumetric rendering that pushes past 32 GB VRAM, OSL custom shaders that don't yet have an XPU equivalent, or projects mixing CPU-heavy simulation passes in the same submission.
  • Same Solaris/USD integration as Karma XPU. The render product and AOV configuration is identical; only the compute backend differs.

Mantra (legacy)

Mantra is Houdini's pre-Karma renderer — SideFX's micropolygon engine that predates the USD-first pipeline. SideFX has signaled that Mantra is not the forward path; Karma is. Mantra remains in the Houdini build for backward compatibility with projects authored before Karma was viable.

Configuration notes:

  • Worker tier: CPU worker tier.
  • Performance. Mantra is generally slower per-frame than Karma CPU for equivalent scenes and lacks the GPU acceleration path Karma XPU provides. New projects should use Karma.
  • When to use. When your project file is locked to Mantra (a long-running production that started before Karma was viable, a shader library that has not been ported), or when you need a Mantra-specific feature (some Mantra micropolygon edge cases have no exact Karma equivalent yet). For new work in 2026, default to Karma.

Redshift for Houdini

Redshift for Houdini runs on our RTX 5090 GPU worker tier. Redshift is the choice for studios with established Redshift pipelines — often Maya or Cinema 4D shops branching into Houdini for FX who want to share shader libraries across DCCs.

Configuration notes:

  • License framing. Redshift on our farm runs under our license. Redshift is now a Maxon product, and our Maxon partnership covers Redshift across all the DCCs we support (C4D, Houdini, Maya).
  • Out-of-core memory. Enabled by default. Extends effective scene memory beyond the 32 GB VRAM ceiling per worker, important for dense scenes that would otherwise OOM on the GPU.
  • Houdini-specific features. Redshift integrates directly with Houdini's volume primitive types (VDB, Pyro caches) — no special export step needed for volume rendering. The Redshift ROP exposes Houdini-native parameters for ray bias, sampling, and AOV configuration.
  • Version pinning. Redshift releases on its own 3.x cycle, independent of Houdini's release cadence. Major Redshift versions (3.0 → 3.5 → 4.0 when it ships) are not guaranteed backward-compatible — a scene saved with Redshift 3.5.18 may not load cleanly on a worker running Redshift 3.0.x. Note the Redshift build at scene-save time and confirm worker compatibility before submitting a full sequence.

Arnold for Houdini

Arnold for Houdini (sometimes called HtoA, currently on the Arnold 7.x release cycle) runs on our CPU worker tier. It is the choice for studios with shared Maya/Houdini Arnold pipelines, where lookdev is authored in one DCC and FX in the other but the shader and rendering layer is unified.

Configuration notes:

  • License framing. Arnold on our farm runs under Autodesk render-node licensing.
  • AOVs. Arnold's AOV system in Houdini works the same as in Maya. Configure per ROP and write to multichannel EXR or per-AOV files; the cross-DCC pattern is consistent.
  • Version pinning. HtoA versions track Arnold's release cadence (HtoA 6.x for Houdini 19.5/20.0; HtoA 7.x for Houdini 20.5/21.0 when it ships). Major HtoA jumps (6 → 7) should never be assumed compatible — confirm the worker has at least the minor version your scene was saved with.

V-Ray for Houdini

V-Ray for Houdini runs on our CPU worker tier. Adoption in Houdini is markedly lower than in 3ds Max or Maya, but the integration is supported for studios with V-Ray-centric pipelines.

Configuration notes:

  • License framing. V-Ray on our farm runs under our license.
  • VRayProxy assets. Supported. Include .vrmesh files in $HIP/geo/ so the worker can resolve them.
  • Houdini specifics. V-Ray for Houdini's ROP exposes the same render settings as V-Ray in 3ds Max — sampler type, render elements (AOVs), output format. The cross-DCC parameter mapping is documented in Chaos's V-Ray for Houdini reference.

Octane for Houdini

Octane for Houdini runs on our RTX 5090 GPU worker tier. Used primarily by motion-design studios bridging Houdini and Cinema 4D for stylized output.

Configuration notes:

  • License framing. Otoy render-node licensing.
  • VRAM constraints. Same as Octane for C4D — 32 GB VRAM per worker, with a more aggressive memory profile than Redshift (no out-of-core path). Scenes that render comfortably under Redshift's out-of-core fallback may need texture downsampling or geometry decimation to fit Octane's hard 32 GB ceiling.

Submission flow

Three submission channels work for Houdini projects on the farm:

  • Web upload + submit via dashboard. Archive the $HIP folder as .tar.gz or .7z, upload via the website, then configure the job (renderer, ROP node, frame range, output format) and submit. This is the most common path for one-off Houdini jobs and projects under ~50 GB total upload size.
  • SFTP for large projects. Houdini projects with multi-terabyte simulation caches should go via SFTP for resumable transfers. See the SFTP guide in the cross-references for the workflow, credentials, and the threshold for switching from web upload to SFTP.
  • Client App. The Super Renders Farm Client App wraps upload, submit, and auto-download in a desktop application. Useful for studios with recurring submissions where the same project structure is re-rendered with parameter changes. See the Client App guide in the cross-references.

A submission plugin for Houdini that integrates directly with the Houdini UI is in development but not yet pre-installed on all workers. For now, the web dashboard submission flow is the recommended path. The cross-DCC submission walkthrough — what to fill in, how to set frame range, where to find output files — is linked in the cross-references below.

Under the hood, the worker invokes Houdini's batch rendering entry points: hbatch for HIP-file submissions to OBJ/SOP-context ROPs (Mantra, Karma CPU, Redshift, Arnold, V-Ray, Octane), and husk for USD-stage submissions to Karma (CPU or XPU). You generally don't need to know the underlying invocation, but if you're debugging unexpected output naming or frame-range behavior, the ROP-level settings are what the worker passes to hbatch/husk via command-line flags.

Troubleshooting Houdini-specific failures

For general cross-DCC troubleshooting (asset missing, render failed at frame N, common output format issues), see the troubleshooting guide in the cross-references below. Houdini-specific failure patterns we see most often in support tickets:

  • "Unable to find HDA" or HDA fails to load with a stale-node placeholder. The worker cannot locate the HDA file the scene references. Verify HDAs are either embedded in the .hip (Asset menu → Operator Type Manager → "Save to Embedded") or present in $HIP/hda/ with the search path configured. If you reference HDAs from a studio shared library, copy them into the project folder before submitting.
  • Cache file not found at render / empty simulation geometry. Verify the cache files were actually baked to disk before submission — open each File Cache SOP and confirm the "Output" tab shows files at the displayed path. If the path uses an absolute drive letter (D:\sim_cache\flip.0001.bgeo.sc), change to $HIP/cache/{node_name}/$F4.bgeo.sc and re-bake. The 60-second check before upload — File → Refresh All, plus a manual scan of File Cache SOPs — prevents the largest single category of Houdini render failures we see.
  • Render outputs are completely empty / black frames. Check the ROP's "Render Settings" prim (for Karma on a Solaris stage) or the camera reference (for Mantra, Redshift, Arnold, V-Ray, Octane on /obj-context ROPs). The most common cause is a camera that's set in the viewport but not referenced in the ROP — the worker has no viewport, so the ROP-level camera reference is authoritative.
  • Karma XPU fails immediately with "OptiX not available" or "GPU not detected." Rare on our farm because the GPU worker fleet has confirmed CUDA + OptiX driver coverage. If it occurs, the most common cause is a worker mid-update or a driver rollback in progress; re-submit 5–10 minutes later or contact support if the issue persists across multiple submissions.
  • Pre-render Python script fails on worker. Disable the script or make it path-portable. Custom Python that references workstation-specific module paths (your studio's pipeline tools), opens UI dialogs, or shells out to local binaries will not run on a headless Linux worker.
  • Solaris / USD asset references break. USD asset resolver paths need to be $HIP-relative, use a USD resolver context the worker can load, or be flattened into a single composed USD via the USD ROP before submission. Absolute paths in USD reference layers are the most common failure mode here.
  • Plugin version mismatch / scene fails to load. Local plugin version differs from worker — most often HtoA 6 → 7 or Redshift 3.0 → 3.5 jumps. Check the plugin version at scene-save time (visible via the HIP file header text or the plugin's ROP-menu version stamp) and confirm the worker has at least that minor version. Major version jumps should never be assumed compatible.
  • Houdini version mismatch (20.5 scene on 20.0 worker). HIP file format includes version metadata; older Houdini cannot open newer-saved scenes. Confirm the worker has Houdini ≥ the scene-save version, or re-save the scene in the target version if absolutely necessary.
  • OpenCL simulation fails to run on render. OpenCL simulations are bake-time, not render-time. Bake to cache before submission. The farm does not run live OpenCL simulations during rendering — this is by design and applies to FLIP, Pyro, Vellum, and any other OpenCL-accelerated solver.
  • OCIO config drift between submission and worker. If your local OCIO environment variable points at a studio-specific config not present on the worker, colors render under the worker's default config and look different. Bundle the OCIO config file with the project, set OCIO via the submission environment override, or use Houdini's built-in ACES config, which the worker also ships (a quick environment check appears after this list).
  • Pyro/FLIP cache "missing field" warning across Houdini versions. Cache file format occasionally changes across major Houdini versions; an older cache loaded on a newer Houdini sometimes drops fields. Re-cache the simulation in the target Houdini version, or confirm the worker uses the same Houdini build that wrote the cache.
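Following up on the OCIO item above, a quick parity check to run locally before submission. The sketch only inspects the OCIO environment variable, which is the usual drift source; a config loaded through other mechanisms would need a deeper check:

python
import hou, os
hip = hou.text.expandString('$HIP')
config = os.environ.get('OCIO', '')
if not config:
    print('OCIO not set -- Houdini default config in use')
elif config.startswith(hip):
    print('OCIO config travels with the project:', config)
else:
    print('WARNING: config outside $HIP; bundle it or override at submission:', config)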

Cross-references

  • Upload, submit, and download workflow (cross-DCC)
  • How Houdini job costs are calculated (GPU vs. CPU tiers)
  • SFTP guide — archive formats, large-project transfers
  • Cross-DCC troubleshooting
  • Client App — desktop submission wrapper
  • Houdini landing page — renderer matrix, hardware, pricing examples

FAQ

Q: Which Houdini versions does the farm support? A: Houdini 19.5, 20.0, and 20.5 are pre-installed on every worker. We track SideFX's release schedule and provision new major builds within four weeks of public availability. Both Houdini FX (.hip) and Houdini Indie (.hipnc) scene files are supported. Houdini Apprentice files render but produce watermarked output per SideFX's non-commercial license terms — for paid production work, save from a non-Apprentice license before submission.

Q: Do I need to transfer my Houdini license to render on the farm? A: No. We operate Houdini under render-only utilization, which permits running Houdini on render workers for offline rendering without occupying a SideFX seat from your studio's license pool. Your local Houdini license stays with you. Super Renders Farm is not a SideFX partner — render-only utilization is the legal framework that allows farm rendering of Houdini scenes.

Q: Should I use Karma XPU, Karma CPU, or Mantra for a new project? A: Karma XPU for new projects in 2026 — it is SideFX's recommended forward path, runs on our RTX 5090 GPU tier, and is significantly faster than Karma CPU or Mantra for most scenes. Use Karma CPU for scenes that exceed 32 GB VRAM, rely on heavy volumetrics that overflow the GPU, or use OSL custom shaders not yet supported in XPU. Use Mantra only when your project is locked to Mantra (a long-running production started before Karma was viable, or a Mantra-specific shader feature with no Karma equivalent). For new work, default to Karma.

Q: Can the farm run Houdini simulations (Pyro, FLIP, Vellum, RBD)? A: No — simulations are workstation work. Bake all simulation caches to .bgeo.sc or .vdb files locally before submission. The farm renders against pre-baked caches; it does not run live simulation as part of the render job. This is the same pattern as Blender or Maya simulation workflows on most managed cloud farms.

Q: My project uses Houdini's Solaris/USD pipeline. Will it render correctly? A: Yes — Karma (CPU or XPU) is the renderer designed for Solaris; both variants consume the USD stage natively, and the Solaris/Karma integration is the SideFX-recommended forward pipeline. For cloud submission, either flatten the USD stage to a single composed .usd file via the USD ROP, or bundle the full $HIP/usd/ directory tree so every layer resolves on the worker. Third-party renderers (Redshift, Arnold) can also render USD-based scenes if their Houdini integration supports it — verify locally on a single frame before submitting a full sequence.

Q: My scene uses custom HDAs from our studio's shared library. Will the farm find them? A: The farm will not find HDAs on your studio's shared network drive (\\studio-fs\hda\... or equivalent). Either embed the HDAs in the .hip via Asset menu → Operator Type Manager → "Save to Embedded," or copy the .hda/.otl files into $HIP/hda/ before archiving the project. Verify with hou.hda.loadedFiles() from the Python shell that every loaded HDA resolves under $HIP or is a stock SideFX asset.

Q: How do I package a Houdini project that uses simulation caches? A: Bake every simulation to $HIP/cache/{solver_name}/$F4.bgeo.sc (or .vdb for volumes) locally before submission. Verify the cache files exist at the expected paths via File → Refresh All. Archive the entire $HIP folder — including the cache/ subfolder — as .tar.gz or .7z. For caches over a few hundred gigabytes, upload via SFTP rather than the web form. The farm renders against the baked caches; the simulation itself runs on your workstation, not on a worker node.

Q: How large can a Houdini project upload be? A: There is no hard upload size limit, but we recommend keeping a single browser upload under ~50 GB. Above that, switch to SFTP for resumable transfers — see the SFTP guide in the cross-references above. The farm has handled multi-terabyte Houdini sim renders, all uploaded via SFTP with proper directory structure preserved.

Q: My Mantra render is much slower on the farm than locally. Is this expected? A: Mantra's per-frame speed on our CPU worker tier (Dual Xeon E5-2699 V4) is comparable to a high-end workstation. If you're seeing significantly slower per-frame times than local, the most likely causes are: a different sampling configuration on the worker (Mantra reads samples from the ROP — confirm the ROP-level settings match), or local OpenCL acceleration that's not active on the worker's CPU-only tier. The structural fix is migration: Karma XPU on our GPU tier is significantly faster than Mantra for most scenes, and Karma is the path SideFX recommends going forward.

Q: Does the farm support Houdini's PDG/TOPs for distributed task management? A: PDG is workstation-side orchestration; the farm uses its own queue and worker assignment system. You can use PDG locally to author and bake assets, then submit the final ROP jobs to the farm via the web dashboard or Client App. Direct PDG-driven submission to external farms is on our roadmap but not yet a first-class workflow.

Q: How is cost calculated for Houdini cloud rendering? A: Houdini cost on the farm tracks the renderer's worker tier — GPU rates for Karma XPU, Redshift, and Octane; CPU rates for Karma CPU, Mantra, Arnold, and V-Ray for Houdini. The per-frame complexity (sample count, AOV count, output resolution) drives the per-frame cost; the renderer choice drives the per-node-hour rate. Simulations cost extra only if you run them on the farm (we recommend caching locally and uploading the cache). For pricing details, see the pricing reference in the cross-references above, or use our online cost estimator.

---

Last updated: May 13, 2026