Build vs Cloud Render Farm: The Real Cost Breakdown
Introduction
Every year, we talk to studios going through the same decision: build a local render farm or send jobs to the cloud. The conversation usually starts with a spreadsheet comparing hardware prices and per-hour cloud rates. And every year, we see studios get blindsided by the same hidden costs — especially licensing.
We've been operating a production render farm since 2010, supporting V-Ray, Corona, Redshift, Arnold, and every major DCC application. Over 15 years, we've learned that the upfront hardware purchase is the easy part. It's the ongoing costs — render engine licenses, software maintenance, power, cooling, IT time — that quietly eat into the budget and change the math entirely.
This article breaks down the total cost of ownership (TCO) for both paths: building your own render farm and using a cloud render service. We'll use real numbers where we can, flag the costs that most calculators miss, and give you a framework for deciding which path makes sense for your studio.
The Licensing Trap: Where Most TCO Calculations Go Wrong
If you're planning a local render farm, you've probably priced out the hardware already. CPUs, GPUs, RAM, storage — those are the visible costs. But the line item that catches most studios off guard is render engine licensing for every node.
Here's what render node licensing actually looks like in 2026 (prices based on official vendor pricing pages as of Q1 2026 — always verify current rates before budgeting):
| Render Engine | License Type | Annual Cost Per Node | Volume Pricing | Source |
|---|---|---|---|---|
| V-Ray | Render node subscription | $208/yr (single) | $167–$188/yr in 10–20 packs | VRAY.US (authorized Chaos reseller) |
| Corona | Render node subscription | $172/yr (single) | $140–$154/yr in 5–10 node packs | Novedge (authorized reseller) |
| Redshift | Individual or Teams sub | $264/yr (individual) | $299/yr (Teams, min 3 seats) | Maxon pricing |
| Arnold | Single-user subscription | $415/yr | Includes 5 render nodes; additional nodes priced separately | Autodesk |
| Octane | Enterprise / render node | Varies by tier | OctaneRender Enterprise required for farm use | OTOY |
Pricing via authorized resellers as of Q1 2026. Contact vendors directly for current rates — pricing changes periodically.
These costs compound fast. A 10-node GPU farm running Redshift on Teams pricing costs $2,990 per year just in render engine licensing — before you've rendered a single frame.
An important nuance: not all DCC applications charge for render nodes. 3ds Max and Maya allow free command-line rendering on farm nodes without additional licenses (Autodesk documentation). But Cinema 4D requires a Team Render license for each node, and other applications have their own policies. This inconsistency adds complexity — you need to check licensing terms for every piece of software in your pipeline, not just the render engine.
What makes this particularly painful is that licensing terms change. Maxon now requires a minimum of 3 seats for their Teams pricing structure, and studios that previously used individual node-locked licenses have reported being migrated to Teams when exceeding certain seat counts. We've seen studios discover this mid-budget cycle — the email arrives, and the per-seat cost jumps from $264 to $299, a 13% increase that wasn't in the original plan.
This isn't unique to Maxon. Chaos Group, Autodesk, and other vendors have been shifting toward subscription models with render-node tiers that add cost at each scaling step. For a studio adding 3–5 nodes to handle a big project, the incremental licensing cost alone can run $1,500–$3,000 per year.
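To make the scaling concrete, here's a back-of-envelope licensing calculator using the Q1 2026 list prices from the table above. The rates are illustrative, so treat the output as a planning estimate, not a quote:

```python
# Back-of-envelope annual render-node licensing cost. Per-node rates
# are the Q1 2026 list prices from the table above (illustrative --
# always verify current pricing with the vendor).

PER_NODE_RATES = {                 # $/node/year
    "V-Ray (10-20 pack)":    167,
    "Corona (volume pack)":  140,
    "Redshift (Teams seat)": 299,
}

def annual_licensing(rate_per_node: float, nodes: int) -> float:
    """Yearly render-engine licensing for a farm of `nodes` machines."""
    return rate_per_node * nodes

for engine, rate in PER_NODE_RATES.items():
    print(f"{engine:22} x 10 nodes: ${annual_licensing(rate, 10):,.0f}/yr")
# V-Ray (10-20 pack)     x 10 nodes: $1,670/yr
# Corona (volume pack)   x 10 nodes: $1,400/yr
# Redshift (Teams seat)  x 10 nodes: $2,990/yr
```

Run the same loop with your actual node count and quoted rates, and you have a licensing line item you can drop straight into the TCO tables later in this article.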
On a managed cloud render farm, these licenses are typically included in the per-job or per-hour rate. When we price a render job, the V-Ray, Corona, or Redshift licensing is already baked in — the studio doesn't need to purchase, install, or manage render node licenses separately.
Hardware: The Upfront Investment and Its Shelf Life
Let's price out a realistic local render farm for a small-to-mid studio (the 5–15 seat range, which is where most of this decision happens).
Scenario: 10-node CPU render farm for V-Ray/Corona
| Component | Per Node | 10 Nodes |
|---|---|---|
| Dual-socket workstation (Xeon/EPYC) | ~$4,000–$6,000 | $40,000–$60,000 |
| 128 GB RAM | ~$400 | $4,000 |
| 1 TB NVMe storage | ~$100 | $1,000 |
| 10GbE networking (switch + cables) | — | $2,000–$3,000 |
| Rack, UPS, cooling | — | $3,000–$5,000 |
| Total hardware | — | $50,000–$73,000 |
Scenario: 5-node GPU render farm for Redshift/Octane
| Component | Per Node | 5 Nodes |
|---|---|---|
| GPU workstation (RTX 4090 or 5090) | ~$5,000–$9,000 | $25,000–$45,000 |
| 64 GB RAM | ~$200 | $1,000 |
| 1 TB NVMe | ~$100 | $500 |
| 10GbE networking | — | $2,000 |
| Rack, UPS, cooling (GPU = more heat) | — | $4,000–$6,000 |
| Total hardware | — | $32,500–$54,500 |
These numbers look reasonable. But hardware depreciates. A GPU render farm built around RTX 3090s in 2022 is already significantly outperformed by RTX 5090 nodes — approximately 3x in V-Ray and Blender GPU rendering workloads, according to Puget Systems benchmarks. (Redshift and Octane Blackwell support is still maturing as of early 2026, so benchmark parity may vary.) Most studios depreciate render hardware over 3–4 years, which means the annual hardware cost is really $13,000–$24,000 for the CPU farm or $8,000–$18,000 for the GPU farm.
Hardware prices in this article reflect US retail and configurator pricing as of Q1 2026 (sources: Puget Systems, B&H Photo, manufacturer MSRPs). Actual costs vary by vendor, region, and availability.
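To annualize those purchase prices the way the paragraph above does, straight-line depreciation is the simplest model. A minimal sketch, using the ranges from the build tables:

```python
# Annualized hardware cost under straight-line depreciation.
# Purchase ranges are taken from the build tables above.

def annual_hardware_cost(purchase_price: float, years: int) -> float:
    """Straight-line depreciation: spread the purchase over `years`."""
    return purchase_price / years

farms = {
    "10-node CPU farm": (50_000, 73_000),
    "5-node GPU farm":  (32_500, 54_500),
}

for label, (low, high) in farms.items():
    # 4-year cycle on the cheap build, 3-year on the expensive one,
    # bracketing the ranges quoted in the text above.
    print(f"{label}: ${annual_hardware_cost(low, 4):,.0f}"
          f"-${annual_hardware_cost(high, 3):,.0f}/yr")
# 10-node CPU farm: $12,500-$24,333/yr
# 5-node GPU farm: $8,125-$18,167/yr
```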
And here's what the hardware spreadsheet misses: you're locked into that capacity. If you build 10 nodes, you have 10 nodes — whether you need them for 2 hours or 200 hours this month. Cloud rendering scales to zero when you're not using it.
The Hidden Operational Costs
Hardware and licensing get spreadsheet columns. These costs usually don't:
Electricity and cooling. A 10-node CPU farm draws roughly 5–8 kW under render load. At the US average commercial electricity rate of ~$0.14/kWh (EIA, early 2026), running at 50% utilization costs roughly $3,700–$5,900 per year. GPU nodes draw more — the RTX 5090 has a TDP of 575W, so a 5-node setup draws approximately 2.9 kW from GPUs alone, plus ~1.5–2 kW system overhead for a total of 4–5 kW under render load. Cooling adds 30–40% on top in a typical office server room (based on PUE guidelines from the Uptime Institute — purpose-built data centers achieve lower overhead, but most studio server rooms don't).
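For readers who want to check our arithmetic, the sketch below reproduces roughly those figures. The 20% idle-draw assumption is ours (nodes consume power even when not rendering); swap in your own load, utilization, and local kWh rate:

```python
# Yearly electricity cost for a render farm. Assumes idle nodes still
# draw ~20% of full load (our assumption); cooling overhead (30-40% in
# a typical office server room) is applied on top.

HOURS_PER_YEAR = 8760
USD_PER_KWH = 0.14            # US commercial average (EIA, early 2026)

def yearly_power_cost(load_kw: float, utilization: float,
                      idle_fraction: float = 0.20) -> float:
    """Average draw = rendering draw plus idle draw the rest of the time."""
    avg_kw = load_kw * utilization + load_kw * idle_fraction * (1 - utilization)
    return avg_kw * HOURS_PER_YEAR * USD_PER_KWH

for load in (5, 8):           # 10-node CPU farm, low and high estimates
    cost = yearly_power_cost(load, utilization=0.5)
    print(f"{load} kW farm at 50% utilization: ${cost:,.0f}/yr "
          f"(${cost * 1.30:,.0f} with 30% cooling overhead)")
# 5 kW farm at 50% utilization: $3,679/yr ($4,783 with 30% cooling overhead)
# 8 kW farm at 50% utilization: $5,887/yr ($7,653 with 30% cooling overhead)
```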
IT administration time. Someone has to maintain the farm. Software updates, driver patches, license server troubleshooting, render manager configuration, storage management, OS updates that break render plugins at 2 AM — it adds up. Based on conversations with dozens of studios over the years, we've consistently heard estimates of 5–10 hours per week spent on farm maintenance for teams without dedicated render ops. At $50–$80/hr (estimated internal cost of a technical person's time, based on industry averages for render wranglers and pipeline TDs), that's $13,000–$41,600 per year in labor. We use roughly $25,000/year (8 hours/week at $60/hr) in our TCO calculations, as it aligns with what we've seen most often.
Plugin and DCC version management. This one is subtle but real. Every worker node must have the exact same version of your DCC application, render engine, and plugins. Forest Pack 8.1.1 and 8.1.2 behave differently in distributed rendering. V-Ray build 52003 and 52004 can produce different results. When you update one machine, you update all of them — and you test before pushing to production, or you risk failed jobs and wasted render time. On our farm, we have a dedicated pipeline team managing version synchronization across all nodes. Most small studios handle this ad hoc, and the debugging time is significant.
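If you run your own nodes, even a crude automated audit beats discovering version drift through failed frames. Here's a minimal sketch, assuming you already have some way to collect a package manifest from each node (SSH, a render-manager query, or an agent). The node names and 3ds Max version are hypothetical; the V-Ray builds and Forest Pack versions echo the examples above:

```python
# Minimal version-drift audit across render nodes. How the manifests
# are collected (SSH, render-manager API, agent) is pipeline-specific;
# node names and versions below are illustrative.

from collections import Counter

manifests = {
    "node-01": {"3dsmax": "2026.1", "vray": "52004", "forestpack": "8.1.2"},
    "node-02": {"3dsmax": "2026.1", "vray": "52004", "forestpack": "8.1.2"},
    "node-03": {"3dsmax": "2026.1", "vray": "52003", "forestpack": "8.1.1"},
}

def find_drift(manifests: dict[str, dict[str, str]]) -> list[str]:
    """Flag every node whose package version differs from the majority."""
    problems = []
    packages = sorted({pkg for m in manifests.values() for pkg in m})
    for pkg in packages:
        versions = Counter(m.get(pkg, "<missing>") for m in manifests.values())
        if len(versions) > 1:
            majority = versions.most_common(1)[0][0]
            for node, m in sorted(manifests.items()):
                if m.get(pkg, "<missing>") != majority:
                    problems.append(f"{node}: {pkg} {m.get(pkg, '<missing>')} "
                                    f"(majority runs {majority})")
    return problems

for problem in find_drift(manifests):
    print(problem)
# node-03: forestpack 8.1.1 (majority runs 8.1.2)
# node-03: vray 52003 (majority runs 52004)
```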
Storage and network infrastructure. Render jobs need fast access to scene files, textures, and output directories. A render farm on a slow NAS will spend more time waiting for I/O than actually rendering. Proper shared storage (10GbE NAS or distributed storage) adds $3,000–$8,000 upfront and ongoing maintenance cost.
TCO Comparison: A Real-World Example
Let's put it all together for a concrete scenario: a 12-person archviz studio rendering 80–120 hours of V-Ray CPU work per month.
Option A: Build a 10-node local CPU farm
| Cost Category | Annual Cost | Notes |
|---|---|---|
| Hardware (depreciated over 4 years) | $12,500–$18,250 | |
| V-Ray render node licenses (10 nodes) | $1,670–$2,080 | $167–$208/node depending on volume |
| DCC application licenses (render nodes) | $0–$2,500 | $0 for 3ds Max/Maya (free CLI rendering); up to $250/node for C4D Team Render |
| Electricity + cooling (50% utilization) | $4,800–$7,700 | Based on $0.14/kWh US commercial avg (EIA) |
| IT administration (8 hrs/week × $60/hr) | $24,960 | Mid-range estimate; see methodology above |
| Storage infrastructure (amortized) | $1,500–$2,000 | |
| Software updates / troubleshooting | $2,000–$4,000 | Estimated labor |
| Total annual TCO | $47,430–$61,490 | Sum of the ranges above |
TCO range depends heavily on DCC choice (3ds Max/Maya = $0 render node licenses vs Cinema 4D = additional cost) and V-Ray volume pack selection.
Option B: Cloud render farm (fully managed)
Managed cloud render farm rates vary by provider and hardware, but to anchor this comparison in real numbers: CPU rendering on managed farms typically runs $1.50–$6.00 per server-hour based on published pricing from major providers, depending on hardware specs, priority tier, and volume discounts. On our farm, a V-Ray CPU server-hour runs $2.00 (Dual Xeon E5-2699 V4, 64–256 GB RAM), with volume discounts from 5% to 30% — see our pricing page for current rates.
Using our rate as a concrete example:
| Cost Category | Annual Cost | Notes |
|---|---|---|
| Render hours (100 server-hrs/month × $2/hr) | $2,400/yr (list) | $1,680–$2,040/yr with 15–30% volume discount |
| Upload/download bandwidth | $0 | Included |
| Licensing (V-Ray, Corona, Arnold, etc.) | $0 | Included in per-hour rate |
| IT administration | ~$0 | Upload scene, download output |
| Hardware maintenance | $0 | |
| Total annual TCO | $1,680–$2,400 | Depending on volume discount tier |
Other managed farms will produce different numbers — the rates above are from our pricing page as of March 2026. For an apples-to-apples comparison with your preferred provider, multiply their server-hour rate by your estimated monthly usage.
The difference is substantial at this usage level. The local farm costs $47,000–$61,500/year in total ownership for the same 100 server-hours per month that costs $1,680–$2,400/year on a managed cloud farm. Even if you double the cloud rate to account for a more expensive provider, cloud remains a small fraction of the local farm's TCO.
But context matters. If your studio renders 400+ hours per month consistently, the per-hour cloud cost adds up and a local farm may break even. The crossover point depends on your utilization rate, and that's what most studios get wrong: they plan for peak capacity but actually use 25–40% of it over a full year. The sketch below shows how to locate your own crossover point.
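The break-even math is simple enough to script. This sketch compares the fixed annual TCO of the Option A farm against cloud spend that scales linearly with volume; the $55,000 fixed cost (roughly the Option A midpoint) and the $2.00 list rate are this article's example numbers, so substitute your own. Watch the units: cloud farms bill per server-hour, so 400 wall-clock hours on a 10-node farm is 4,000 server-hours.

```python
# Break-even sketch: fixed-cost local farm vs pay-per-hour cloud.
# LOCAL_TCO is roughly the midpoint of the Option A range above;
# CLOUD_RATE is the $2.00/server-hour list price from Option B.

LOCAL_TCO = 55_000            # $/yr, roughly fixed regardless of usage
CLOUD_RATE = 2.00             # $/server-hour, list

def cloud_annual(server_hours_per_month: float, discount: float = 0.0) -> float:
    """Annual cloud spend at a given monthly volume and discount tier."""
    return server_hours_per_month * 12 * CLOUD_RATE * (1 - discount)

break_even = LOCAL_TCO / (12 * CLOUD_RATE)
print(f"Break-even: {break_even:,.0f} server-hours/month")
# Break-even: 2,292 server-hours/month

for hrs in (100, 1_000, 2_500):
    print(f"{hrs:>6,} server-hrs/mo: cloud ${cloud_annual(hrs):>8,.0f}/yr "
          f"vs local ${LOCAL_TCO:,}/yr")
#    100 server-hrs/mo: cloud $   2,400/yr vs local $55,000/yr
#  1,000 server-hrs/mo: cloud $  24,000/yr vs local $55,000/yr
#  2,500 server-hrs/mo: cloud $  60,000/yr vs local $55,000/yr
```

On a 10-node farm, 2,292 server-hours a month is about 230 wall-clock hours, roughly 31% utilization. Volume discounts push the break-even point higher; pricier cloud hardware tiers pull it lower.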
The Utilization Problem: Why Local Farms Cost More Than Expected
Here's the math most studios don't do: what percentage of the time is your farm actually rendering?
We've seen this pattern repeatedly. A studio builds a 10-node farm for a big project deadline. The farm runs at 90% utilization for 3 weeks. Then the project ships, and those 10 nodes sit idle for 2–3 months while the team does design development and client revisions. Annual utilization ends up at 25–40%.
At 30% utilization, those 10 nodes are effectively 3 nodes — but you're paying electricity, licensing, cooling, and maintenance for all 10 year-round. The annual TCO per useful render hour skyrockets.
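In cost-per-hour terms, the penalty is easy to quantify. A minimal sketch using the ~$55,000 example TCO from the Option A comparison above:

```python
# Effective cost per useful server-hour as utilization falls.
# ANNUAL_TCO is the example Option A midpoint from earlier in the article.

ANNUAL_TCO = 55_000
NODES = 10
HOURS_PER_YEAR = 8760

def cost_per_useful_hour(utilization: float) -> float:
    """Total annual TCO divided by the server-hours actually rendered."""
    return ANNUAL_TCO / (NODES * HOURS_PER_YEAR * utilization)

for u in (0.90, 0.40, 0.30):
    print(f"{u:.0%} utilization: ${cost_per_useful_hour(u):.2f}/server-hour")
# 90% utilization: $0.70/server-hour
# 40% utilization: $1.57/server-hour
# 30% utilization: $2.09/server-hour
```

At 30% utilization, the farm's effective rate has already crossed the $2.00/server-hour cloud list price from Option B, before any volume discount.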
Cloud rendering eliminates this problem entirely. You pay for the hours you use. A 200-hour month costs more; a 10-hour month costs almost nothing. There's no idle capacity burning money.
This is especially relevant for studios with seasonal workloads — archviz firms that spike before property launches, VFX studios that scale for episodic deadlines, motion design shops with campaign cycles. If your render demand varies by more than 3x between your busiest and slowest months, a local farm will have poor utilization economics.
When Building Your Own Farm Actually Makes Sense
We're not going to pretend cloud is always the right answer. Local farms make genuine economic sense in specific scenarios:
Consistent, high-volume rendering. If your studio renders 400+ hours per month, every month, with minimal seasonal variation — a local farm's fixed costs spread across enough utilization to compete with cloud per-hour rates. Large VFX houses and animation studios often hit this threshold.
Data security or compliance requirements. Some clients (government, defense, medical) require rendering on infrastructure you physically control. No cloud farm can satisfy an air-gapped data policy. If your contracts mandate on-premises processing, that's a real constraint that justifies the TCO premium.
Real-time iteration needs. If your workflow requires constant interactive rendering, submitting to a cloud farm and waiting for results introduces latency that kills creative iteration. Local GPU nodes for IPR and lookdev, combined with cloud for final frame rendering, is a common hybrid approach.
Existing IT infrastructure. If your studio already has a server room, IT staff, networking, and cooling — the marginal cost of adding render nodes is much lower. The TCO calculation changes significantly when you're not building infrastructure from scratch.
Decision Framework: Which Path Fits Your Studio?
Rather than a simple "build vs. buy" answer, here's a framework based on what we've seen work across studios of different sizes:
| Factor | Favors Local Farm | Favors Cloud Rendering |
|---|---|---|
| Monthly render hours | >400 hrs consistently | <200 hrs or highly variable |
| Team size | 20+ with dedicated IT | 3–15 without dedicated IT |
| Utilization pattern | Steady, predictable | Seasonal, project-based |
| Data requirements | Air-gapped / on-prem mandated | Standard commercial NDA sufficient |
| Capital budget | Large upfront available | Prefer OpEx over CapEx |
| Hardware refresh cycle | Willing to refresh every 3–4 years | Want access to current-gen hardware always |
| Plugin/version management | Have pipeline TD on staff | No dedicated pipeline person |
For the studios in the middle — rendering 150–300 hours monthly with some seasonal variation — a hybrid approach often works well. Keep a few local nodes for interactive work and quick test renders, then burst to a cloud render farm for final production frames.
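For readers who want the table as a starting rule of thumb, here's the first-order logic in code form. It's deliberately crude (the thresholds are this article's rules of thumb, and real decisions need your own TCO numbers), but it captures the order in which the factors dominate:

```python
# First-order distillation of the decision table above. Thresholds are
# this article's rules of thumb, not universal constants.

def recommend(monthly_render_hours: float,
              peak_to_trough_ratio: float,
              has_dedicated_it: bool,
              requires_on_prem: bool) -> str:
    """monthly_render_hours = wall-clock farm hours, not server-hours."""
    if requires_on_prem:
        return "local farm: compliance mandates it"
    if (monthly_render_hours > 400 and peak_to_trough_ratio < 3
            and has_dedicated_it):
        return "local farm can compete on TCO"
    if monthly_render_hours < 200 or peak_to_trough_ratio >= 3:
        return "cloud rendering"
    return "hybrid: local nodes for iteration, cloud burst for finals"

print(recommend(250, peak_to_trough_ratio=2,
                has_dedicated_it=False, requires_on_prem=False))
# hybrid: local nodes for iteration, cloud burst for finals
```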
What "Fully Managed" Means for TCO
Not all cloud rendering is equal in terms of TCO impact. There's a significant difference between IaaS GPU rental (where you still manage software, licenses, and configuration) and a fully managed render farm where those operational costs are eliminated.
On an IaaS platform, you get raw compute — but you still need to install your DCC software, configure your render engine, purchase and manage floating licenses, handle plugin versioning, and troubleshoot render failures yourself. The licensing trap applies just as much to cloud VMs as it does to local nodes.
On a fully managed farm, the operational layer is handled by the provider. As an official Chaos and Maxon partner, we maintain current licenses for V-Ray, Corona, Redshift, Arnold, and all supported DCC applications across our fleet. When a studio submits a job, they're not thinking about whether their Redshift license covers render nodes — that's already included.
The TCO difference between IaaS and fully managed is meaningful. Based on studios we've worked with who switched from self-managed cloud VMs, the most common savings come not from a lower per-hour rate but from eliminating infrastructure management time — which, as we discussed above, can run $13,000–$41,600 per year in labor for a small studio. The actual savings percentage varies widely depending on how much time a studio was spending on ops, but for teams without dedicated render pipeline staff, it's often the largest single cost reduction in the switch.
Summary: The Numbers That Matter
If you're evaluating whether to build a render farm or use cloud rendering, here's what to put in your spreadsheet beyond hardware costs:
| Often Missed Cost | Typical Annual Range (10-node farm) | Applies To |
|---|---|---|
| Render engine licenses | $1,670–$2,990 (V-Ray/Redshift, volume pricing) | Local + IaaS cloud |
| DCC application licenses (render nodes) | $0–$2,500 (varies by software — 3ds Max/Maya = $0; C4D = additional) | Local + IaaS cloud |
| Electricity + cooling | $4,800–$10,000 | Local only |
| IT administration labor | $13,000–$41,600 (we use ~$25K as a mid-range estimate) | Local + IaaS cloud |
| Plugin version management | $2,000–$5,000 (labor) | Local + IaaS cloud |
| Idle capacity cost | 60–75% of hardware investment wasted at typical 25–40% utilization | Local only |
| Hardware depreciation | ~25–33% per year (3–4 year cycle) | Local only |
The total cost of building and operating a 10-node render farm is typically $47,000–$61,500 per year when all costs are included — with IT labor and hardware depreciation as the two largest line items, not licensing as most studios expect.
For a more detailed look at cloud rendering pricing models, see our render farm pricing guide for 2026.
FAQ
Q: How much does it cost to build a 10-node render farm? A: Hardware alone runs $50,000–$73,000 for CPU nodes or $32,500–$54,500 for GPU nodes. But the true annual cost — including render engine licensing, electricity, cooling, and IT labor — typically reaches $47,000–$61,500 per year when all costs are included and hardware is depreciated over 3–4 years.
Q: Do I need separate render engine licenses for each node in my farm? A: For render engines — yes. V-Ray, Corona, Redshift, Arnold, and most commercial renderers require a per-node license for farm use. For DCC applications, it depends: 3ds Max and Maya allow free command-line rendering on farm nodes, while Cinema 4D requires Team Render licenses. Always check current terms per vendor. Managed render farms typically include all licensing in their pricing.
Q: What is the biggest hidden cost of running a local render farm? A: IT administration time. Studios without dedicated render ops staff typically spend 5–10 hours per week maintaining a render farm — software updates, driver patches, license troubleshooting, and version management. At $50–$80/hr, this costs $13,000–$41,600 annually.
Q: At what render volume does building a local farm make sense over cloud rendering? A: Generally above 400 consistent render hours per month with steady utilization. Below 200 hours monthly, cloud rendering is almost always more cost-effective. Between 200 and 400 hours, it depends on your utilization pattern and whether you have existing IT infrastructure.
Q: Does Maxon's Teams pricing affect render farm licensing costs? A: Yes. Maxon's Teams pricing requires a minimum of 3 seats and costs $299/yr per seat versus $264/yr for individual subscriptions. Studios that previously held multiple individual Redshift licenses have reported being migrated to Teams plans, resulting in a 13% per-seat increase. This particularly impacts small studios (5–15 seats) scaling their render capacity.
Q: What's the difference between IaaS cloud rendering and a managed render farm for TCO? A: On IaaS (like renting GPU VMs), you still pay for render engine licenses, manage software installation, and handle troubleshooting yourself. A fully managed render farm includes licensing, software configuration, and technical support in the per-hour rate — eliminating the operational overhead (IT labor, license management, version control) that can represent a significant portion of total rendering cost for studios without dedicated pipeline staff.
Q: How does hardware depreciation affect render farm TCO? A: GPU hardware typically depreciates over 3–4 years, with significant performance gaps emerging within 2 generations. An RTX 3090 farm built in 2022 delivers roughly one-third the throughput of current RTX 5090 nodes in V-Ray and Blender GPU workloads (per Puget Systems benchmarks). This means your effective cost per render hour increases as hardware ages, even though your cash outlay stays the same.
Q: Can I use a hybrid approach — local farm plus cloud rendering? A: Many studios find this approach works well. Keep a few local nodes for interactive rendering, lookdev, and quick test frames, then burst to a cloud render farm for final production rendering. This minimizes idle local capacity while keeping iteration fast. For archviz and VFX studios with variable workloads, hybrid often delivers better TCO than committing fully to either path.
About Alice Harper
Blender and V-Ray specialist. Passionate about optimizing render workflows, sharing tips, and educating the 3D community to achieve photorealistic results faster.



