PC Render Farm


Paul Bruening


If you've got Excel, here's a breakdown for a 4-unit PC render farm that I crunched from current TigerDirect prices and inventory. The idea is to run a hotrod workstation and dump the hard work onto the render farm. You can run Adobe's free render engine on the nodes to crunch all your really big AE files. The same approach goes for Maya.

 

Enjoy.

Paul, if you wanna e-mail it to me, I can stick it on my website and provide a link in 'bout 3 hours (when I get home).

 

You're not going to believe this, but Yahoo is goofing up on my computer and I can't get into my mail. Give me a day or so and maybe we can try your suggestion. Thanks for the help.


Interesting. I would go with rack-mount cases. Normal hardware will fit into 3U cases; 1U or 2U cases are obviously more space-efficient, but fitting hardware into them is a little tricky.

 

Also, depending on the type of material you are rendering, I don't really see the need for so much system storage. In fact, I don't really see the need to even separate your OS drives from your cache drives in your render nodes.

 

On our render farm, we are using the "dumbest" computers possible for our nodes (except for a good processor and enough RAM). We feed the nodes from a fast RAID, because reading and writing from multiple nodes at once is pretty intensive. This RAID could live on a beefier computer or, more traditionally, on a SAN.
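The "dumb node fed from shared storage" idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual farm software: the paths, the job-file convention, and the render callback are all made up, and the only real trick is using an atomic rename on the shared filesystem so two nodes can't claim the same job.

```python
# Minimal sketch of a "dumb" render node: each node polls a job folder on
# the shared RAID and claims one job at a time. Paths and the render
# command are hypothetical; the atomic-rename claim is the only real idea.
import os

def claim_next_job(jobs_dir, claimed_dir):
    """Move the first available job file into claimed_dir and return its
    new path, or None if no unclaimed jobs remain."""
    for name in sorted(os.listdir(jobs_dir)):
        src = os.path.join(jobs_dir, name)
        dst = os.path.join(claimed_dir, name)
        try:
            os.rename(src, dst)  # atomic on one filesystem: acts as a lock
            return dst
        except OSError:
            continue             # a sibling node claimed it first
    return None

def node_loop(jobs_dir, claimed_dir, render):
    """Keep claiming and rendering until the job folder is empty."""
    while True:
        job = claim_next_job(jobs_dir, claimed_dir)
        if job is None:
            break
        render(job)              # e.g. shell out to aerender or a Maya batch render
```

Each node runs the same loop, so adding capacity is just plugging in another box pointed at the same share.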

 

A problem you start to run into as the farm grows, and one we are dealing with right now, is that you max out the performance of gigabit Ethernet and need to move up to a more expensive interconnect (such as Fibre Channel).

 

Kevin Zanit

Link to comment
Share on other sites


 

As you mentioned, gigabit LAN is a noticeable bottleneck. In some cases it's faster to physically pull drives from the render units and move them to the workstation, and vice versa. That's why I keep my OS drives separate. That, and the fact that you may do work for others and need to ship drives off.

 

This is not supposed to be the best design. It's mostly about the cheapest route to a home editing system that can handle larger files.


No doubt, and that's why it's important to design your system around the type of work you are doing. If you are rendering single large video clips, then the storage on each node becomes more of a factor: with large clips it is a lot harder to get good network and single-drive performance than with sequential image frames.

 

In practice, most low- to mid-range 7200 rpm SATA drives are good for 75-100 megabytes per second, and gigabit tops out at 125 megabytes per second, so you're right: the bottleneck when dealing with large video clips is pretty serious.
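The 125 number comes straight from the unit conversion, and it's worth seeing how few nodes it takes to fill the pipe. A quick back-of-envelope using the drive rates quoted above (raw line rate, ignoring protocol overhead):

```python
# Gigabit Ethernet is 1000 Mbit/s of raw line rate; 8 bits per byte
# gives 125 MB/s before any protocol overhead.
def link_mb_per_s(gbits):
    return gbits * 1000 / 8

# How many nodes reading at a typical SATA rate does it take to fill it?
drive_mb_s = 75                                # low end of the 75-100 MB/s range
nodes_to_fill = link_mb_per_s(1) / drive_mb_s
print(f"{link_mb_per_s(1):.0f} MB/s link, filled by {nodes_to_fill:.1f} nodes")
# → 125 MB/s link, filled by 1.7 nodes
```

So under two busy nodes can already saturate gigabit, which is why farms of any size end up on a faster interconnect.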

 

The nice thing about single video clips is that they move from drive to drive a lot faster than sequential images. For example, with a 5-drive array running RAID 5 turbo over 2 Gbit Fibre Channel, moving tons of sequential images we get around 100 megabytes/sec (which is really slow), but moving fewer large video clips we get around 230 megabytes/sec (approaching our 2 Gbit Fibre limit). The speeds are about the same on NTFS and XFS file systems.
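Those two observed rates make the per-file overhead concrete. A rough calculation (the 50 GB of footage is just an example amount, not from the post):

```python
# Same data, two observed transfer rates from the array described above.
def transfer_seconds(total_mb, rate_mb_s):
    return total_mb / rate_mb_s

total_mb = 50_000                            # ~50 GB of footage, example figure
seq_s  = transfer_seconds(total_mb, 100)     # sequential frames: ~100 MB/s
clip_s = transfer_seconds(total_mb, 230)     # one large clip:    ~230 MB/s
print(f"frames: {seq_s:.0f} s, clip: {clip_s:.0f} s")
# → frames: 500 s, clip: 217 s
```

Same bytes, well over twice the wall-clock time when they arrive as thousands of small frames instead of one file.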

 

I guess the point is, with all these systems, lots of experimenting is needed to get the best performance, though it can be fun!

 

Kevin Zanit
