I would like to share with you a tool I have been developing lately – Houdini 2 VR.
Currently Houdini doesn’t have a tool for previewing (stereo/mono) VR renders in a VR headset. All renderers support this output format, but the preview process is a bit of a bottleneck: you usually need to leave the DCC application, load the render in another app (e.g. Nuke) and judge the visual quality there. This adds time to each iteration and makes previewing cumbersome.
So I tried to simplify this process. Sending pictures to HMDs can get very technical and low-level, so instead I took a high-level approach: Python + WebVR. Or more precisely, Python on the Houdini side and WebVR on the web browser side. Using a web browser means that I still have to leave Houdini to preview the VR render, but with Python I tried to make the hand-off as automatic as possible.
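To give an idea of the mechanics, here is a minimal sketch of the Houdini side only (this is not the actual Houdini 2 VR code; it assumes a Python 3.7+ build of Houdini, and the output folder, port and file name are made up). The idea is simply: grab the current render, write it to disk, and serve it over HTTP so a WebVR page in the browser can fetch and display it.

```python
# Minimal sketch of the Houdini side (not the actual Houdini 2 VR code).
# Assumes Houdini with Python 3.7+; the folder, port and file name below are
# illustrative only. A WebVR page in the browser would then load the saved
# image over HTTP (for HDR you could serve an EXR and decode it in JS).
import functools
import http.server
import os
import socketserver
import threading

import hou  # only available inside a Houdini session

PREVIEW_DIR = "/tmp/vr_preview"   # hypothetical folder the browser will read from
PORT = 8000                       # hypothetical port for the WebVR page to query

def save_current_render(path):
    """Write the image currently shown in the IPR render view to disk."""
    viewer = hou.ui.paneTabOfType(hou.paneTabType.IPRViewer)
    if viewer is not None:
        viewer.saveFrameToDisk(path)

def serve_previews():
    """Serve the preview folder so the browser can fetch the image."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=PREVIEW_DIR)
    with socketserver.TCPServer(("", PORT), handler) as httpd:
        httpd.serve_forever()

os.makedirs(PREVIEW_DIR, exist_ok=True)
save_current_render(os.path.join(PREVIEW_DIR, "latest.png"))
threading.Thread(target=serve_previews, daemon=True).start()
```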
I will go briefly through the process of setting up Houdini on a headless Linux server.
In our project we had access to Nvidia VCA hardware, which was running CentOS 7.3 and did not have an X server.
This hardware has some decent computing power and we wanted to offload some of our rendering onto it. We were rendering with the Redshift renderer, which scaled pretty well across multiple GPUs, and we also just fit within Redshift’s maximum of 8 GPUs.
To be able to run more general-purpose jobs (simulations, caching) on the VCA and to simplify the submission process, we decided to set up both Houdini and Redshift on it.
In this post I will show you how to execute a Houdini (or any other) job remotely on a Windows machine. The remote machine in our case did not have a GPU, and my goal was to make the process automatic, so that the job could be started from the command line.
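As an illustration, a headless Houdini job run this way might boil down to a small hython script like the one below (a minimal sketch only; the hip file path and ROP node path are hypothetical examples, and here I use a Geometry ROP since the machine has no GPU).

```python
# cache_job.py -- a minimal sketch of a headless Houdini job, launched with:
#   hython cache_job.py
# The hip file path and ROP node path are hypothetical examples.
import hou

hou.hipFile.load("/path/to/scene.hip")          # open the scene without the GUI
rop = hou.node("/out/geometry1")                # the cache/sim node to execute
rop.render(frame_range=(1, 100), verbose=True)  # run the job for the given frame range
```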
Style transfer can easily produce quite interesting results, unlike the filters we already know from e.g. Adobe Photoshop. While style transfer might be cumbersome to set up and control, it is definitely an interesting image processing technique to experiment with, and it can produce novel looks.
In the next couple of posts I will be describing the photogrammetry workflow which I am creating for a student production. I am planning to describe the whole process here, from processing pictures to exporting game-ready assets. Game-ready, because our production requires real-time rendering, but the process can easily be altered to produce VFX-ready assets (although that will probably require more manual work, e.g. retopology and UVs, to meet stricter quality requirements).
I have a tight schedule for this project, so I will try to keep things simple and efficient, while staying as correct as possible.
In this post I will describe our photo processing setup and color workflow. The result will be photos with a mask in the alpha channel, converted to a linear color space, ACES in our case. After that the photos should be ready for the photogrammetry software, but that is for another post 🙂
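To make the goal concrete, a conversion of that kind could be sketched with OpenImageIO’s Python bindings roughly as below (this is only an illustration, not necessarily the exact tools in our setup; the file names and the OCIO color space names "sRGB" and "ACEScg" are assumptions that depend on your own config).

```python
# Rough sketch: convert a photo to linear ACES and put its mask into the alpha channel.
# Assumes OpenImageIO's Python bindings and an ACES OCIO config; the file names and
# color space names ("sRGB", "ACEScg") are examples, not a prescribed setup.
import OpenImageIO as oiio

photo = oiio.ImageBuf("photo_0001.jpg")   # camera JPEG (sRGB encoded)
mask  = oiio.ImageBuf("mask_0001.png")    # single-channel mask created elsewhere

linear = oiio.ImageBufAlgo.colorconvert(photo, "sRGB", "ACEScg")  # to a linear ACES working space
rgba   = oiio.ImageBufAlgo.channel_append(linear, mask)           # append the mask as a 4th channel

rgba.write("photo_0001.exr")  # EXR keeps the data linear and the mask intact
```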