I would like to share the first test of a fun project I recently started working on in my spare time – Open3D to Houdini integration.
I built Houdini wrappers around Open3D functionality, which enable me to load point clouds, pre-process them (downsampling, normal estimation, FPFH feature computation) and perform global and ICP registration to align them tightly:
I translate geometry bidirectionally between Houdini and Open3D, so Open3D operations can be seamlessly combined with Houdini SOPs. This doesn't seem to be the case with the Reality Capture or Alice Vision plugins, which don't allow Houdini to modify the geometry.
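To make the round trip concrete, here is a minimal sketch of what such a wrapper could look like inside a Python SOP. This is my illustration rather than the actual wrapper code: the two-input node layout, the parameter values and the recent Open3D pipelines API (0.12+ style) are all assumptions.

```python
# Sketch of a Houdini <-> Open3D round trip inside a Python SOP.
# Assumes a recent Open3D (o3d.pipelines API); all parameters are placeholders.
import hou
import numpy as np
import open3d as o3d

def to_o3d(geo):
    # Read the flat P attribute array from Houdini and wrap it as a point cloud.
    pts = np.array(geo.pointFloatAttribValues("P")).reshape(-1, 3)
    return o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))

def preprocess(pcd, voxel):
    # Downsample, estimate normals and compute FPFH features for matching.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

node = hou.pwd()
geo = node.geometry()                     # source cloud, input 0
target_geo = node.inputs()[1].geometry()  # target cloud, input 1

voxel = 0.05
source, source_fpfh = preprocess(to_o3d(geo), voxel)
target, target_fpfh = preprocess(to_o3d(target_geo), voxel)

# Global registration: RANSAC over FPFH correspondences gives a rough alignment.
ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source, target, source_fpfh, target_fpfh, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local registration: point-to-plane ICP refines it into a tight fit.
icp = o3d.pipelines.registration.registration_icp(
    source, target, voxel * 0.4, ransac.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

# Apply the result to the full-resolution cloud and write it back to Houdini.
full = to_o3d(geo)
full.transform(icp.transformation)
geo.setPointFloatAttribValues("P", np.asarray(full.points).ravel().tolist())
```

The nice property of this pattern is that the Open3D calls sit between a plain attribute read and a plain attribute write, so everything upstream and downstream stays ordinary Houdini SOP geometry.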
The integration is at an early stage and has some issues. So far I have tested it only on Linux. I will open-source it after I fix the problems and write some instructions on setting it up.
I would like to share my graduation project, VFX Fractal Toolkit (VFT). I finished my TD studies at Filmakademie last month and had a chance to present the results at the last FMX. You can find out more about the VFT here, where you can also check out the code.
Hello, this post will introduce a project I have been developing with friends for some time: Megascans to Houdini integration, or megaH. This will be a high-level overview of the tools and workflows we have developed so far.
MegaH is currently a work in progress and is being used in two student productions. It is not yet finished or ready for release, and the two versions are customized to the specific pipelines and needs of their respective productions.
It is developed and tested on Linux and Windows, with the Mantra and Redshift renderers.
Also note that while we built it around the Megascans library, the project could be ported to other libraries, for example a VFX studio's internal asset/setup libraries.
This is the second article describing the photogrammetry workflow I developed for a student production. You can find the first article here.
In this article I will go over our attempts at reconstructing photos in various applications and our semi-automatic, Houdini-based post-processing workflow. (The Houdini asset can be downloaded at the bottom of the page.)
In this post I will show a simple technique for FLIP and RBD interaction, like in this example:
I decided to give it a try after talking to the great FX TD Adam Guzowski. The task was to build a setup where water would fracture RBD objects and break RBD constraints, with the RBD objects pre-fractured based on the expected water flow.
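To illustrate the constraint-breaking part of the idea (this is my own naive sketch, not the actual setup from the post): inside a SOP Solver running over the glue constraint network, you can delete constraint primitives whose anchors get close to fast-moving FLIP particles. The two-input layout, attribute names and thresholds below are assumptions, and in production you would do this in a VEX wrangle for speed; Python just keeps the examples on this page in one language.

```python
# Python SOP sketch inside a SOP Solver running over a glue constraint network.
# Input 0: constraint prims, input 1: FLIP particles. Values are placeholders.
import hou

node = hou.pwd()
geo = node.geometry()                  # constraint network
fluid = node.inputs()[1].geometry()    # FLIP particles

radius = 0.2       # how close water must get to a constraint to affect it
min_speed = 2.0    # minimum particle speed that counts as an impact

# Positions of fast fluid particles (brute force; a spatial lookup would be faster).
fast = [pt.position() for pt in fluid.points()
        if hou.Vector3(pt.attribValue("v")).length() > min_speed]

broken = []
for prim in geo.prims():
    # Each glue constraint is a two-point polyline; test its midpoint.
    pts = [v.point() for v in prim.vertices()]
    mid = (pts[0].position() + pts[1].position()) * 0.5
    if any((p - mid).length() < radius for p in fast):
        broken.append(prim)

# Removing the primitives removes the corresponding glue bonds,
# so the pieces separate on the next simulation step.
geo.deletePrims(broken)
```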
I would like to share with you a tool I have been developing lately – Houdini 2 VR.
Currently Houdini doesn't have a tool for previewing (stereo/mono) VR renders in a VR headset. All renderers support this output format, but the preview process is a bit of a bottleneck: you usually need to leave the DCC application, load the render in another app, e.g. Nuke, and judge the visual quality there. This adds time to each iteration and makes previewing cumbersome.
So I tried to simplify this process. Sending pictures to HMDs can get very technical and low-level, so I took a high-level approach instead: Python + WebVR. More precisely, Python on the Houdini side and WebVR on the web browser side. Using a web browser means I still have to leave Houdini to preview the VR render, but with Python I tried to make the process as automatic as possible.
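The Houdini half can be as simple as Python's built-in HTTP server. Below is a stripped-down sketch of that idea, not the tool's actual code: it assumes a Python 3 environment (in a Python 2 Houdini build, SimpleHTTPServer would be the rough equivalent), that the render has already been written to a lat-long image in a known directory, and that the WebVR page (not shown) fetches that image as a texture.

```python
# Minimal sketch: expose the rendered lat-long image over HTTP so a WebVR
# page in the browser can fetch it as a texture. Path and port are placeholders.
import threading
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

RENDER_DIR = "/tmp/vr_preview"   # assumed location of the rendered image

class Handler(SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Serve files from the render directory instead of the current directory.
        super().__init__(*args, directory=RENDER_DIR, **kwargs)

def serve(port=8000):
    server = ThreadingHTTPServer(("", port), Handler)
    # Run in a background thread so the Houdini UI stays responsive.
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = serve()
# The WebVR page would then load e.g. http://localhost:8000/render.jpg
```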
I will briefly go through the process of setting up Houdini on a headless Linux server.
In our project we had access to Nvidia VCA hardware, which was running CentOS 7.3 and did not have an X server.
This hardware has some decent computing power, and we wanted to offload some of our rendering onto it. We were rendering with the Redshift renderer, which scales pretty well across multiple GPUs, and we also just fit within Redshift's limit of 8 GPUs.
To be able to run more general-purpose jobs (simulations, caching) on the VCA and to simplify the submission process, we decided to set up both Houdini and Redshift on it.
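Once both are installed, jobs can be driven entirely from the command line with hython, Houdini's bundled Python interpreter, so no X server is needed. A minimal sketch, with the install path, scene path and ROP name as placeholders:

```python
# render_job.py
# Run headlessly with: hython render_job.py
# (hython becomes available after sourcing houdini_setup in the install
# directory, e.g. cd /opt/hfs16.5 && source houdini_setup)
import hou

hou.hipFile.load("/path/to/shot.hip")

# Any ROP works the same way: a Redshift ROP for rendering, or a
# geometry ROP for the simulation and caching jobs mentioned above.
rop = hou.node("/out/Redshift_ROP1")
rop.render(frame_range=(1, 100))
```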