VFT | OSL shaders in Blender

For my graduation project, VFX Fractal Toolkit (VFT), I developed a couple of Open Shading Language (OSL) shaders for rendering volumetric fractals. With them I produced the following animations.

As a renderer I used Arnold and was pretty satisfied with the workflow. However, I was wondering how difficult it would be to use the same OSL shaders in another renderer, e.g. Blender’s Cycles, especially since I was trying to mimic OpenCL’s syntax in OSL (not the best idea, but it helped me port the OpenCL shaders I had built first), included a couple of header files, and so on.

As it turned out, it was a pretty quick process and everything worked out of the box, requiring only minimal changes. The only thing I had to change was to properly set up multiple node outputs. Arnold doesn’t support multiple outputs on an OSL node, so I worked around it by outputting a 4×4 matrix with the values encoded in it and extracting them afterwards. Cycles supports multiple outputs natively, which makes the Blender shaders nicer and more readable; you can check the diff here.
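To make the difference concrete, here is a minimal OSL sketch of both approaches. The shader and parameter names are hypothetical (this is not the actual VFT code), and the two shaders would normally live in separate .osl files:

```osl
// Arnold-era workaround: a single matrix output with several values
// encoded in it, to be decoded by a downstream node.
shader fractal_packed(
    output matrix out_packed = matrix(0)
)
{
    float dist = 0.5;                   // e.g. a distance estimate
    color col = color(1.0, 0.5, 0.25);  // e.g. a sampled color

    // Pack everything into one 4x4 matrix; the consumer must know the layout.
    out_packed = matrix(dist, col[0], col[1], col[2],
                        0, 0, 0, 0,
                        0, 0, 0, 0,
                        0, 0, 0, 0);
}

// Cycles version: multiple outputs are declared directly, no packing needed.
shader fractal_multi(
    output float out_dist = 0,
    output color out_color = 0
)
{
    out_dist = 0.5;
    out_color = color(1.0, 0.5, 0.25);
}
```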

The shading setup in Blender looks pretty similar to the one used in Arnold and produces similar results.

[Screenshot: the shading node setup in Blender]

For testing I used the Blender 2.80 beta running in a Docker container.

Keep in mind that rendering such volumes is pretty slow, and neither Arnold nor Cycles supports OSL on the GPU. I included the changes in the blender branch of the repository (not for all OSL shaders yet, but you get the idea :)).

Web experiments

Recently I got interested in web technologies, and I was surprised to learn how much is possible in a web browser. I found out about interesting new APIs and a huge ecosystem of libraries, many of which come from areas of my interest.

So I decided to learn some basic JavaScript and build small experiments in my spare time. For now I am collecting them in one repository; you can fork the repo and host them locally, or visit them here.

Some of the benefits I like are portability and cross-platform compatibility. Most of the tests run fine in the major web browsers on all operating systems. They also run fine on mobile devices, and some of them, like the AR tests, are even intended for mobile. I found it to be a really convenient (and free) platform for building prototypes and tests.

Here I will describe some of the projects.

Continue reading “Web experiments”

megaH | Megascans to Houdini integration

Hello, this post will introduce a project I have been developing with friends for some time: Megascans to Houdini integration, or megaH. This will be a high-level overview of the tools and workflows we have developed so far.

This article is written by the authors of this tool: me, Peter Trandžík, and Ondrej Poláček.

megaH is currently a work in progress and is being used in two student productions; it is not yet finished or ready for release. The two versions are customized to the specific pipelines and needs of the corresponding productions.

It is developed and tested on Linux and Windows, with the Mantra and Redshift renderers.

Also note that while we used the Megascans library, this project could be adapted to work with other libraries, for example VFX studios’ internal asset/setup libraries.

Continue reading “megaH | Megascans to Houdini integration”

Photogrammetry 2 | 3D reconstruction and post processing

This is the second article describing our photogrammetry workflow which I developed for a student production. You can find the first article here.

In this article I will go over our attempts at reconstructing the photos in various applications, as well as our semi-automatic post-processing workflow based on Houdini. (The Houdini asset can be downloaded at the bottom of the page.)

Continue reading “Photogrammetry 2 | 3D reconstruction and post processing”

Houdini 2 VR

Hello,
I would like to share with you a tool I have been developing lately – Houdini 2 VR.

Currently Houdini doesn’t have a tool for previewing (stereo/mono) VR renders in a VR headset. All renderers support this output format, but the preview process is a bit of a bottleneck: you usually need to leave the DCC application, load the render in another app, e.g. Nuke, and judge the visual quality there. This adds time to each iteration and makes the preview process cumbersome.

So I tried to simplify this process. Sending pictures to HMDs can get very technical and low-level, but I took a high-level approach instead: Python + WebVR. More precisely, Python on the Houdini side and WebVR on the web browser side. Using a web browser means I still have to leave Houdini to preview the VR render, but with Python I tried to make the process as automatic as possible.
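The Houdini side of this idea can be sketched in a few lines of Python. The snippet below is a simplified illustration with assumed node paths and a plain HTTP server, not the actual Houdini 2 VR code (and it uses Python 3 module names for brevity):

```python
import os
import threading
import http.server
import socketserver

import hou  # only available inside Houdini's Python environment

def render_and_get_image(rop_path="/out/mantra1"):
    """Render the ROP and return its output image path
    (vm_picture is Mantra's output picture parameter)."""
    rop = hou.node(rop_path)
    rop.render()
    return rop.parm("vm_picture").evalAsString()

def serve_image_dir(directory, port=8000):
    """Serve the render directory in a background thread so Houdini's
    UI stays responsive while the browser fetches the image."""
    os.chdir(directory)
    httpd = socketserver.TCPServer(
        ("", port), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

image_path = render_and_get_image()
serve_image_dir(os.path.dirname(image_path))
# A WebVR page in the browser would then fetch the image from
# http://<host>:8000/ and map it onto a sphere for the headset.
```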

Check this video to see how my tool works:

Continue reading “Houdini 2 VR”

Setting up Houdini on a headless Linux server

I will briefly go through the process of setting up Houdini on a headless Linux server.

In our project we had access to Nvidia VCA hardware, which was running CentOS 7.3 and did not have an X server.

This hardware has some decent computing power in it, and we wanted to offload some of our rendering onto this computer. We were rendering with Redshift, which scales pretty well across multiple GPUs, and we also just fit within Redshift’s limit of a maximum of 8 GPUs.

To be able to run more general-purpose jobs (simulations, caching) on the VCA and to simplify the submission process, we decided to set up both Houdini and Redshift on it.
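Once Houdini was installed, such jobs could be driven with hython, Houdini’s command-line Python interpreter, without any GUI. A minimal sketch with hypothetical scene and node paths, run as `hython render_job.py`:

```python
import hou  # hython provides this module out of the box

# Load the scene headlessly (hypothetical path).
hou.hipFile.load("/mnt/project/shots/sh010.hip")

# Render a frame range on the Redshift ROP (default node name assumed);
# the same works for Mantra ROPs and for simulation/caching nodes.
rop = hou.node("/out/Redshift_ROP1")
rop.render(frame_range=(1, 100))
```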

Continue reading “Setting up Houdini on a headless Linux server”
