Two Guys and a Toolkit - Week 5: Publishing from Maya to Nuke

Josh Tomlinson    ●    Oct 15, 2015


Publishing from Maya to Nuke
Hi everyone! Josh here. Welcome back for part five of our series dedicated to building a simple pipeline using Toolkit.

Last week Jeff talked about PublishedFile grouping as a way of associating individual publishes with one another, both as an organizational tool and as a way of representing them within the pipeline. If you haven’t read it yet, I highly recommend it.

Are any of you doing anything similar to package rigs and models together? Or are you presenting a collection of render publishes to compositors as a single item in the loader? How are you leveraging Shotgun entities in unique ways to store your pipeline’s metadata and to present information to your users in a clean, easy-to-understand fashion? Let us know in the comments!

This week we’re going to stay in the realm of publishing and try to answer one of the most common questions I’ve heard recently:
“How do I publish my Maya renders to Nuke?”
We had three support tickets in the past two weeks asking for information on how to set up this handoff. We also heard from Toolkit veteran and recent Pipeline Award winner Benoît Leveau in the comments of the week 3 blog post:
“What was really missing from the publishing hooks when I installed Toolkit was a way to publish renders from Maya. This sounds like the most basic thing a pipeline should do, yet it's not mentioned anywhere. I had to look at how this was done in Nuke and replicate a similar behavior in Maya, which made me understand a lot about the template and context systems in Toolkit (which is a good thing) but having examples of it would be great when starting.”
-Benoît Leveau
The lighting-to-compositing handoff was something we needed to build for our simple pipeline as well, so I think it makes a lot of sense to dedicate this week’s post to looking at what it takes to get published frames from Maya into Nuke using Toolkit, and to putting together an example implementation that new Toolkit users can reference.

How do we get rendered images from Maya to Nuke?

For this post, you can find all of the necessary pieces in our tk-config-simple and tk-framework-simple repos. I’ve put plenty of comments in the code to help explain each little bit, but if you have any questions, don’t hesitate to ask. If you’ve built this type of workflow for your studio and you have ideas about how to improve this example, I would absolutely love to hear your thoughts!

Identifying what is needed

When the tickets came in asking how to set up publishing frames to Nuke, I have to admit that I was surprised there wasn’t much information in the Toolkit docs. I found this demo video showing how to use Toolkit in Nuke and this video talking about the Loader app. The first video showed how to load external Asset publishes and update render passes, but there was only a passing mention of loading shot renders. In the second video I could see a publish type called Rendered Image in the filters, so I thought that maybe the loader was already set up to receive rendered frame publishes by default in Nuke.

So I took a look at the default configuration of the Nuke engine, and sure enough I came across this line in the shot_step.yml:

Rendered Image in Nuke engine configuration
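
In rough terms, that line maps the Rendered Image publish type to the loader’s read_node action. Here’s an abbreviated sketch of what that block looks like in the Nuke section of shot_step.yml (other mappings and engine settings are omitted for brevity):

    # shot_step.yml -- Nuke engine block (abbreviated)
    tk-multi-loader2:
      action_mappings:
        Rendered Image: [read_node]
        # ... mappings for other publish types live here too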

It appeared as though the multi loader app running inside of Nuke would accept PublishedFiles of type Rendered Image, and would probably create Nuke Read nodes from them. To double check, I did a quick search through the multi loader code to see what the behavior was. I found the description of the read_node action in the Nuke actions hook:

read_node action as defined in the loader
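
Stripped down to its essence, the read_node action just points a Nuke Read node at the published path. Here’s a rough sketch of that behavior; this is a simplification, not the actual hook code, which also handles frame ranges, error reporting, and other details:

    # Rough sketch of what the loader's read_node action does (not the real hook code).
    import nuke

    def create_read_node_for_publish(publish_path):
        """Create a Read node pointing at a published frame sequence."""
        read_node = nuke.nodes.Read()
        # fromUserText() lets Nuke parse the path and detect the frame range.
        read_node["file"].fromUserText(publish_path)
        return read_node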

Bingo! Confirmed. That was exactly what we needed. The next step was to work backwards, and get some Rendered Image PublishedFiles created to feed into Nuke.

Rendering and Templates

Before I went any further, I needed to answer a couple of workflow questions:

  • Where would the frames end up on disk? 
  • How would the artists be publishing?

One of the support tickets I was responding to while writing up this example asked about including the render layer name and camera name in the path to the frames on disk. That studio was initiating renders from within Maya and wanted to publish from there as well. Those requirements helped drive how I answered the questions above, and how I decided to build the example that follows.

The answers to these questions will vary greatly across studios, so it’s important to keep in mind that what follows is just an example of one way to set up Rendered Image publishes for Nuke.

Publishing from within a DCC like Maya, where your render setup lives, has a couple of benefits. First, if the DCC has Toolkit support, you can leverage the Publish app and customize the hooks to meet your needs. You’ll also have access to the cameras, render layers, and passes as the hooks execute; this makes identifying the rendered frames to publish fairly straightforward.

One downside to publishing from within the DCC is that the artist may not have the session open when the render completes. Perhaps their file takes a long time to load, or maybe it’s just more convenient for them to publish from some other working context. Launching the DCC may simply not be an optimal way to publish in your studio. In these scenarios, you’ll probably find yourself writing your own method for discovering the rendered frames to publish, but the underlying code will likely look similar to the multi publish hooks we’re going to override for this example.

The next step I took was to define a template path for the frames to be rendered. As I mentioned before, I wanted the camera and layer names in the path, so I added the following to the keys section in core/templates.yml within the tk-config-simple repo:

Maya camera and layer name template keys
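
The keys themselves are simple string keys. They look something like this (key names here are illustrative; check core/templates.yml in the tk-config-simple repo for the exact definitions):

    # core/templates.yml -- keys section (illustrative names)
    keys:
        maya.camera_name:
            type: str
            filter_by: alphanumeric
        maya.layer_name:
            type: str
            filter_by: alphanumeric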

Next I added the actual render template:

Simple template for rendered frames
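
Roughly speaking, the template just combines the shot context, the camera and layer keys, the version, and a frame sequence key ({SEQ}) into one path. Something along these lines; the definition below is illustrative, and the real one lives in core/templates.yml in tk-config-simple:

    # core/templates.yml -- paths section (illustrative definition)
    paths:
        maya_shot_render:
            definition: 'sequences/{Sequence}/{Shot}/{Step}/images/{name}/v{version}/{maya.layer_name}/{Shot}_{maya.layer_name}_{maya.camera_name}_v{version}.{SEQ}.exr'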

I only defined a single template, but I can imagine a production facility having several different conventions for rendered frames. In that case, you’d need to create templates for each convention and replicate some of the logic you’ll see in the hooks below.

Publish hook modifications

To generate the Rendered Image publishes, I needed to create a new secondary publish type, like we discussed in previous weeks with cameras, alembic caches, and shader networks. This required adding the definition of the new publish type to the secondary_outputs list in the Maya engine block of the shot_step.yml environment config:

Secondary output definition in environment config
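
The entry follows the same shape as the other secondary outputs in the config. A sketch of what it might look like (the values here are illustrative; tank_type is the important bit, since that’s the PublishedFile type the Nuke loader filters on):

    # shot_step.yml -- one entry in the tk-multi-publish secondary_outputs list (illustrative values)
    - description: Publish rendered frames for use downstream in compositing
      display_group: Renders
      display_name: Rendered Frames
      icon: icons/render_publish.png
      name: rendered_image
      publish_template: maya_shot_render
      required: false
      scene_item_type: rendered_image
      selected: false
      tank_type: Rendered Image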

The secondary output definition for the Rendered Image type helps drive how the rendered frames will be displayed in the publisher. The publish_template field is slightly misleading here, since the frames will have already been rendered by the time the publisher comes up. This is unlike other secondary Maya publishes, which use the template to drive where to write those additional files as the publish hooks execute. For this example, since I was only using one template for rendered images, I put it in there for easy access inside the hook.

The first hook that needed to be modified was the scan_scene hook. You can see the changes I made to it here. The purpose of this particular hook is to identify the things in the session that need to be published. So I needed to find out if there were any rendered frames on disk matching the maya_shot_render template.

The thing I really started to understand while working with this code is how the templating system works. The logic I wrote was basically to iterate over the cameras and render layers in the Maya session and evaluate the render template with those names. If there were frames on disk that matched the evaluated path, then I had something to publish! I found a method in the Toolkit API called abstract_paths_from_template that did all the hard work for me. Given a template and a dictionary of fields, it returns the matching paths on disk. Once I had the paths I was looking for, I added them to the list of items to be published.
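
Here’s a condensed sketch of that logic. It’s illustrative rather than the full hook from tk-config-simple; the helper name, the hard-coded template name, and the work_fields argument are all assumptions for the sake of the example:

    # Condensed sketch of the rendered-frame discovery in a scan_scene-style hook.
    import maya.cmds as cmds

    def find_rendered_frame_items(app, work_fields):
        """Return publish items for frame sequences found on disk."""
        tk = app.sgtk
        render_template = tk.templates["maya_shot_render"]  # hard-coded template name

        items = []
        for camera in cmds.listCameras():
            for layer in cmds.ls(type="renderLayer"):
                fields = dict(work_fields)
                fields["maya.camera_name"] = camera
                fields["maya.layer_name"] = layer

                # Returns existing paths matching the template, with the frame
                # number left abstract (one entry per sequence rather than per frame).
                for path in tk.abstract_paths_from_template(render_template, fields):
                    items.append({
                        "type": "rendered_image",
                        "name": "%s / %s" % (layer, camera),
                        "other_params": {"path": path},
                    })
        return items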

Something that was less than ideal was having to hard-code the template names in the scan_scene hook. Ideally, the hook would have access to the publish_template setting defined in the app’s config, but I don’t think it does. That wouldn’t be sufficient anyway in the case where you have multiple render templates to evaluate. Perhaps you could pass in a list of templates via the setting, but I didn’t think to try it. Another idea would be to establish a naming convention for your render path templates and key off of that to find the appropriate templates to use for publishing. Either way, this is something to consider improving upon in your implementation.

One other personal point to make here is that there is a {version} key in the maya_shot_render template. I retrieved the version from the maya_shot_work file path and used it to populate the render template. I don’t know if this is the best way to determine the version number in Toolkit, but I do like the fact that I can tie my rendered frames to a scene file. If my main render pass is version 003, then I know it was rendered from version 003 of my work file. Tying versions together like this is something I’ve always been a big proponent of.
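
For reference, pulling the version out of the work file path is just a matter of running that path back through the work template. A small sketch, assuming the current Maya scene was saved through the workfiles app into a maya_shot_work location:

    # Sketch: derive the version number from the current work file path.
    import maya.cmds as cmds
    import sgtk

    scene_path = cmds.file(query=True, sceneName=True)
    tk = sgtk.sgtk_from_path(scene_path)                 # sgtk API handle for this project
    work_template = tk.templates["maya_shot_work"]
    work_fields = work_template.get_fields(scene_path)
    version = work_fields["version"]                     # reused for the rendered frames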

Along those lines, how does your studio manage versions across workfiles, published files, etc? Do you have any custom logic that keeps versions in sync or ties versions together in some way? It would be awesome to hear if anyone has abstracted the concept of a version in a unique and interesting way.

Now that I had secondary items to publish, I needed to actually publish them. I didn’t add any custom validation for the Rendered Image publishes, but I would highly recommend it for real production pipelines. I stubbed it out in the secondary_pre_publish hook here and added some comments. Some suggestions for validation include:

  • All frames exist
  • File sizes are consistent
  • Permissions are correct
  • Secondary files exist (thumbnails or quicktimes for example)
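
As an example of the first item, a frame-existence check in a secondary_pre_publish-style hook could look something like this. It’s a sketch that assumes the frame range comes from the Maya render globals and that render_template and fields are the same objects used during the scan:

    # Sketch: verify every expected frame exists on disk before publishing.
    import maya.cmds as cmds

    def missing_frames(tk, render_template, fields):
        start = int(cmds.getAttr("defaultRenderGlobals.startFrame"))
        end = int(cmds.getAttr("defaultRenderGlobals.endFrame"))

        # Frames actually on disk, with the frame number (SEQ) treated as a wildcard.
        on_disk = set(tk.paths_from_template(render_template, fields, skip_keys=["SEQ"]))

        errors = []
        for frame in range(start, end + 1):
            expected = render_template.apply_fields(dict(fields, SEQ=frame))
            if expected not in on_disk:
                errors.append("Missing frame: %s" % expected)
        return errors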

And finally, I modified the publish code itself, which was relatively minimal for this example. In the secondary_publish hook, all I really had to do was get the publish path from the other_params dictionary I supplied from the scan hook and populate the data needed to register the publish with Shotgun. As I mentioned before, the other Maya secondary publish types we’ve covered would be exported from the session to a new file on disk in this portion of the hook. Since the rendered images are already on disk, I simply forwarded the path on to the registration code. In a real production studio, you might do additional processing here, such as setting file permissions, creating symlinks, etc.
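
In rough form, the interesting part of that hook boils down to a single register_publish() call. This is a sketch under the same assumptions as the scan example above; the helper and argument names are illustrative:

    # Sketch: register the already-rendered frames as a Rendered Image publish.
    import sgtk

    def register_rendered_frames(app, item, work_fields, comment, thumbnail_path, sg_task):
        publish_path = item["other_params"]["path"]  # abstract path set by the scan hook

        sgtk.util.register_publish(
            app.sgtk,
            app.context,
            publish_path,
            name=item["name"],
            version_number=work_fields["version"],
            comment=comment,
            thumbnail_path=thumbnail_path,
            task=sg_task,
            tank_type="Rendered Image",  # the PublishedFile type the Nuke loader filters on
        )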

Maya to Nuke

Now let’s look at how it all fits together. Here are some example frames on disk.

Rendered frames on disk

You can see the two layers on disk showing up as checkboxes in the Publish app.

Secondary publish of rendered images

After I publish, I can see the PublishedFile show up in Shotgun.

Publish file in Shotgun

One thing that would make this much nicer would be to upload one of the rendered frames to use as the publish thumbnail. This wouldn’t be too hard to whip up; I just didn’t get to it. My example just uses the thumbnail path that comes with the publish.
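
A minimal sketch of that idea, assuming publish_path is the abstract publish path from the earlier sketch and uses a %04d-style frame token; depending on your renderer, you may also want to convert the frame to a jpg before handing it to Shotgun as a thumbnail:

    # Sketch: pick a middle frame from the published sequence to use as the thumbnail.
    import glob

    frame_paths = sorted(glob.glob(publish_path.replace("%04d", "*")))
    if frame_paths:
        thumbnail_path = frame_paths[len(frame_paths) // 2]
        # pass thumbnail_path to sgtk.util.register_publish() instead of the default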

The great news is that I made absolutely no changes on the Nuke side. Once I had the published Rendered Images, they showed up immediately in the Nuke loader:

Rendered Images in the Nuke loader
I was able to use the Action button to create read nodes for each of the layers, and it worked great!


That’s as far as I took my example. I hope this post provided enough information to help some of you who might be considering building a Maya to Nuke handoff for your pipeline. There is certainly a lot of room for customization with Toolkit, and within the publish hooks specifically, which is great in this particular case because every studio will have its own unique setup and its own unique requirements for handling rendered frames. As I said earlier, I’d love to hear about your experiences getting rendered images published from Maya into Nuke (or any other software, for that matter), so please let me know what you’ve been up to in the comments section.

This marks the halfway point in the series, and I wanted to say thank you all for sticking with us. As Jeff mentioned in last week’s post, we’re going to try to start steering away from the “here’s what we did” posts and move toward the “wouldn’t it be cool if” discussions. If there’s anything you all are interested in exploring, we’d love to hear about it.

Jeff teased this in a previous post, but I think spec’ing out how to build a Product-based, Subscription pipeline within a Shotgun/Toolkit context would be a really fun exploration for a future post. I think Subscriptions are critical to getting immediate answers to some of the toughest, most common questions people ask in the middle of production. I also believe Subscriptions provide a key ingredient for building efficient multi-site pipelines. If this is something you’d be interested in exploring as well, please let us know.

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.