Two Guys and a Toolkit - Week 6: Dataflow and Workflow

Josh Tomlinson    ●    Oct 22, 2015


Dataflow and Workflow

Hi everyone! Welcome back for part six of our series dedicated to building a simple pipeline using Toolkit.

Up to this point in the series, we’ve been looking at setting up and building the fundamental pieces of our simple pipeline. Here are links to the previous posts, just in case you missed anything:

  1. Introduction, Planning, & Toolkit Setup
  2. Configuration
  3. Publishing
  4. Grouping
  5. Publishing from Maya to Nuke

As always, please let us know if you have any questions or want to tell us how you’re using Toolkit in exciting and unique ways. We’ve had a really great response from you all in previous posts and we look forward to keeping that discussion going. So keep the feedback flowing!

This week we thought we’d talk about how all the pieces we’ve been building fit together and discuss the dataflow and artist experience within the context of our pipeline. As usual, we'll take a look at what bits of Toolkit worked well for us and which ones we think could be better. This will give us a solid foundation for the rest of the series as we transition into a discussion with you all about our pipeline philosophies and building more sophisticated workflows.

Workflow

Hey everyone, Josh here! One of the strengths of Toolkit, in my opinion, is that it exposes a common set of tools for every step of the pipeline. This means there is a common pipeline "language" that everyone on production speaks. If someone says, "you need to load version 5 of the clean foreground plate from prep," that means something concrete whether you're in animation, lighting, or compositing, because you're all using the same toolset. The more you can avoid building step-specific workflows and handoff tools, the more flexible your pipeline will be. Obviously you still need to be able to customize how data flows between steps, but you should avoid hardwiring that into the pipeline itself.
 
Since we've made a conscious effort to keep our pipeline simple, and because we like having a consistent set of tools across all of our pipeline steps, we haven't deviated much from the standard, out-of-the-box Toolkit apps. So rather than analyzing the workflow at each pipeline step individually, I think it's more useful to look at how the average artist working in the pipeline uses these tools. I'll also point out the customizations we've made (most of which we've mentioned before). Hopefully, combined with the Dataflow section of this post, this will give you a complete view of how the pipeline is meant to work and how the packaged Toolkit tools are used.

Loading Publishes

The Loader app is used by almost every step in the pipeline as a way of browsing and consuming upstream PublishedFiles. The loader has quite a few options for configuring the browsing and filtering experience for the user, which is really cool. And of course there are hooks to customize what happens when you select a publish to load.
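To give you a sense of where that customization lives, here's a stripped-down sketch of what a Maya actions hook for the Loader can look like. It's paraphrased rather than copied from the shipped hook, so treat the names and details as illustrative:

```python
# A minimal sketch of a Loader actions hook for Maya, paraphrased from the
# style of the shipped hooks. Your config's hook will differ in the details.
import maya.cmds as cmds
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()


class MayaActions(HookBaseClass):

    def generate_actions(self, sg_publish_data, actions, ui_area):
        # Tell the Loader which actions this hook offers for the selected publish.
        action_instances = []
        if "reference" in actions:
            action_instances.append({
                "name": "reference",
                "params": None,
                "caption": "Create Reference",
                "description": "Reference the published file into the current scene.",
            })
        return action_instances

    def execute_action(self, name, params, sg_publish_data):
        # Resolve the publish path and run the action the user picked.
        path = sg_publish_data["path"]["local_path"]
        if name == "reference":
            namespace = (sg_publish_data.get("name") or "publish").replace(" ", "_")
            cmds.file(path, reference=True, namespace=namespace)
```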

PublishedFile loading

From a user standpoint, there seems to be a lot of clicking to get at what you're actually interested in. Between drilling down into a context, filtering publishes, selecting a publish, and finally performing an action, you can rack up quite a few clicks. If you need publishes from more than one input context, you potentially have to start all over again. Users often know exactly what they want, so being able to type a name into a search widget might be more convenient. There is a search/filter widget in the UI, but it only narrows down what shows up in the already-filtered view. It would be great to have a smart search that prioritized publishes in the same Shot or Asset as the user's current context.

I also found the filtered list of publish files difficult to parse visually. You can see in the screenshot above that the Loader is displaying PublishedFiles in a single list and they are sorted by name. As a user, I would love to be able to sort by task, version number, username, date, etc.

To me, the Loader is similar enough to a file browser that it is easy to notice where some of the common file browser features are missing. In addition to the sorting and filtering options, I noticed immediately that there were no buttons at the bottom of the UI. I was expecting at least a Cancel/Close button. What’s the general feedback you all get from artists using the Loader UI?

I also wonder how people know which publishes they need to load on production. Is this just a discussion people have with folks upstream (which is perfectly reasonable)? Or does your facility do anything special to track the "approved" publishes in Shotgun and relay that information to the artists somehow? Have you used the configuration capabilities of the Loader to show only publishes with a certain status, for example? It would also be interesting to spec out how we might use Shotgun to predict what a user might want or need for their context.
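As a thought experiment, here's roughly what an "approved publishes only" query could look like against the Shotgun API. The assumption that approval lives on the PublishedFile status field with an "apr" status is ours, not a rule, so adjust it to however your site models approval. (The Loader's publish_filters setting accepts filters in this same format, if I'm remembering the setting name right, so something similar could live in the config instead of in code.)

```python
# A rough sketch of querying only "approved" publishes for the current
# context. Assumes approval is tracked on the PublishedFile status field
# (sg_status_list) with an "apr" status -- an assumption, not a convention.
import sgtk

engine = sgtk.platform.current_engine()
context = engine.context
sg = engine.shotgun

filters = [
    ["entity", "is", context.entity],     # publishes for this Shot/Asset
    ["sg_status_list", "is", "apr"],      # only approved ones (assumption)
]
fields = ["code", "version_number", "published_file_type", "path"]

approved = sg.find(
    "PublishedFile",
    filters,
    fields,
    order=[{"field_name": "version_number", "direction": "desc"}],
)

for publish in approved:
    print("%s v%03d" % (publish["code"], publish["version_number"]))
```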

You may have noticed a “Show deprecated files” checkbox in the bottom-right corner of the Loader screenshot. That’s a really cool feature that was added by Jesse Emond, the Toolkit intern, who has been kicking some serious butt around here. We’ll give Jesse a formal introduction in a future post where he’ll be able to talk about deprecating PublishedFiles in our simple pipeline. So definitely be on the lookout for that!

We mentioned in a previous post that we customized the loader hooks to connect shaders and alembic caches as they’re imported. You can see that hacky little bit of code here. And here’s what it looks like in action:

Auto shader hookup on Alembic import
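Stripped of the hacks, the core of the idea is small: once the Alembic cache is imported and the published shaders are brought in, walk the new geometry and assign shading groups by name. The "_SG" naming convention below is an assumption made purely for illustration:

```python
# A simplified sketch of the shader hookup: assign shading groups to the
# freshly imported Alembic meshes by matching names. The "<geo>_SG" naming
# convention here is an assumption, not something the pipeline mandates.
import maya.cmds as cmds


def hookup_shaders(alembic_nodes):
    shading_groups = set(cmds.ls(type="shadingEngine"))

    for node in alembic_nodes:
        # strip any dag path / namespace noise off the imported node name
        short_name = node.split("|")[-1].split(":")[-1]
        sg_name = "%s_SG" % short_name

        if sg_name in shading_groups:
            cmds.sets(node, edit=True, forceElement=sg_name)
```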

File Management

Next up is the Workfiles2 app, which is currently in pre-release. This app is tasked with managing the files that artists work in. Every step of our simple pipeline uses it to open and save working files.

Saving the work file

The default interface, in Save mode, has fields for specifying a name, the version, and the extension of the working file. These fields are used to populate the template specified in the app’s configuration for the engine being used. In this example, the template looks like this:

The template being referenced in the workfiles config


The template itself
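Conceptually, what the app is doing with those fields is simple: resolve what it can from the context, merge in what the artist typed, and run it through the template. Here's a rough sketch of that flow using the Toolkit API; the template name and its keys are assumptions based on a default-style config, so check your own templates.yml:

```python
# A rough sketch of how the Save dialog's fields resolve to a path.
# "maya_shot_work" and its keys are assumptions based on a default-style
# config -- your templates.yml is the source of truth.
import sgtk

engine = sgtk.platform.current_engine()
tk = engine.sgtk
context = engine.context

template = tk.templates["maya_shot_work"]

# start with the fields the context can resolve (Sequence, Shot, Step, ...)
fields = context.as_template_fields(template)

# then layer in what the artist typed into the dialog
fields["name"] = "blocking"  # hypothetical name entered by the artist

# figure out the next available version by scanning existing work files
existing = tk.paths_from_template(template, fields, skip_keys=["version"])
versions = [template.get_fields(p)["version"] for p in existing]
fields["version"] = max(versions) + 1 if versions else 1

work_path = template.apply_fields(fields)
print(work_path)
```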

Not having to manually keep track of which version number to tack onto the file is a nice convenience, but I do wish Toolkit had a more robust concept of versioning. Right now, the user can manually create as many versions of a Maya file as they want, which is great, but the version is isolated to that single file. My preference would be for the version to be part of the work area context itself, and for the state of the work area, including my main work file, any associated files I'm using, and all my upstream references, to be versioned together. In simple terms, I'd like to be able to get back to the state of my entire work area at any time on production. I'm getting a little ahead of myself, though. I want to discuss this more in a future post but, in the meantime, definitely let us know how you handle versioning at your facility.

The Save dialog can be expanded as well:

Expanded File Save dialog

You’ll notice the similarities with, and reuse of, some of the UI elements from the Loader. The expanded view allows you to browse and save within a different context.

As mentioned, the workfiles app is also used for opening working files:

File Open dialog

As with the other views, you can browse to the context you’re interested in and find work files to open. I think the interface looks clean, but I still find myself wanting the ability to do more sophisticated searching and filtering across contexts. What do you all think?

Snapshotting 

The Snapshot app is a quick way to make personal backups of your working file.

Snapshot dialog

Artists can type a quick note, take a screenshot, and save the file into a snapshots area. The app also provides a way to browse old snapshots and restore them. It's a simple tool to use and a nice feature to provide artists. I'd actually like to have comments and thumbnails attached to artists' working files as well, and I wonder if this functionality shouldn't just be part of the workfiles app's Save/Open. Thoughts?

It would also be nice to have a setting and hook that allowed for auto-saving, or auto-snapshotting, the files. There could be a setting that limits the number of saves to keep, too. I realize this type of functionality exists in many DCCs already, but having a consistent interface for configuring and managing auto-saves across all pipeline steps and DCCs would be great.
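In the meantime, it's not hard to fake something like this per DCC. Here's a rough Maya-only sketch of the idea, exporting the current scene state into a snapshots folder on a timer. The folder layout, file naming, and fifteen-minute interval are arbitrary choices of ours, not anything the Snapshot app actually does:

```python
# A rough, Maya-only sketch of "auto snapshot": periodically export the
# current scene state into a snapshots folder next to the work file.
# Interval, naming, and the retention limit are all illustrative choices.
import datetime
import os

import maya.cmds as cmds
from sgtk.platform.qt import QtCore

MAX_AUTO_SNAPSHOTS = 10  # illustrative limit on how many auto-saves to keep


def auto_snapshot():
    scene_path = cmds.file(query=True, sceneName=True)
    if not scene_path:
        return  # nothing saved yet, so nowhere sensible to put a snapshot

    snap_dir = os.path.join(os.path.dirname(scene_path), "snapshots")
    if not os.path.exists(snap_dir):
        os.makedirs(snap_dir)

    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    base = os.path.splitext(os.path.basename(scene_path))[0]
    snap_path = os.path.join(snap_dir, "%s.auto_%s.ma" % (base, stamp))

    # export the current scene state without touching the artist's file
    cmds.file(snap_path, force=True, exportAll=True,
              preserveReferences=True, type="mayaAscii")

    # prune the oldest auto-snapshots beyond the limit
    snaps = sorted(f for f in os.listdir(snap_dir) if ".auto_" in f)
    for old in snaps[:-MAX_AUTO_SNAPSHOTS]:
        os.remove(os.path.join(snap_dir, old))


timer = QtCore.QTimer()
timer.timeout.connect(auto_snapshot)
timer.start(15 * 60 * 1000)  # every 15 minutes
```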

Publishing

Publishing, and the Publisher app, is something we've covered quite a bit in previous posts, so we don't need to go into too much detail here. We've shown some of the customizations we've made to the configs and hooks in Maya. Here are the secondary exports we mentioned, in action:

Custom secondary publishes
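For reference, the heart of an Alembic secondary export is only a few lines of Maya. In practice the frame range, root nodes, and output path come from the publish app's settings and templates rather than being passed in as arguments like this simplified sketch does:

```python
# A simplified sketch of an Alembic secondary export from Maya. In practice
# the frame range, roots, and output path come from the publish config.
import maya.cmds as cmds


def export_alembic(root_node, out_path, start_frame, end_frame):
    # make sure the Alembic export plugin is available
    if not cmds.pluginInfo("AbcExport", query=True, loaded=True):
        cmds.loadPlugin("AbcExport")

    job = (
        "-frameRange %d %d -uvWrite -writeVisibility "
        "-root %s -file %s" % (start_frame, end_frame, root_node, out_path)
    )
    cmds.AbcExport(j=job)
```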

Updating the Workfile

The Scene Breakdown app displays and updates out-of-date references and is used by every step that has upstream inputs. There is a hook for customizing how referenced publishes are discovered in the file and which of them are considered out-of-date, and another for customizing how those references are actually updated. This makes the app, like all the Toolkit apps, very flexible and able to adapt to changes in pipeline requirements.
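To make that concrete, here is the rough shape of those hooks for Maya, paraphrased rather than copied from the shipped hook: one method reports the references in the scene so the app can match them against templates and publishes, and another swaps paths when the artist asks for an update.

```python
# A paraphrased sketch of a Breakdown scene-operations hook for Maya. The
# shipped tk-multi-breakdown hook is the real reference; details will differ.
import maya.cmds as cmds
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()


class BreakdownSceneOperations(HookBaseClass):

    def scan_scene(self):
        # report each file reference so the app can match it against templates
        items = []
        for ref_path in cmds.file(query=True, reference=True):
            ref_node = cmds.referenceQuery(ref_path, referenceNode=True)
            items.append({"node": ref_node, "type": "reference", "path": ref_path})
        return items

    def update(self, items):
        # "path" has already been rewritten by the app to the newer version
        for item in items:
            cmds.file(item["path"], loadReference=item["node"])
```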


Breakdown of references in the file

The interface itself is fairly simple and easy to understand. I like being able to filter for what is out of date and bulk-update them. I do think there’s room for a discussion about combining the Breakdown and Loader apps into a single interface where you can see what you have in your file, what’s out-of-date, and what new things are available to reference. I’d also like to have the ability to lock out-of-date references if I know I don’t ever want to update them. This might be useful when a shot is close to final and you don’t want to risk the update.

One of the things we've teased is our upcoming discussion about a Subscription system to complement Publishing. We'll be talking about what it would mean to keep track of subscriptions at the context level and to have a Breakdown-like interface that lets you manage inputs across DCCs. I won't go into more detail right now, but definitely check back in for that post.

Dataflow

Hey everyone, Jeff here! I'm going to give you a high-level view of how data flows through our pipeline, plus some ideas about what it would look like if we added a couple more stages of production that are likely to exist in your studio. There isn't anything here that we haven't already talked about in previous posts, but it does show everything together from 10,000 feet, so to speak.

What We Have

Let’s take a look at what Josh and I have up and running in our pipeline.


Pretty simple, right? As we discussed in the first post of this series, we've limited ourselves to the bare minimum number of pipeline steps for a fully-functioning pipeline. Just as importantly, we've limited the types of data we're making use of. From the above diagram you can see that our outputs are limited to Alembic caches, Maya scene files, and image sequences. By using Alembic everywhere it's suitable, we cover most of the pipeline's dataflow requirements with a single data type. This is great from a developer's point of view, because it means we're able to reuse quite a bit of code when it comes to importing that data into other DCC applications. In the end, the only places in the pipeline where we're required to publish DCC-specific data are our shader networks and rigs. These components are tightly bound to Maya, and as such need to remain in that format as they flow down the pipeline.
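As an illustration of that reuse, the "bring this cache into the current app" logic can be a single function with a small per-engine branch. This is a sketch of the structure, not code from our hooks; the Maya and Nuke calls are the standard ones for Alembic:

```python
# An illustration of the code reuse a single cache format buys us: one
# import function, with a small per-engine branch. The structure is a sketch.
import sgtk


def import_alembic(path):
    engine = sgtk.platform.current_engine()

    if engine.name == "tk-maya":
        import maya.cmds as cmds
        if not cmds.pluginInfo("AbcImport", query=True, loaded=True):
            cmds.loadPlugin("AbcImport")
        cmds.AbcImport(path, mode="import")

    elif engine.name == "tk-nuke":
        import nuke
        read_geo = nuke.createNode("ReadGeo2")
        read_geo["file"].setValue(path)
```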

What Could Be

If we expand our pipeline out to cover live-action plates, which are obviously a requirement in any live-action visual effects pipeline, we add a bit of complexity.


You'll notice, though, that we have not added any additional TYPES of published data. We have more Alembic, which will contain the tracked camera plus any tracking geometry that needs to flow into the Layout step of the pipeline, plus the image sequences that make up the plates for the shot. In the end, we've added very little complexity to the platform underneath the pipeline in order to support the additional workflow elements.

We can expand this further by adding in an FX step.

I will fully admit that this is an optimistic amount of simplicity when it comes to FX output data. It's entirely possible that the list of output file types could expand beyond what is shown here, as simulation data, complex holdouts, and any number of other FX-centric outputs could come into play. However, the basics are still covered by what we already support: final elements rendered and handed to a compositor, or cached geometry sent to lighting for rendering.

Conclusion

That’s it for this week! At the end of the series we’ll be putting together a tutorial that shows you how to get our simple pipeline up and running. So if you still have questions about how things work, that should help fill in the blanks.

Like we always say, please let us know what you think! We love the feedback and are ready to learn from you all, the veterans, what it’s like to build Toolkit workflows on real productions. The more we hear from you, the more we learn and the more prepared we’ll be for supporting you down the road.

Next week we’re planning on diving into the realm of Toolkit metaphysics - or at least more abstract ideas about pipeline. If you have strong opinions or philosophies about how production pipelines should work, we’ll look for you in the comments! Have a great week everyone!

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.