QC'ing approach to large renders
-
Hi,
Do you have any best-practices approach to quality-controlling batches of videos after the render process? We are rendering batches on the order of 1000 videos at a time and are dealing with an instance where videos contain errors that were not caught when we generated a test subset prior to rendering the entire batch. Any guidance would be greatly appreciated.
Thanks.
Pete
-
Pete,
I can’t really think of any way to fully automate the process, but it might be possible to set up an after-output event script that uses FFmpeg to take some screenshots of the rendered video. You could use something like:
```bash
ffmpeg -i path/to/your-video-name.mp4 -ss 00:00:07.000 -vframes 1 path/to/your-video-thumb.jpg
```
Adjust the arguments to capture a few shots of the output for manual review. It won’t catch problems automatically, but it would at least let someone spot-check potential problem areas without having to scrub through the videos manually.
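As a rough sketch of what such a script might look like, here’s a version that loops over a few timestamps per file. The assumption that the rendered file’s path arrives as the first argument is mine; wire it up to match however your event is actually configured:

```bash
#!/bin/bash
# Rough sketch of an after-output spot-check script.
# Assumes the rendered file's path is passed as the first argument.
video="$1"
base="${video%.*}"

# Grab one frame at each timestamp; tune the list toward the
# scenes most likely to break.
for ts in 00:00:02 00:00:07 00:00:15; do
  # -ss before -i seeks quickly; -frames:v 1 writes a single frame.
  ffmpeg -y -ss "$ts" -i "$video" -frames:v 1 "${base}-${ts//:/}.jpg"
done
```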
Hopefully, that helps. If I think of anything else, I’ll be sure to let you know.
Thanks,
Jeff
-
@pbretz During the transcode process, you can extract a frame of Templater’s output to an image file. If you output key frames of the video that have the data integrated, you can use the operating system’s image preview feature to skim through the thumbnails and see whether any data didn’t merge correctly. We call it a spot-check. In some cases, before running the batch, we’ll set the work area of a target to a one-frame duration, then use a PNG Sequence output module to output just a single frame. If all looks good, we’ll run the batch.
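For what it’s worth, here’s a minimal sketch of that batch-wide skim, assuming all the renders land in one folder (the folder path and frame time are placeholders):

```bash
#!/bin/bash
# Pull one frame from every rendered file into a thumbs/ folder,
# then skim the thumbnails with the OS image previewer.
renders="/path/to/renders"   # placeholder: your output folder
mkdir -p "$renders/thumbs"

for video in "$renders"/*.mp4; do
  name="$(basename "${video%.*}")"
  ffmpeg -y -ss 00:00:05 -i "$video" -frames:v 1 "$renders/thumbs/$name.jpg"
done
```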
-
With variable-length compositions making up the timeline, is there any way to vary which frame(s) get exported based on the varying timeline?
-
@pbretz You can force-set the work area’s start and end times in the target when the After Update event is broadcast.
Check out the code here:
This basically runs ExtendScript to change the target’s work area after all Layout Rules and Time Sculpting logic is complete.
-
@pbretz Also, if you use the suggestion from @Jeff, you could store the `-ss` time value for the spot frame as a string in your data source, then pass that value into the “After Job” event using the event script dollar-sign notation. As an example, add a column to your sheet, call it “spot-check-time”, and fill in the time values for each row. Wherever your ffmpeg script is called from, you can use `$spot-check-time` to pass in that job’s frame time in seconds (a sketch of the receiving script follows below). Does this help?
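Here’s the sketch mentioned above. Templater’s dollar-sign substitution supplies the value; the argument order and the output-path argument are assumptions on my part, so match them to your actual event configuration:

```bash
#!/bin/bash
# Sketch of an "After Job" spot-check script. Assumes the event is
# configured to pass the $spot-check-time value and the rendered
# file's path as the two arguments (adjust to your setup).
spot_time="$1"   # e.g. "00:01:23" from the spot-check-time column
video="$2"

ffmpeg -y -ss "$spot_time" -i "$video" -frames:v 1 "${video%.*}-spot.jpg"
```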
-
Thanks for the feedback, guys. These are helpful. Our challenge is that our template has about 30 variable scenes on the timeline, driven by a data file with about 100 variables. The issue is a combination of math (how do we best test the variable combinations that can arise from the data set, and are the calculations correct?) and the resulting video (do all the scenes that should show actually show, do all of the visuals display, etc.). The thumbnails can help save time spot-checking visuals in scenes. I think that can help us identify issues before rendering out videos, which take anywhere from 2 to 4 minutes depending on the system we’re using to render.
-
@pbretz It’s probably difficult to do without some kind of math if your timeline is moving around a lot, but since data values can be passed as arguments in an event script incantation, you could pass a time/timecode value into a script that extracts key frames as stills using the methods mentioned above.
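If maintaining per-row times gets tedious, one alternative (my suggestion, not a Templater feature) is to skip stored times entirely and derive them from each file’s actual duration with ffprobe, so variable-length outputs still get proportionally placed frames:

```bash
#!/bin/bash
# Sketch: extract frames at 25%, 50%, and 75% of each video's actual
# runtime, so variable-length renders are covered without per-row times.
video="$1"   # assumption: rendered file's path passed as first argument
base="${video%.*}"

# ffprobe reports the container duration in seconds.
duration="$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 "$video")"

for pct in 25 50 75; do
  ts="$(awk -v d="$duration" -v p="$pct" 'BEGIN { printf "%.3f", d * p / 100 }')"
  ffmpeg -y -ss "$ts" -i "$video" -frames:v 1 "${base}-${pct}pct.jpg"
done
```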