<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Data Source size limit?]]></title><description><![CDATA[<p dir="auto">Hi,</p>
<p dir="auto">We’re testing local JSON as a data source for our projects.  In Google Sheets, our data has about 150 columns and can range anywhere from 1 to 750 records.  I ran a test with about 30 records: the JSON took a little while to load (the file is read from the same local hard drive as the .aep), but it rendered much more quickly than a Google Sheet, with minimal delay between renders.  I then ran a test with a file of about 500 records, which took so long that it never actually loaded.  Would you expect that to be the case?  Is there a maximum number of key:value pairs and jobs that we should consider?  Thanks.</p>
<p dir="auto">Pete</p>
]]></description><link>https://forums.dataclay.com/topic/408/data-source-size-limit</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 21:12:46 GMT</lastBuildDate><atom:link href="https://forums.dataclay.com/topic/408.rss" rel="self" type="application/rss+xml"/><pubDate>Thu, 09 Apr 2026 18:59:56 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to Data Source size limit? on Fri, 10 Apr 2026 16:56:42 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/ariestav">@<bdi>ariestav</bdi></a> <a class="plugin-mentions-user plugin-mentions-a" href="/user/jeff">@<bdi>Jeff</bdi></a> Thanks for the info.  Sounds like we’d be better off just using Que?</p>
]]></description><link>https://forums.dataclay.com/post/1279</link><guid isPermaLink="true">https://forums.dataclay.com/post/1279</guid><dc:creator><![CDATA[pbretz]]></dc:creator><pubDate>Fri, 10 Apr 2026 16:56:42 GMT</pubDate></item><item><title><![CDATA[Reply to Data Source size limit? on Fri, 10 Apr 2026 15:20:26 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/pbretz">@<bdi>pbretz</bdi></a> Adding onto what <a class="plugin-mentions-user plugin-mentions-a" href="/user/jeff">@<bdi>Jeff</bdi></a> wrote, if you are going to have a single entry in the local JSON file, you’d want to use a <code>render-status</code> property key in the JSON object so that Bot would know when the over-written file was ready to be processed.</p>
]]></description><link>https://forums.dataclay.com/post/1278</link><guid isPermaLink="true">https://forums.dataclay.com/post/1278</guid><dc:creator><![CDATA[ariestav]]></dc:creator><pubDate>Fri, 10 Apr 2026 15:20:26 GMT</pubDate></item><item><title><![CDATA[Reply to Data Source size limit? on Fri, 10 Apr 2026 14:15:29 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/pbretz">@<bdi>pbretz</bdi></a></p>
<p dir="auto">The main issue in this scenario is the amount of memory After Effects allocates to ExtendScripts like Templater. Adobe doesn’t want third-party apps to have free rein over all of a system’s memory, which means that loading a large Data Source, as you’ve described here, can be a bit unwieldy. Templater has to open and read the entire dataset with these limited resources, which causes a significant slowdown.</p>
<p dir="auto">To get around this, we’d need to feed Templater the data in smaller chunks. The easiest way to achieve this would probably be to use a JSON file with only one entry and then use an Event Script on the “After Output” Templater Event to update the JSON with the next record. That would probably be the most efficient method and should keep the time required to parse the JSON to a minimum.</p>
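<p dir="auto">A minimal sketch of what such an “After Output” event script could do, assuming a master dataset file and a small single-entry file that Templater actually reads (the filenames, function name, and <code>render-status</code> convention here are all illustrative):</p>

```python
# Sketch of an "After Output" event script: after each render,
# move the next record from a master dataset into the small
# single-entry JSON file that Templater actually parses.
import json

def advance_record(master_path, current_path):
    """Pop the next record off the master queue and write it to the
    single-entry data source. Returns the record, or None when done."""
    with open(master_path, "r", encoding="utf-8") as f:
        records = json.load(f)
    if not records:
        return None  # nothing left to render
    record = records.pop(0)
    record["render-status"] = "ready"  # tells Bot the file may be read
    with open(current_path, "w", encoding="utf-8") as f:
        json.dump([record], f, indent=2)  # the single-entry data source
    with open(master_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)  # persist the shrinking queue
    return record
```

<p dir="auto">Because Templater only ever parses one record at a time, the JSON load stays fast no matter how large the master dataset grows.</p>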
<p dir="auto">Another option would be to use a <a href="https://support.dataclay.com/templater/content/how_to/data/setting_up/setting_up_a_url_feed_as_a_data_source.htm" rel="nofollow ugc">custom JSON URL feed</a> to pass one record at a time to Templater for processing. This method also requires an Event Script to update the JSON record after each render, since Templater can’t write back to the URL feed to update the render-status. However, this can also be a very efficient way to pass data to Templater, as you can customize the URL feed internally.</p>
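<p dir="auto">As a rough sketch of what such a feed could look like (purely illustrative, standard-library Python; a real feed would query your actual dataset, and your event script would advance the cursor after each render):</p>

```python
# Illustrative "one record at a time" JSON URL feed using only
# the Python standard library. Templater would poll this URL;
# a post-render event script would advance the cursor.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RECORDS = [
    {"id": 1, "headline": "First spot"},
    {"id": 2, "headline": "Second spot"},
]
cursor = {"i": 0}  # advanced externally after each render

def feed_body(records, i):
    """Return a JSON array holding only the current record."""
    return json.dumps([records[i]] if i < len(records) else [])

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = feed_body(RECORDS, cursor["i"]).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve locally:
# HTTPServer(("localhost", 8080), FeedHandler).serve_forever()
```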
<p dir="auto">Hopefully, that all makes sense, but if you have any other questions, let us know.</p>
<p dir="auto">Thanks,</p>
<p dir="auto">Jeff</p>
]]></description><link>https://forums.dataclay.com/post/1277</link><guid isPermaLink="true">https://forums.dataclay.com/post/1277</guid><dc:creator><![CDATA[Jeff]]></dc:creator><pubDate>Fri, 10 Apr 2026 14:15:29 GMT</pubDate></item></channel></rss>