<div class="gmail_quote">On Thu, Dec 17, 2009 at 1:15 PM, Jonathan Hoskin <span dir="ltr"><<a href="mailto:jonathan.hoskin@gmail.com">jonathan.hoskin@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="gmail_quote"><div class="im"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><br>
> Is there any way to cache the downloaded program data, rather than<br>
> re-downloading it each time, even if it hasn't changed? I noticed the<br>
> --cache option in the help, but how do I use that with myth?<br>
<br>
</div>Yeah, unfortunately you can't currently. I implemented the cache<br>
functionality as per the XMLTV spec, but I don't think much uses it<br>
yet.</blockquote><div><br></div></div><div>How about dropping a Squid proxy in front of the requesting machine to cache the response from Had's server, and then using a time-appropriate refresh_pattern directive for that URL in the Squid conf?</div>
<div><br></div><div>I have an almost-default deploy of Squid on an Ubuntu 9.10 server on my LAN, and caching of the <a href="http://nice.net.nz" target="_blank">nice.net.nz</a> XML just works.</div></div></blockquote><div>
<br>I don't think caching the entire file is what Robert's looking for. The issue is that on any given day about 80% of the file is likely to be unchanged (assuming it contains approx. 5 days of data, only the newest day's programs will differ). A Squid setup is still going to download the whole file each day, rather than just the 20% that's new.<br>
<br>Cheers,<br>Steve<br></div><div><br> </div></div><br>