I've used Make in a similar context: building documents.
Raw simulation results (.log) ->
Processed for plotting (.plot) ->
Ugly Fig files (.x.fig) ->
Pretty Fig files (.fig) ->
EPS files (.eps) ->
The final document (.pdf).
By including the right dependencies in there, you can have individual figures update themselves when the raw data changes, and whole swathes of charts update themselves when the 'fixer' scripts get updated.
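That chain maps naturally onto pattern rules. A minimal sketch, assuming hypothetical file names and helper scripts (`process.sh`, `plot.gp`, `fixer.sh` are placeholders for whatever does each step):

```make
# Hypothetical pipeline: .log -> .plot -> .x.fig -> .fig -> .eps -> .pdf
FIGS := $(patsubst %.log,%.eps,$(wildcard data/*.log))

paper.pdf: paper.tex $(FIGS)
	pdflatex paper.tex

%.plot: %.log process.sh        # re-extract when raw data changes
	./process.sh $< > $@

%.x.fig: %.plot plot.gp         # ugly first-pass figure
	gnuplot plot.gp $< > $@

%.fig: %.x.fig fixer.sh         # depending on the fixer script means
	./fixer.sh $< > $@      # editing it rebuilds every chart

%.eps: %.fig
	fig2dev -L eps $< $@
```

The key trick is listing the scripts themselves as prerequisites: touch `fixer.sh` and every `.fig` downstream of it is rebuilt.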
I've done this for a large scientific data reduction task. Each operation at the level of one month needed to be repeated several times to dial in parameters for tossing junk data.
Once the per-month operations were done, all the results were combined into various plots and html pages. There were about 120 months, and running one month took several CPU hours.
I put it all in a makefile. It saved a huge amount of time: when a parameter change affected only a few months' data, I didn't have to repeat the data reduction for every month just to update the plots and downstream analysis. I could run make and know that all the per-month changes would roll up correctly into the per-year and overall summaries.
Also, the -j argument to make handled parallelizing the data reduction at zero cost to me.
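The rough shape of that makefile (all names here are made up): one stamp file per month, with the summaries depending on all of them, so make sees each month as an independent target:

```make
MONTHS := $(shell seq -w 1 120)
STAMPS := $(patsubst %,reduced/%.done,$(MONTHS))

summary.html: $(STAMPS) summarize.sh
	./summarize.sh reduced > $@

# Redo a month's reduction when its raw data or the parameters change
reduced/%.done: raw/%.dat params.cfg reduce.sh
	./reduce.sh $* params.cfg
	touch $@
```

With that structure, `make -j8 summary.html` reduces up to eight months at once, and only the months whose inputs actually changed are redone.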
I used Make, inotify-tools, pdflatex (or similar), "xpdf -remote", and $EDITOR to get an almost-instantly-updated view of papers and my thesis, including all figures and illustrations.
When any of the dependencies of the final output changed, inotifywait noticed and kicked off a "make", and then notified the xpdf instance to refresh itself. xpdf was nice enough to try to stay on the same page when it did this.
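The watch loop is only a few lines of shell. A sketch, assuming inotify-tools and xpdf are installed and hedging on the exact file names (thesis.tex, figures/):

```shell
#!/bin/sh
# Open the viewer once under a remote-control name.
xpdf -remote thesis thesis.pdf &

# Block until a source file is written, rebuild, then tell the
# existing xpdf instance to reload in place.
while inotifywait -e close_write thesis.tex figures/ ; do
    make thesis.pdf && xpdf -remote thesis -reload
done
```

Because `-reload` talks to the already-running instance, the viewer keeps its position instead of reopening at page one.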