-
When one has an executable that works on some file, an obvious way to do causal profiling is to add a progress point just before the end of the program and loop the execution via some other mechanism (for example, a bash script). For long-running programs this means coz takes longer to learn anything about the program's behavior. If the algorithm has this structure:
one can insert a progress point in the loop and accept that, with enough samples, coz can estimate how important speeding up certain components is for the complete loop.
However, there are also programs of this type:
Moving the progress points into either while loop means the number of progress points per complete run is no longer fixed, creating "perverse incentives" such as:
How would one go about using coz for programs like this?
-
I don't exactly understand the issue, which might be based on a misunderstanding. Coz works by identifying the effect of (virtual) speedups on the rate of execution of progress points. There's no "incentive" per se; it just measures the effect of a speedup on the progress point (or, for latency, on the time between the start and end progress points). The total number of progress points executed per run is immaterial, except that you'd rather have more than fewer so you don't have to run the code as long or run it too many times.
-
Since I can't get the tooling working, I wrote up a simulation of the effect I mean.
For each array of snapshots, I process them as follows: