Randomization scares me a bit, but I guess one can run several copies in parallel to get a better estimate. I like how you can easily read off the granularity of the estimate after stopping: 2^k is the increment size after the kth round.

I wonder what the error distribution looks like, and how probable it is that the error stays within 2^k. Maybe I should read the article after all 😅

Thank you for the excerpt.

Edit: it looks like if we have an (ε, δ)-approximation of the data distribution, the error would be less than δ/4.
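The "run several copies in parallel" idea can be sketched with a Morris-style approximate counter, which matches the 2^k-granularity behaviour described above (this is my own illustrative assumption, not necessarily the exact algorithm from the article): each copy keeps a small register c, increments it with probability 2^-c per event, and estimates the count as 2^c - 1; averaging independent copies tightens the estimate.

```python
import random

def morris_estimate(stream_len, rng=random.random):
    """One Morris-style counter: bump register c with probability
    2**-c per observed event; the estimate 2**c - 1 is unbiased,
    with granularity 2**c after c increments (the '2^k per round')."""
    c = 0
    for _ in range(stream_len):
        if rng() < 2.0 ** -c:
            c += 1
    return (2 ** c) - 1

def averaged_estimate(stream_len, copies=64):
    # Averaging independent copies reduces the variance of the
    # estimate by a factor of `copies` (standard mean-of-estimators).
    return sum(morris_estimate(stream_len) for _ in range(copies)) / copies
```

A single counter is very noisy (its standard deviation is on the order of the true count), so the averaging step is what makes the estimate usable in practice.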