This repository has been archived by the owner on Apr 23, 2018. It is now read-only.

Use random data to reduce chance of compression impacting results #53

Open
pstoll opened this issue Sep 15, 2016 · 0 comments



pstoll commented Sep 15, 2016

Currently this package generates its payload from a very small, fixed set of data. If any compression happens along the path (e.g. a web server in front of this that does gzip compression), the content will compress extremely well, which works against an accurate characterization of bandwidth or latency.

To combat this and make the bandwidth estimate more robust, the returned payload should consist of random data.

Consider an approach along these lines:
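A minimal sketch of the idea, assuming Python (the `make_payload` helper and the sizes are hypothetical, not from this project): random bytes from `os.urandom` barely compress, while a fixed repetitive payload shrinks drastically, so measured transfer size stays close to the bytes actually sent.

```python
import os
import zlib

def make_payload(size: int) -> bytes:
    """Return `size` bytes of random data, which resists compression."""
    return os.urandom(size)

# Compare a repetitive fixed payload against a random one of the same size.
fixed = b"0123456789" * 6554          # ~64 KiB of highly repetitive data
random_data = make_payload(len(fixed))

fixed_ratio = len(zlib.compress(fixed)) / len(fixed)
random_ratio = len(zlib.compress(random_data)) / len(random_data)
# fixed_ratio is tiny (the data is highly repetitive);
# random_ratio stays near 1.0, so compression cannot skew the measurement.
```

Generating the random block once at startup and reusing it would avoid paying the `os.urandom` cost on every request, at the risk of an aggressive cache recognizing repeated responses.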

Let me know if this is of interest and I can pull together a PR.
