Fix typos using codespell (#1288)
ianthomas23 authored Oct 12, 2023
1 parent 8221dd0 commit b0e9a32
Showing 16 changed files with 66 additions and 59 deletions.
CHANGELOG.rst (2 changes: 1 addition & 1 deletion)
@@ -724,7 +724,7 @@ Minor bugfix release to support Bokeh 0.12:
Version 0.3.0 (2016-06-23)
--------------------------

-The major feature of this release is support of raster data via ``Canvas.raster``. To use this feature, you must install the optional dependencies via ``conda install rasterio scikit-image``. Rasterio relies on ``gdal`` whose conda package has some known bugs, including a missing dependancy for ``conda install krb5``. InteractiveImage in this release requires bokeh 0.11.1 or earlier, and will not work with bokeh 0.12.
+The major feature of this release is support of raster data via ``Canvas.raster``. To use this feature, you must install the optional dependencies via ``conda install rasterio scikit-image``. Rasterio relies on ``gdal`` whose conda package has some known bugs, including a missing dependency for ``conda install krb5``. InteractiveImage in this release requires bokeh 0.11.1 or earlier, and will not work with bokeh 0.12.

- **PR #160 #187** Improved example notebooks and dashboard
- **PR #186 #184 #178** Add datashader-download-data cli command for grabbing example datasets
datashader/colors.py (2 changes: 1 addition & 1 deletion)
@@ -133,7 +133,7 @@ def rgb(x):
# Adapted from matplotlib.cm.hot to be more uniform at the high end
Hot = ["black", "maroon", "darkred", "red", "orangered", "darkorange", "orange", "gold", "yellow", "white"]

-# pseudo terrestial elevation ramp
+# pseudo terrestrial elevation ramp
Elevation = ["aqua", "sandybrown", "limegreen", "green", "green", "darkgreen", "saddlebrown", "gray", "white"]

# Qualitative color maps, for use in colorizing categories
datashader/datashape/discovery.py (2 changes: 1 addition & 1 deletion)
@@ -58,7 +58,7 @@ def discover(obj, **kwargs):
warn(
dedent(
"""\
-array-like discovery is deperecated.
+array-like discovery is deprecated.
Please write an explicit discover function for type '%s'.
""" % type_name,
),
datashader/datashape/type_symbol_table.py (2 changes: 1 addition & 1 deletion)
@@ -22,7 +22,7 @@ def _complex(tp):
return ct.complex_float64
else:
raise TypeError(
-'Cannot contruct a complex type with real component %s' % tp)
+'Cannot construct a complex type with real component %s' % tp)


def _struct(names, dshapes):
datashader/datatypes.py (6 changes: 3 additions & 3 deletions)
@@ -33,7 +33,7 @@ def _validate_ragged_properties(start_indices, flat_array):
flat_array: numpy array containing concatenation
of all nested arrays to be represented
by this ragged array
-start_indices: unsiged integer numpy array the same
+start_indices: unsigned integer numpy array the same
length as the ragged array where values
represent the index into flat_array where
the corresponding ragged array element
@@ -231,7 +231,7 @@ def __init__(self, data, dtype=None, copy=False):
- flat_array: numpy array containing concatenation
of all nested arrays to be represented
by this ragged array
-- start_indices: unsiged integer numpy array the same
+- start_indices: unsigned integer numpy array the same
length as the ragged array where values
represent the index into flat_array where
the corresponding ragged array element
@@ -385,7 +385,7 @@ def flat_array(self):
@property
def start_indices(self):
"""
-unsiged integer numpy array the same length as the ragged array where
+unsigned integer numpy array the same length as the ragged array where
values represent the index into flat_array where the corresponding
ragged array element begins
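The flat_array/start_indices layout these docstrings describe can be made concrete with a minimal sketch; the sample values below are made up for illustration:

```python
import numpy as np

# Three ragged elements [1, 2], [], [3, 4, 5] stored as one flat array plus
# an unsigned-integer start index per element, as the docstring describes.
flat_array = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
start_indices = np.array([0, 2, 2], dtype="uint16")

# Element i spans flat_array[start_indices[i]:start_indices[i + 1]];
# the final element runs to the end of flat_array.
bounds = np.append(start_indices, len(flat_array))
elements = [flat_array[bounds[i]:bounds[i + 1]]
            for i in range(len(start_indices))]
# elements -> [array([1., 2.]), array([], dtype=float64), array([3., 4., 5.])]
```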
datashader/reductions.py (6 changes: 3 additions & 3 deletions)
@@ -480,7 +480,7 @@ def _finalize(bases, cuda=False, **kwargs):
class SelfIntersectingOptionalFieldReduction(OptionalFieldReduction):
"""
Base class for optional field reductions for which self-intersecting
-geometry may or may not be desireable.
+geometry may or may not be desirable.
Ignored if not using antialiasing.
"""
def __init__(self, column=None, self_intersect=True):
@@ -946,8 +946,8 @@ def _combine(aggs):

class SelfIntersectingFloatingReduction(FloatingReduction):
"""
-Base class fo floating reductions for which self-intersecting geometry
-may or may not be desireable.
+Base class for floating reductions for which self-intersecting geometry
+may or may not be desirable.
Ignored if not using antialiasing.
"""
def __init__(self, column=None, self_intersect=True):
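These base classes surface as a `self_intersect` flag on concrete reductions. A hedged usage sketch, assuming `ds.count` is one such reduction and using the antialiased `Canvas.line` API (`line_width > 0`) that appears elsewhere in this commit:

```python
import pandas as pd
import datashader as ds

# A line that crosses itself; with antialiasing enabled (line_width > 0),
# self_intersect=False keeps the crossing from being counted twice.
df = pd.DataFrame({"x": [0.0, 1.0, 0.0, 1.0], "y": [0.0, 1.0, 1.0, 0.0]})
cvs = ds.Canvas(plot_width=100, plot_height=100)
agg = cvs.line(df, "x", "y", line_width=1, agg=ds.count(self_intersect=False))
```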
datashader/tests/test_antialias.py (2 changes: 1 addition & 1 deletion)
@@ -75,7 +75,7 @@
# line whereas for 006 it is a multi-segment line, and each vertex is listed
# only a single time. Datashader then "connects the dots" as it were.
#
-# Test 007 tests the edge case, where we draw an almost staright line between
+# Test 007 tests the edge case, where we draw an almost straight line between
# corners with only a single pixel offset. This is to ensure that anti-aliasing
# does not try to draw pixels that are out of bounds. Importantly, this needs
# to be run with Numba disabled, since Numba does not do OOB checking by
datashader/tests/test_datatypes.py (4 changes: 2 additions & 2 deletions)
@@ -115,7 +115,7 @@ def test_validate_ragged_array_fastpath():
RaggedArray(dict(valid_dict, start_indices=25))
ve.match('start_indices property of a RaggedArray')

-# not unsiged int
+# not unsigned int
with pytest.raises(ValueError) as ve:
RaggedArray(dict(valid_dict,
start_indices=start_indices.astype('float32')))
@@ -148,7 +148,7 @@ def test_validate_ragged_array_fastpath():


def test_start_indices_dtype():
-# The start_indices dtype should be an unsiged int that is only as large
+# The start_indices dtype should be an unsigned int that is only as large
# as needed to handle the length of the flat array

# Empty
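One way to express the rule this comment states, as an illustrative sketch (not necessarily the library's exact implementation):

```python
import numpy as np

def smallest_uint_dtype(flat_length):
    # Pick the smallest unsigned integer dtype able to index every
    # position in a flat array of the given length.
    for dtype in (np.uint8, np.uint16, np.uint32, np.uint64):
        if flat_length <= np.iinfo(dtype).max:
            return np.dtype(dtype)

assert smallest_uint_dtype(255) == np.dtype("uint8")
assert smallest_uint_dtype(70_000) == np.dtype("uint32")
```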
datashader/transfer_functions/__init__.py (2 changes: 1 addition & 1 deletion)
@@ -672,7 +672,7 @@ def shade(agg, cmap=["lightblue", "darkblue"], color_key=Sets1to3,
in proportion to how much each category contributes to the
final sum. However, if values can be negative or if they are
on an interval scale where values e.g. twice as far from zero
-are not twice as high (such as temperature in Farenheit), then
+are not twice as high (such as temperature in Fahrenheit), then
you will need to provide a suitable baseline value for use in
calculating color mixing. A value of None (the default) means
to take the minimum across the entire aggregate array, which
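This docstring passage is about supplying a baseline for categorical color mixing; a hedged sketch, assuming the parameter is `color_baseline` and using a made-up Fahrenheit-style aggregate:

```python
import numpy as np
import xarray as xr
from datashader import transfer_functions as tf

# Made-up per-category temperature aggregate on an interval (Fahrenheit-like)
# scale, where the default baseline (the array minimum) would distort mixing.
agg = xr.DataArray(
    np.array([[[40.0, 60.0], [32.0, 90.0]],
              [[55.0, 33.0], [70.0, 45.0]]]),
    dims=("y", "x", "cat"),
    coords={"y": [0, 1], "x": [0, 1], "cat": ["warm", "cool"]},
)
img = tf.shade(agg, color_key={"warm": "red", "cool": "blue"},
               color_baseline=32.0)
```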
examples/README.md (14 changes: 7 additions & 7 deletions)
@@ -2,16 +2,16 @@

The best way to understand how Datashader works is to try out our
extensive set of examples. [Datashader.org](http://datashader.org)
-includes static versions of the 
-[getting started guide](http://datashader.org/getting_started), 
+includes static versions of the
+[getting started guide](http://datashader.org/getting_started),
[user manual](http://datashader.org/user_guide), and
[topic examples](http://datashader.org/topics), but for the full
experience with dynamic updating you will need to install them on a
-live server. 
+live server.

-These instructions assume you are using 
-[conda](https://conda.io/docs/install/quick.html), but they can be 
-adapted as needed to use [pip](https://pip.pypa.io/en/stable/installing/) 
+These instructions assume you are using
+[conda](https://conda.io/docs/install/quick.html), but they can be
+adapted as needed to use [pip](https://pip.pypa.io/en/stable/installing/)
and [virtualenv](https://virtualenv.pypa.io) if desired.

To get started, first go to your home directory and
@@ -71,7 +71,7 @@ jupyter notebook

If you want the generated notebooks to work without an internet connection or
with an unreliable connection (e.g. if you see `Loading BokehJS ...` but never
-`BokehJS sucessfully loaded`), then restart the Jupyter notebook server using:
+`BokehJS successfully loaded`), then restart the Jupyter notebook server using:

```
BOKEH_RESOURCES=inline jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000
examples/filetimes.py (2 changes: 1 addition & 1 deletion)
@@ -25,7 +25,7 @@
from datashader import transfer_functions as tf

#from multiprocessing.pool import ThreadPool
-#dask.set_options(pool=ThreadPool(3)) # select a pecific number of threads
+#dask.set_options(pool=ThreadPool(3)) # select a specific number of threads
from dask import distributed

# Toggled by command-line arguments
examples/tiling.ipynb (8 changes: 4 additions & 4 deletions)
@@ -90,7 +90,7 @@
" xs = np.concatenate([np.random.wald(10000000, 10000000, size=10000000) * offset for offset in xoffsets])\n",
" ys = np.concatenate([np.random.wald(10000000, 10000000, size=10000000) * offset for offset in yoffsets])\n",
" df = pd.DataFrame(dict(x=xs, y=ys))\n",
" \n",
"\n",
" return df.loc[df['x'].between(*x_range) & df['y'].between(*y_range)]"
]
},
@@ -152,7 +152,7 @@
"metadata": {},
"source": [
"### Create `post_render_func`\n",
"- accepts `img `, `extras` arguments which correspond to the output PIL.Image before it is write to disk (or S3), and addtional image properties.\n",
"- accepts `img `, `extras` arguments which correspond to the output PIL.Image before it is write to disk (or S3), and additional image properties.\n",
"- returns image `(PIL.Image)`\n",
"- this is a good place to run any non-datashader-specific logic on each output tile."
]
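A minimal `post_render_func` matching the signature this cell describes; the watermark is made-up, non-datashader logic applied per tile:

```python
from PIL import Image, ImageDraw

def post_render_func(img: Image.Image, extras: dict) -> Image.Image:
    # Stamp each tile before it is written to disk (or S3).
    draw = ImageDraw.Draw(img)
    draw.text((5, 5), "example watermark", fill="white")
    return img

tile = post_render_func(Image.new("RGBA", (256, 256)), extras={})
```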
@@ -238,7 +238,7 @@
"\n",
"xmin, ymin, xmax, ymax = full_extent_of_data\n",
"\n",
"p = figure(width=800, height=800, \n",
"p = figure(width=800, height=800,\n",
" x_range=(int(-20e6), int(20e6)),\n",
" y_range=(int(-20e6), int(20e6)),\n",
" tools=\"pan,wheel_zoom,reset\")\n",
@@ -338,7 +338,7 @@
"source": [
"xmin, ymin, xmax, ymax = full_extent_of_data\n",
"\n",
"p = figure(width=800, height=800, \n",
"p = figure(width=800, height=800,\n",
" x_range=(int(-20e6), int(20e6)),\n",
" y_range=(int(-20e6), int(20e6)),\n",
" tools=\"pan,wheel_zoom,reset\")\n",
examples/user_guide/1_Plotting_Pitfalls.ipynb (16 changes: 8 additions & 8 deletions)
@@ -164,7 +164,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, it is very difficult to find settings for the dotsize and alpha parameters that correctly reveal the data, even for relatively small and obvious datasets like these. With larger datasets with unknown contents, it is difficult to detect that such problems are occuring, leading to false conclusions based on inappropriately visualized data.\n",
"As you can see, it is very difficult to find settings for the dotsize and alpha parameters that correctly reveal the data, even for relatively small and obvious datasets like these. With larger datasets with unknown contents, it is difficult to detect that such problems are occurring, leading to false conclusions based on inappropriately visualized data.\n",
"\n",
"### 3. Undersampling\n",
"\n",
@@ -187,7 +187,7 @@
" np.random.seed(1)\n",
" dists = [(np.random.normal(x,s,num), np.random.normal(y,s,num)) for x,y,s in specs]\n",
" return np.hstack([d[0] for d in dists]), np.hstack([d[1] for d in dists])\n",
" \n",
"\n",
"points = (hv.Points(gaussians(num=600), label=\"600 points\", group=\"Small dots\") +\n",
" hv.Points(gaussians(num=60000), label=\"60000 points\", group=\"Small dots\") +\n",
" hv.Points(gaussians(num=600), label=\"600 points\", group=\"Tiny dots\") +\n",
@@ -221,7 +221,7 @@
" \"\"\"\n",
" Given a set of coordinates, bins them into a 2d histogram grid\n",
" of the specified size, and optionally transforms the counts\n",
" and/or compresses them into a visible range starting at a \n",
" and/or compresses them into a visible range starting at a\n",
" specified offset between 0 and 1.0.\n",
" \"\"\"\n",
" hist,xs,ys = np.histogram2d(coords[0], coords[1], bins=bins)\n",
@@ -354,7 +354,7 @@
"except ImportError:\n",
" eq_hist = lambda d,m: d\n",
" print(\"scikit-image not installed; skipping histogram equalization\")\n",
" \n",
"\n",
"hv.Layout([heatmap(dist,bins,transform=eq_hist) for bins in [8,20,200]])"
]
},
@@ -410,10 +410,10 @@
"metadata": {},
"outputs": [],
"source": [
"layout = (hv.Points(dist,label=\"1. Overplotting\") + \n",
" hv.Points(dist,label=\"2. Oversaturation\").opts(s=0.1,alpha=0.5) + \n",
" hv.Points((dist[0][::200],dist[1][::200]),label=\"3. Undersampling\").opts(s=2,alpha=0.5) + \n",
" hv.Points(dist,label=\"4. Undersaturation\").opts(s=0.01,alpha=0.05) + \n",
"layout = (hv.Points(dist,label=\"1. Overplotting\") +\n",
" hv.Points(dist,label=\"2. Oversaturation\").opts(s=0.1,alpha=0.5) +\n",
" hv.Points((dist[0][::200],dist[1][::200]),label=\"3. Undersampling\").opts(s=2,alpha=0.5) +\n",
" hv.Points(dist,label=\"4. Undersaturation\").opts(s=0.01,alpha=0.05) +\n",
" heatmap(dist,200,offset=0.2,label=\"5. Underutilized dynamic range\") +\n",
" heatmap(dist,200,transform=eq_hist,label=\"6. Nonuniform colormapping\").opts(cmap=\"hot\"))\n",
"\n",
examples/user_guide/3_Timeseries.ipynb (8 changes: 4 additions & 4 deletions)
@@ -51,7 +51,7 @@
"noise = lambda var, bias, n: np.random.normal(bias, var, n)\n",
"data = {c: signal + noise(1, 10*(np.random.random() - 0.5), n) for c in cols}\n",
"\n",
"# Add some \"rogue lines\" that differ from the rest \n",
"# Add some \"rogue lines\" that differ from the rest\n",
"cols += ['x'] ; data['x'] = signal + np.random.normal(0, 0.02, size=n).cumsum() # Gradually diverges\n",
"cols += ['y'] ; data['y'] = signal + noise(1, 20*(np.random.random() - 0.5), n) # Much noisier\n",
"cols += ['z'] ; data['z'] = signal # No noise at all\n",
@@ -174,7 +174,7 @@
"metadata": {},
"outputs": [],
"source": [
"cvs2 = ds.Canvas(x_range=(12879023 * 1E11, 12879070 * 1E11), \n",
"cvs2 = ds.Canvas(x_range=(12879023 * 1E11, 12879070 * 1E11),\n",
" y_range=(37, 50), plot_height=200, plot_width=500)\n",
"\n",
"w0 = tf.shade(cvs2.line(df, 'ITime', 'a', line_width=0), name=\"line_width 0\")\n",
@@ -425,7 +425,7 @@
" np.random.normal(0, 0.3, size=n).cumsum() + 50,\n",
" np.random.normal(0, 0.3, size=n).cumsum() + 50]\n",
"data = {c: signals[i%3] + noise(1+i, 5*(np.random.random() - 0.5), n) for (i,c) in enumerate(cols)}\n",
"y_range = (1.2*min([s.min() for s in signals]), 1.2*max([s.max() for s in signals])) \n",
"y_range = (1.2*min([s.min() for s in signals]), 1.2*max([s.max() for s in signals]))\n",
"\n",
"data['Time'] = df['Time']\n",
"dfm = pd.DataFrame(data)"
@@ -525,7 +525,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, each line represents an independent trial of this random walk process. All lines start from the same point at time 0 (all the way to the left). At each subsquent time step, each line moves upward or downward from its prior position by a distance drawn from a normal distribution. Thanks to the nonlinear `eq-hist` colorization, you can see the dispersion in the density of the overall distribution as time advances, at the same time as you can see the individual outliers at the extremes of the distribution. You'll see a similar plot for 1,000,000 or 10,000,000 curves, and much more interesting plots if you have real data to show!"
"Here, each line represents an independent trial of this random walk process. All lines start from the same point at time 0 (all the way to the left). At each subsequent time step, each line moves upward or downward from its prior position by a distance drawn from a normal distribution. Thanks to the nonlinear `eq-hist` colorization, you can see the dispersion in the density of the overall distribution as time advances, at the same time as you can see the individual outliers at the extremes of the distribution. You'll see a similar plot for 1,000,000 or 10,000,000 curves, and much more interesting plots if you have real data to show!"
]
}
],
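A short sketch of the random-walk generation the cell above describes (array sizes are illustrative):

```python
import numpy as np
import pandas as pd

n_steps, n_curves = 1_000, 100
steps = np.random.normal(0, 0.3, size=(n_steps, n_curves))
steps[0] = 0                                 # every trial starts at the same point
walks = pd.DataFrame(steps.cumsum(axis=0))   # one column per independent trial
```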
