Deploying to gh-pages from @ 5d9d660 🚀
holl- committed Aug 3, 2024
1 parent 83514ef commit c9ecbfb
Showing 10 changed files with 85 additions and 85 deletions.
6 changes: 3 additions & 3 deletions Advantages_Data_Types.html
@@ -15151,10 +15151,10 @@ <h1 id="Why-%CE%A6ML-has-Precision-Management">Why &#934;<sub>ML</sub> has Preci


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-08-03 18:17:24.645775: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
-2024-08-03 18:17:24.683566: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+<pre>2024-08-03 18:51:20.373509: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
+2024-08-03 18:51:20.411625: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2024-08-03 18:17:25.451024: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2024-08-03 18:51:21.176404: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
2 changes: 1 addition & 1 deletion Convert.html
@@ -15617,7 +15617,7 @@ <h2 id="Converting-Tensors">Converting Tensors<a class="anchor-link" href="#Conv


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-08-03 18:17:37.023984: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-08-03 18:51:32.753942: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
2 changes: 1 addition & 1 deletion Examples.html
@@ -15215,7 +15215,7 @@ <h3 id="Training-an-MLP">Training an MLP<a class="anchor-link" href="#Training-a


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-08-03 18:17:52.619976: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-08-03 18:51:48.070782: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
2 changes: 1 addition & 1 deletion Introduction.html
@@ -15349,7 +15349,7 @@ <h2 id="Usage-without-%CE%A6ML's-Tensors">Usage without &#934;<sub>ML</sub>'s Te


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-08-03 18:18:09.812794: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-08-03 18:52:04.768523: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
14 changes: 7 additions & 7 deletions Linear_Solves.html
@@ -16021,13 +16021,13 @@ <h2 id="Obtaining-Additional-Information-about-a-Solve">Obtaining Additional Inf


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>factor_ilu: auto-selecting iterations=1 (eager mode) for matrix <span class="ansi-blue-intense-fg">(2.000, 0.000)</span>; <span class="ansi-blue-intense-fg">(0.000, 1.000)</span> <span class="ansi-green-intense-fg">(b_vecᶜ=2, ~b_vecᵈ=2)</span> (DEBUG), 2024-08-03 18:18:25,561n
+<pre>factor_ilu: auto-selecting iterations=1 (eager mode) for matrix <span class="ansi-blue-intense-fg">(2.000, 0.000)</span>; <span class="ansi-blue-intense-fg">(0.000, 1.000)</span> <span class="ansi-green-intense-fg">(b_vecᶜ=2, ~b_vecᵈ=2)</span> (DEBUG), 2024-08-03 18:52:19,958n

-TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False)] (DEBUG), 2024-08-03 18:18:25,576n
+TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False)] (DEBUG), 2024-08-03 18:52:19,973n

-Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;,) containing 1 native tensors (DEBUG), 2024-08-03 18:18:25,576n
+Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;,) containing 1 native tensors (DEBUG), 2024-08-03 18:52:19,973n

-Performing linear solve scipy-CG with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-08-03 18:18:25,580n
+Performing linear solve scipy-CG with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-08-03 18:52:19,977n

</pre>
</div>
@@ -16155,11 +16155,11 @@ <h2 id="Linear-Solves-with-Native-Tensors">Linear Solves with Native Tensors<a c


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False), ((2, 2), False)] (DEBUG), 2024-08-03 18:18:25,603n
+<pre>TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False), ((2, 2), False)] (DEBUG), 2024-08-03 18:52:20,000n

-Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;, &#39;matrix&#39;) containing 2 native tensors (DEBUG), 2024-08-03 18:18:25,604n
+Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;, &#39;matrix&#39;) containing 2 native tensors (DEBUG), 2024-08-03 18:52:20,000n

-Performing linear solve auto with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-08-03 18:18:25,608n
+Performing linear solve auto with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-08-03 18:52:20,004n

</pre>
</div>
18 changes: 9 additions & 9 deletions N_Dimensional.html
@@ -15250,7 +15250,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-blue-intense-fg">(0.656, 0.322, 0.769, 0.413, 0.392)</span> along <span class="ansi-green-intense-fg">xˢ</span></pre>
<pre><span class="ansi-blue-intense-fg">(0.360, 0.426, 0.706, 0.334, 0.464)</span> along <span class="ansi-green-intense-fg">xˢ</span></pre>
</div>

</div>
@@ -15289,7 +15289,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-blue-intense-fg">0.482 ± 0.110</span> <span class="ansi-white-fg">(3e-01...7e-01)</span></pre>
<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-blue-intense-fg">0.619 ± 0.056</span> <span class="ansi-white-fg">(5e-01...7e-01)</span></pre>
</div>

</div>
@@ -15328,7 +15328,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-blue-intense-fg">0.493 ± 0.119</span> <span class="ansi-white-fg">(1e-01...9e-01)</span></pre>
<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-blue-intense-fg">0.502 ± 0.119</span> <span class="ansi-white-fg">(1e-01...9e-01)</span></pre>
</div>

</div>
@@ -15479,7 +15479,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-blue-intense-fg">((2.55192+0j), (-0.25910917-0.45941594j), (-0.54797715-0.46981138j), (-0.54797715+0.46981138j), (-0.25910917+0.45941594j))</span> along <span class="ansi-green-intense-fg">xˢ</span> <span class="ansi-yellow-intense-fg">complex64</span></pre>
<pre><span class="ansi-blue-intense-fg">((2.2901797+0j), (-0.6668361-0.5918753j), (0.04670691-0.46417135j), (0.04670691+0.46417135j), (-0.6668361+0.5918753j))</span> along <span class="ansi-green-intense-fg">xˢ</span> <span class="ansi-yellow-intense-fg">complex64</span></pre>
</div>

</div>
@@ -15518,7 +15518,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 4.33687686920166</span></pre>
<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 5.570593357086182</span></pre>
</div>

</div>
@@ -15557,7 +15557,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 2019.8148193359375</span></pre>
<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 2054.177978515625</span></pre>
</div>

</div>
@@ -15672,7 +15672,7 @@ <h2 id="Dimensions-as-Components">Dimensions as Components<a class="anchor-link"


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.327 ± 0.303</span> <span class="ansi-white-fg">(0e+00...9e-01)</span></pre>
<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.376 ± 0.323</span> <span class="ansi-white-fg">(0e+00...8e-01)</span></pre>
</div>

</div>
@@ -15711,7 +15711,7 @@ <h2 id="Dimensions-as-Components">Dimensions as Components<a class="anchor-link"


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.307 ± 0.219</span> <span class="ansi-white-fg">(0e+00...7e-01)</span></pre>
<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.293 ± 0.239</span> <span class="ansi-white-fg">(0e+00...7e-01)</span></pre>
</div>

</div>
@@ -15750,7 +15750,7 @@ <h2 id="Dimensions-as-Components">Dimensions as Components<a class="anchor-link"


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.535 ± 0.350</span> <span class="ansi-white-fg">(0e+00...1e+00)</span></pre>
<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.544 ± 0.347</span> <span class="ansi-white-fg">(0e+00...1e+00)</span></pre>
</div>

</div>
16 changes: 8 additions & 8 deletions Networks.html

Large diffs are not rendered by default.

34 changes: 17 additions & 17 deletions Performance.html
@@ -15212,8 +15212,8 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>Φ-ML + torch JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.21197349</span>
-Φ-ML + torch execution average: 0.03549956902861595 +- 0.004727165214717388
+<pre>Φ-ML + torch JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.20342584</span>
+Φ-ML + torch execution average: 0.03442099690437317 +- 0.0051096719689667225
</pre>
</div>
</div>
@@ -15233,8 +15233,8 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>Φ-ML + jax JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.1617278</span>
-Φ-ML + jax execution average: 0.012110414914786816 +- 0.0008309065597131848
+<pre>Φ-ML + jax JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.1580026</span>
+Φ-ML + jax execution average: 0.011942562647163868 +- 0.0009103829506784678
</pre>
</div>
</div>
@@ -15244,7 +15244,7 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-08-03 18:19:04.275435: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-08-03 18:52:57.862299: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
@@ -15254,8 +15254,8 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>Φ-ML + tensorflow JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">13.906091</span>
-Φ-ML + tensorflow execution average: 0.053461670875549316 +- 0.00282722688280046
+<pre>Φ-ML + tensorflow JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">13.694691</span>
+Φ-ML + tensorflow execution average: 0.05257116258144379 +- 0.0016889951657503843
</pre>
</div>
</div>
@@ -15361,8 +15361,8 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>jax JIT compilation: 0.1335396100000139
-jax execution average: 0.010147904414141366
+<pre>jax JIT compilation: 0.13146938599999203
+jax execution average: 0.010196452373737267
</pre>
</div>
</div>
@@ -15443,11 +15443,11 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>/tmp/ipykernel_2746/3571425526.py:12: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
+<pre>/tmp/ipykernel_2692/3571425526.py:12: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
dist = torch.sqrt(torch.maximum(torch.sum(deltas ** 2, -1), torch.tensor(1e-4))) # eps=1e-4 to avoid NaN during backprop of sqrt
-/tmp/ipykernel_2746/3571425526.py:20: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
+/tmp/ipykernel_2692/3571425526.py:20: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
x_inc_contrib = torch.sum(torch.where(has_impact.unsqueeze(-1), torch.minimum(impact_time.unsqueeze(-1) - dt, torch.tensor(0.0)) * impulse, torch.tensor(0.0)), -2)
-/tmp/ipykernel_2746/3571425526.py:22: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
+/tmp/ipykernel_2692/3571425526.py:22: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
v += torch.sum(torch.where(has_impact.unsqueeze(-1), impulse, torch.tensor(0.0)), -2)
</pre>
</div>
@@ -15458,8 +15458,8 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>torch JIT compilation: 0.036969639360904694
-torch execution average: 0.03337471932172775
+<pre>torch JIT compilation: 0.036141786724328995
+torch execution average: 0.03381171450018883
</pre>
</div>
</div>
@@ -15469,7 +15469,7 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>/tmp/ipykernel_2746/3571425526.py:45: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
+<pre>/tmp/ipykernel_2692/3571425526.py:45: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
print(f&#34;torch execution average: {torch.mean(torch.tensor(dt_torch[2:]))}&#34;)
</pre>
</div>
@@ -15545,8 +15545,8 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>tensorflow JIT compilation: 0.37863877415657043
-tensorflow execution average: 0.03856977820396423
+<pre>tensorflow JIT compilation: 0.20972128212451935
+tensorflow execution average: 0.03771407902240753
</pre>
</div>
</div>

0 comments on commit c9ecbfb