Update README.md
tr7200 authored Apr 11, 2021
1 parent e83ffe1 · commit d2ae430
Showing 1 changed file (README.md) with 1 addition and 1 deletion.
@@ -14,7 +14,7 @@ In [arXiv 2008.13038](https://arxiv.org/abs/2008.13038), I take that structural

Why? Considering the intrinsic properties of the causal Bayesian neural network, the process of tuning such a network can reveal much about the relationship between the features and firm performance. If hyperparameter tuning chooses layer densities wider than the number of features in a particular node, that indicates greater aleatoric uncertainty, and prediction might improve with the addition of other features. In domains such as alternative data for investments and [other areas](https://youtu.be/DEHqIxX1Kq4) [of finance](https://youtu.be/LlzVlqVzeD8), this can surface causal indicators that merit additional research.
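
To make that intrinsic check concrete, here is a minimal sketch of the tuning heuristic using KerasTuner and TensorFlow Probability. The feature count, search range, and layer choices below are illustrative assumptions, not settings from the paper:

```python
# Hypothetical sketch: tune a Bayesian layer's width, then compare the
# chosen width to the number of input features. All sizes are placeholders.
import keras_tuner as kt
import tensorflow as tf
import tensorflow_probability as tfp

N_FEATURES = 24  # placeholder: number of indicator features at this node


def build_model(hp):
    """Small Bayesian regression network with a tunable hidden width."""
    width = hp.Int("width", min_value=8, max_value=64, step=8)
    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(N_FEATURES,)),
        # Flipout layers carry weight posteriors; Keras adds their
        # Kullback-Leibler divergence terms to the training loss.
        tfp.layers.DenseFlipout(width, activation="relu"),
        tfp.layers.DenseFlipout(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=20)
# After tuner.search(x_train, y_train, validation_data=(x_val, y_val)):
#   best = tuner.get_best_hyperparameters(1)[0]
#   if best.get("width") > N_FEATURES:
#       the tuned layer is wider than the feature set, which by the
#       argument above points to aleatoric uncertainty and a case for
#       gathering additional features rather than adding model capacity.
```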

- Extrinsically, the neural network produces a prediction interval whose upper and lower bounds estimate the amount of uncertainty in its predictions. This research shows that removing the SEM (structural equation model) node with the weakest causal connection to firm performance shortens the time needed to find those estimates when uncertainty is added to the neural network using the Kullback-Leibler divergence. This opens up the possibility of estimating the SEM after each epoch, much as [Dropout](https://patents.google.com/patent/US9406017B2/en) works in neural networks, but applied to nodes/edges instead:
+ Extrinsically, the neural network produces a prediction interval whose upper and lower bounds estimate the amount of uncertainty in its predictions. This research shows that removing the SEM (structural equation model) node with the weakest causal connection to firm performance shortens the time needed to find that estimate when uncertainty is added to the neural network using the Kullback-Leibler divergence. This opens up the possibility of estimating the SEM after each epoch, much as [Dropout](https://patents.google.com/patent/US9406017B2/en) works in neural networks, but applied to nodes/edges instead:

![https://mir-s3-cdn-cf.behance.net/projects/404/11278773.54812a20214b7.jpg](https://mir-s3-cdn-cf.behance.net/projects/404/11278773.54812a20214b7.jpg)
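
On the extrinsic side, a hedged sketch of how such upper and lower bounds can be read off a Flipout network by Monte Carlo sampling over its weight posteriors. The sample count and 95% level are assumptions, and `model` and `x_test` stand in for the fitted network and held-out data:

```python
# Hypothetical sketch: empirical prediction interval from repeated
# stochastic forward passes through a KL-regularized Flipout network.
import numpy as np


def prediction_interval(model, x, n_samples=200, alpha=0.05):
    """Return (lower, upper) bounds of an empirical (1 - alpha) interval.

    Each forward pass through a DenseFlipout model resamples the weight
    posteriors, so the spread across passes reflects the uncertainty the
    Kullback-Leibler divergence term trained into the weights.
    """
    draws = np.stack([np.ravel(model(x).numpy()) for _ in range(n_samples)])
    lower = np.quantile(draws, alpha / 2, axis=0)
    upper = np.quantile(draws, 1 - alpha / 2, axis=0)
    return lower, upper


# Usage with placeholder names: lo, hi = prediction_interval(model, x_test)
```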

