A conditional generative adversarial network (cGAN) that reproduces path-traced, scene-dependent renderings with global illumination, conditioned on rendering image buffers such as the depth buffer, direct lighting, normal buffer, and albedo. The generator is a U-Net, an architecture commonly used for image segmentation, and the network is modeled after Manu Thomas and Angus Forbes's DeepIllumination (https://arxiv.org/abs/1710.09834). The path/ray tracer is implemented with VTK-m by Mark Kim (https://github.com/m-kim/raytracingtherestofyourlife) and follows the design described by Peter Shirley in "Ray Tracing: The Rest of Your Life".
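For illustration, the following is a minimal sketch (not the repository's exact network definition) of a pix2pix/DeepIllumination-style U-Net generator that maps the stacked conditional buffers (depth, direct lighting, normals, albedo) to a global-illumination image. The class name, channel counts, and number of encoder/decoder levels are assumptions chosen to keep the example short.

```python
import torch
import torch.nn as nn

def down(in_ch, out_ch):
    # Encoder block: stride-2 convolution halves the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up(in_ch, out_ch):
    # Decoder block: transposed convolution doubles the spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Maps stacked conditional buffers (e.g. depth + direct light + normals
    + albedo = 1 + 3 + 3 + 3 = 10 channels) to a 3-channel rendered image."""
    def __init__(self, in_channels=10, out_channels=3):
        super().__init__()
        self.d1 = down(in_channels, 64)
        self.d2 = down(64, 128)
        self.d3 = down(128, 256)
        self.u1 = up(256, 128)
        self.u2 = up(128 + 128, 64)  # skip connection from d2
        self.u3 = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # output in [-1, 1], assuming normalized targets
        )

    def forward(self, buffers):
        e1 = self.d1(buffers)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        x = self.u1(e3)
        x = self.u2(torch.cat([x, e2], dim=1))  # U-Net skip connections
        return self.u3(torch.cat([x, e1], dim=1))
```

The skip connections are what make the U-Net well suited here: low-level geometric detail from the buffers passes straight to the decoder, while the bottleneck only has to learn the indirect-lighting contribution.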
About
A conditional generative adversarial network trained on rendered images and conditional image buffers to automate path-traced imagery and global illumination, using VTK-m and PyTorch.
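A sketch of one training step under the usual pix2pix-style recipe that DeepIllumination follows: an adversarial loss from a discriminator that sees the conditional buffers concatenated with either the real or the generated render, plus an L1 reconstruction term. The discriminator layout, `lambda_l1` weight, and optimizer settings are assumptions, and `UNetGenerator` refers to the sketch above, not the repository's actual code.

```python
import torch
import torch.nn as nn

generator = UNetGenerator(in_channels=10, out_channels=3)  # assumed generator from the sketch above
discriminator = nn.Sequential(                             # small PatchGAN-style critic (assumed)
    nn.Conv2d(10 + 3, 64, 4, stride=2, padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 1, 4, stride=1, padding=1),              # per-patch real/fake logits
)

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
lambda_l1 = 100.0                                          # pix2pix-style weight (assumed)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(buffers, target):
    """One cGAN update: `buffers` holds the stacked conditional images,
    `target` the corresponding path-traced frame."""
    fake = generator(buffers)

    # Discriminator: condition concatenated with the real or the generated render.
    opt_d.zero_grad()
    d_real = discriminator(torch.cat([buffers, target], dim=1))
    d_fake = discriminator(torch.cat([buffers, fake.detach()], dim=1))
    loss_d = 0.5 * (adv_loss(d_real, torch.ones_like(d_real)) +
                    adv_loss(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the target (L1 term).
    opt_g.zero_grad()
    d_fake = discriminator(torch.cat([buffers, fake], dim=1))
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake, target)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```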