
Chess game animation in Blender

\newpage

Introduction

Project aims

This project aims to demonstrate a sufficient knowledge of computer graphics techniques and implementations through the creation of a visually appealing chess game animation tool. This was accomplished using Blender and its Python scripting API (Application Programming Interface).

This project utilises two tools to create the chess animation:

  • Blender
    Blender had a large appeal for this project due to its extensibility through Python (a minimal example is shown after this list). The exposed API allows users to script typical actions and develop add-ons in a familiar and standard environment. The scripting API allows the user to add and remove objects, insert key-frames, and change the properties of an object; anything a user can do with a mouse and keyboard can be configured programmatically. This allows Blender to be used as a front end to any Python or C++ program.
  • python-chess library
    The python-chess library is a chess library for Python with move validation, move generation, and PGN (Portable Game Notation, the most common file format for chess games) parsing. This library is designed to function as a back end, making it perfect to use in conjunction with Blender.
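
As a taste of that workflow, the hypothetical snippet below adds a sphere, moves it, and inserts two location key-frames; it uses only standard bpy operators and is illustrative rather than taken from the project code.

import bpy

# Add a UV sphere at the origin and grab a reference to it
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))
obj = bpy.context.active_object

# Key-frame its starting location on frame 1
obj.keyframe_insert(data_path="location", frame=1)

# Move it and key-frame the new location on frame 10;
# Blender interpolates the frames in between
obj.location = (1.0, 2.0, 0.0)
obj.keyframe_insert(data_path="location", frame=10)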

The plan

At the proposal stage, the plan was to create an interactive, real-time chess board; however, this was changed as the range of techniques and concepts it could demonstrate would be severely limited. Blender with Python scripting was a perfect compromise.

Blender implementation

Modelling, textures, and shading

The scaling of the models was deliberately chosen to be unrealistic in order to simplify the translation of positions from the back end to the front end (See Python implementation - Array index to world space).

Nodes

Blender nodes allow for the creation of textures and shaders through a pipeline of simple operations that combine to produce complex procedural results. Its simple-to-understand visual workflow is a popular alternative to layer-based compositing. cite:node-vs-layer

Throughout this project, procedural texture and shading generation using nodes was used instead of traditional texture wrapping with UV mapping, in order to give objects consistent and appealing surfaces. This provides two relevant benefits:

  • Changing textures and shading requires no more work than adjusting values on the respective nodes.
  • Textures can be applied to any model without fitting issues, i.e. repetition and resolution.

Shaders

A shader is a program, typically run on the GPU, that computes the colour of an individual pixel or group of pixels. These shaders describe the lighting interactions of objects or surfaces, such as reflection, refraction, and absorption.

Principled BSDF

A BSDF (Bidirectional Scattering Distribution Function) describes how light scatters on a surface. In computer graphics, computing a highly detailed microsurface is not feasible; instead it is replaced with a simplified macrosurface (See Figure ref:micro-vs-macro). As the surface no longer retains the detail it would in reality, light behaves differently on this new macrosurface. To compensate for this, a BSDF is used that matches the aggregate directional scattering of the microsurface (at a distance). cite:ggx-paper

file:Images/macro vs micro.png

Blender's implementation breaks this down into two separate functions by assuming that the microsurface can be adequately described using a microfacet distribution function and a shadowing-masking function.
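
For reference, the microfacet BRDF from the GGX paper cite:ggx-paper takes the standard form below (the general equation, not necessarily Blender's exact implementation):

\[f_r(\mathbf{i}, \mathbf{o}) = \frac{F(\mathbf{i}, \mathbf{h})\, G(\mathbf{i}, \mathbf{o}, \mathbf{h})\, D(\mathbf{h})}{4\, |\mathbf{i} \cdot \mathbf{n}|\, |\mathbf{o} \cdot \mathbf{n}|}\]

where \(D\) is the microfacet distribution function, \(G\) the shadowing-masking function, \(F\) the Fresnel term, \(\mathbf{h}\) the half vector between the incoming direction \(\mathbf{i}\) and outgoing direction \(\mathbf{o}\), and \(\mathbf{n}\) the macrosurface normal.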

Blender provides the following options for the distribution and shadowing functions, and for the subsurface scattering method.

  • Distribution[fn:3]
    • GGX
      GGX is a BRDF (bidirectional reflectance distribution function) which aims to be faster than its alternative, Multiple-scattering GGX, at the cost of physical accuracy. The MDF describes the distribution of microsurface normals m (Figure ref:micro-vs-macro) while the shadowing-masking function describes what fraction of the microsurface normals m are visible. cite:ggx-paper

      In GGX the shadowing-masking function does not account for reflections or scattering. This can create excessive darkening and a loss of energy conservation in some areas cite:principled-bsdf-docs.

    • Multiple-scattering GGX
      Almost all popular parametric BSDFs consider only a single reflection to account for self-shadowing and omit outgoing light that scatters multiple times between microfacets. Omitting this light breaks conservation of energy and leads to dark patches on rough surfaces cite:ms-ggx-paper.

      Images/multiplescatteringsmith_teaser.png

      Blender's Multiple-scattering GGX BRDF allows for multiple light bounces between microfacets to achieve 100% energy conservation and provide a more physically accurate render cite:principled-bsdf-docs,ms-ggx-paper. It accomplishes this by conducting a random walk on the microsurface until the ray escapes. Unlike GGX there is no known analytical expression for this model (Blender's specific implementation); it must instead be solved stochastically cite:blender-issue-tracker. This comes at a performance cost: the original paper cites a 19% penalty using a Monte Carlo physically based renderer, while Blender's development forums estimated the penalty at approximately 3% at the time of implementation cite:blender-issue-tracker.

  • Subsurface Scattering Method
    Subsurface scattering occurs when light penetrates an object that is normally opaque, interacts within the material, and exits at a different point. It is described by a BSSRDF (bidirectional subsurface scattering reflectance distribution function).
    • Christensen-Burley
      The Christensen-Burley method is an approximation of a physically based volumetric scattering system, offering faster evaluation and greater efficiency cite:Christensen-Burley.
    • Random walk
      As opposed to the approximations used in the Christensen-Burley model, the Random Walk method uses true volumetric scattering inside the mesh. Because of this, the model does not perform well when the mesh is not closed. This accuracy comes at the cost of render time (the actual performance hit depends largely on the model itself) and increased noise.

      Accuracy of the subsurface scattering was not an area of importance within this report, thus the Christensen-Burley model was chosen for its better performance.

All renders within this report have Multiple-scattering GGX enabled as the benefit outweighed the cost.

The Principled BSDF shader is a combination of multiple layers into a single node. This is done for ease of use.

This shader encapsulates bidirectional reflectance and transmittance distribution functions. Individually these functions determine how light behaves on the surface and inside a material.

Pieces

Pieces were modelled after the reference image below (Figure ref:piece-reference). From this image the pieces were traced using the Add Vertex tool, from the Add Mesh Extra Objects add-on. To transform this line of vertices into a solid object a Screw modifier was applied.

ref/bee5aa3d08a30da4ca1005cbd0fe10b54a03bb49.jpg

file:Images/modelling piece inprogress.png

file:Images/screw settings.png

file:Images/pawn model.png

The notable changes from the default settings are the lowering of the steps from \(16 → 10\) and disabling Smooth Shading. This was a stylistic choice, as the low-polygon look would better demonstrate reflections and the planned indirect lighting (See Lighting - Disco Ball).

To model the knight, 3 separate reference images were used. The base was constructed in a similar manner to the other pieces; the head was modelled manually.

file:ref/knight front.jpg

file:ref/knight right.jpg

file:ref/knight back.jpg

Additionally, ico-spheres were added to some pieces for extra detail.

The final piece models appear as below.

Images/Pieces.png \newpage

Board

Chess board

The chess board model is a simple rectangular prism with dimensions 8m x 8m x 0.4m. The checkerboard texture comes from the Checker Texture node, with scale=8.0 and black and white colours. This texture output is fed into the base colour input of a Principled BSDF shader node.

file:Images/checker texture.png

Within world space the board was positioned in the positive x, positive y quadrant such that the bottom-left corner of the board was at (0, 0), with each square's dimensions being 1m x 1m. This positioning becomes important in Python implementation - Array index to world space. \newpage

Marble exterior

Images/normal.png

The marble exterior was added to showcase reflections, shadows, bloom, and specular highlights through the use of procedural texture and normal mapping.

To accomplish this texture, layered Perlin noise was used. The first noise layer derives its coordinates from the Texture Coordinate node, which is set to use the mesh [fn:6]. The Mapping node is extraneous in this example, used simply to move the Perlin noise map around. The second layer of Perlin noise uses the already noisy surface to create the dark patches marble typically features. The colour ramp is used to brighten and shade the darker patches of noise. This colour is then used as the base colour for the BSDF shader. To give this texture some depth, the colour output of the ramp is also used to create a normal map. Colour data (the yellow socket), when used as a vector input, can yield unexpected results; to remedy this the colour data is first passed through a Bump node as the height data and then fed into the BSDF shader.
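
The material was assembled by hand in the node editor, but an equivalent (simplified, single noise layer) graph could be built through the scripting API; node and socket names below follow Blender 2.9x conventions and the snippet is illustrative rather than the project's actual code.

import bpy

mat = bpy.data.materials.new("Marble sketch")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

bsdf = next(n for n in nodes if n.type == 'BSDF_PRINCIPLED')   # default BSDF node

coords = nodes.new('ShaderNodeTexCoord')    # mesh-based texture coordinates
noise = nodes.new('ShaderNodeTexNoise')     # Perlin-style noise
ramp = nodes.new('ShaderNodeValToRGB')      # colour ramp to shade the dark patches
bump = nodes.new('ShaderNodeBump')          # converts height data into a normal

links.new(coords.outputs['Object'], noise.inputs['Vector'])
links.new(noise.outputs['Fac'], ramp.inputs['Fac'])
links.new(ramp.outputs['Color'], bsdf.inputs['Base Color'])    # ramp drives base colour
links.new(ramp.outputs['Color'], bump.inputs['Height'])        # and doubles as height data
links.new(bump.outputs['Normal'], bsdf.inputs['Normal'])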

Images/marbletextire.png

file:Images/Marble cycles.png

file:Images/Marble eevee.png \newpage

Particle effects

Explosions

Initially an explosion was planned for each piece capture as a way to add some flair. However, this was quickly scrapped as the additional baking and rendering time was deemed too costly. A demo render featuring a smoke cloud was created and can be viewed at master/Videos/smoke.mp4.

Confetti

As an alternative to explosions, a confetti shower over the winning king (only checkmates receive confetti) was added instead.

The confetti is still an explosion, but with the outgoing particles set to a collection of confetti pieces. The source of the explosion is an upwards-facing dome, which gives the confetti an even spread. The particles are set to have randomised size, rotation, angular velocity, and normal velocity. To achieve the slow-fall effect the effective gravity of the particle simulation was lowered to 38% of its normal strength. Each confetti piece has a material with Roughness 1.0 and a varying base colour.
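
A rough sketch of how these particle settings could be applied through the scripting API (attribute names follow Blender 2.9x's ParticleSettings; the "Confetti pieces" collection name is a placeholder, and the project configured these values in the UI):

import bpy

settings = bpy.data.particles["Confetti"]

settings.render_type = 'COLLECTION'                 # emit instances from a collection
settings.instance_collection = bpy.data.collections["Confetti pieces"]

settings.size_random = 1.0                          # randomised size
settings.use_rotations = True                       # randomised rotation
settings.rotation_factor_random = 1.0
settings.angular_velocity_factor = 1.0              # randomised angular velocity
settings.normal_factor = 5.0                        # outward (normal) velocity

settings.effector_weights.gravity = 0.38            # 38% of normal gravity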

Images/confetties.png

file:Images/confetti dome.png

A sample render can be seen below.

file:Images/Confetti! cycles.png \newpage

Lighting

The power of Blender lights is measured in Watts; however, this is not the same wattage consumer light bulbs are rated in. Blender's light power is rated in radiant flux, the measure of radiant energy emitted per unit time, as opposed to consumer light bulbs, which are rated in lumens, or luminous flux cite:radiant-flux,luminous-flux. Luminous flux differs from radiant flux in that luminous flux is adjusted for the varying sensitivity of the human eye. cite:luminous-flux

| Real world light         | Power  | Suggested Light Type   |
|--------------------------+--------+------------------------|
| Candle                   | 0.05 W | Point                  |
| 800 lm LED bulb          | 2.1 W  | Point                  |
| 1000 lm light bulb       | 2.9 W  | Point                  |
| 1500 lm PAR38 floodlight | 4 W    | Area, Disk             |
| 2500 lm fluorescent tube | 4.5 W  | Area, Rectangle        |
| 5000 lm car headlight    | 22 W   | Spot, size 125 degrees |

Direct

To directly light the scene, four spot lights were placed at the corners of the board. Spot lights were chosen instead of typical point lights or sun lights to give consistent directional lighting during the camera spins. The four lights are set to track the centre of the chess board.

Images/lighting.png

To account for the unrealistic scale, the power of the lights was set higher than normal. However, this was still not enough. To avoid cranking the lights up to approximately 1 MW, the exposure within the film settings was adjusted to compensate, as per the Blender documentation cite:light-power-docs.

The blend for these lights was set to 0 as the cones fully encompass the board.
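
A sketch of how this spot-light rig could be built through the scripting API; the target empty, positions, power values, and exposure setting are assumptions for illustration (the project configured the lights in the UI):

import bpy
from math import radians

scene = bpy.context.scene
target = bpy.data.objects["Board centre"]       # hypothetical empty at the board centre

for i, (x, y) in enumerate([(-2, -2), (-2, 10), (10, -2), (10, 10)]):
    light_data = bpy.data.lights.new(f"Spot {i}", type='SPOT')
    light_data.energy = 5000              # boosted to suit the unrealistic scale
    light_data.spot_size = radians(60)    # cone wide enough to cover the board
    light_data.spot_blend = 0.0           # no blend, the cone encompasses the board

    light = bpy.data.objects.new(f"Spot {i}", light_data)
    light.location = (x, y, 8.0)
    scene.collection.objects.link(light)

    track = light.constraints.new('TRACK_TO')   # keep the light aimed at the centre
    track.target = target
    track.track_axis = 'TRACK_NEGATIVE_Z'
    track.up_axis = 'UP_Y'

# Compensate for the high light power via exposure rather than ~1 MW lights
scene.view_settings.exposure = -2.0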

Indirect

There is minimal indirect lighting within the final scene (all models reflect some amount of light due to their texturing, however this is not a significant amount). Considering this, a second scene was created which features considerable indirect lighting. To accomplish this, a disco ball was implemented.

Disco ball

What's better than chess? That's right, disco chess! There is no better way to demonstrate indirect lighting than a giant ball of mirrors in the middle of the board.

The disco ball model is a simple ico-sphere with a mirror-like texture. Its rotation is achieved through simple key-frames.

To create a mirror in Blender, the Roughness parameter on the Principled BSDF shader node is set to 0. This alone did not make a very convincing disco ball, so in addition to the Roughness the following values were tweaked:

  • Metallic 1.0
    This change made the ball most disco-like, as it gives a fully specular reflection tinted with the base colour, without diffuse reflection or transmission.
  • Specular 1.0
    While the Metallic value is responsible for the majority of the specular reflection, this change gave the ball a halo-like glow.
  • Base colour
    With the two previous changes the disco ball appeared too uniform and reflective; some faces appeared completely blown out with no variance between them. To remedy this, a Voronoi noise texture's colour output was piped into a colour ramp (this is done to avoid the very pink look the Voronoi noise produces).

Images/render_shader-nodes_textures_voronoi_smoothness-color-zero.png

A Voronoi noise texture is a procedural Worley noise function evaluated at the texture coordinate. The patterns are generated by randomly distributing points; from each point a region is extended, with the bounds of the region determined by some distance metric. The standard Euclidean distance is used here.

Images/Purple.png

Images/discoclose.png

Images/discoball_texture.png \newpage

With the disco ball texture and model complete, two lights, one red and one blue, were trained on the disco ball. The results can be seen in the Render engine section.

Render engine

Blender offers two modern rendering engines. When working on a project it is important to keep in mind which engine will be used for the final render, in order to account for the limitations of both. For this project Eevee was the chosen engine due to its significantly lower render times compared to Cycles; however, renders using both engines were still made in order to compare them.

This project also made use of a third engine, Luxcore. ~Luxcore~ is a free and open source rendering engine designed specifically to model physically accurate light transport. \newpage

Eevee

Blender's Eevee is designed to be used within the viewport for real-time render previews, and Eevee must cut corners to achieve this speed. Although these approximations are physically based, the behaviour of light suffers. By default mirrors do not function (without being explicitly enabled) and caustics are not present at all.

Eevee's pipeline, while lacking significant documentation, is that of a typical rasterisation engine.

Images/RenderingPipeline.png

  1. Vertices are retrieved from the buffer object [fn:5]. This includes vertex colours, UV coordinates, and the vertex locations of polygons.
     a. The retrieved vertices are transformed into post-projection space by the vertex shader.
     b. (Optional) Tessellation is where patches of vertex data are subdivided into smaller interpolated points. This is useful to dynamically add or subtract detail from a polygon mesh.
     c. (Optional) The geometry shader is used to conduct layered rendering, which is useful for cube-based shadow mapping or rendering a cube environment map without having to render the entire scene multiple times.
  2. Vertex post-processing - output from the previous stages is collected and prepared for the following stages.
     a. In Transformation Feedback the output of the vertex processing stages is recorded and placed into buffer objects, preserving the post-transformation rendering state such that it can be resubmitted to various processes.
     b. Primitive Assembly prepares the primitives for the rasterizer by dividing them into a sequence of triangles stored in an efficient manner.
     c. Geometry outside of the view is culled and vertices are transformed from NDC to screen space.
  3. Rasterization projects the geometry onto a raster of pixels and outputs a collection of fragments, each representing a sample of a rasterized triangle. Multi-sampling occurs when multiple fragments come from a single pixel.
  4. The fragment shader takes these fragments and computes the z-depth for each pixel along with the colour values. Colour is computed using the surface's BSDF.
  5. Per-sample operations are used to cull fragments that are not visible and to determine transparency.

Outside of this pipeline (running concurrently) are compute shaders, often used to compute arbitrary information, i.e. tasks not directly related to drawing triangles or pixels. Particle and fluid simulations, and terrain height-map generation, are common applications of compute shaders.

Cycles

Images/render_cycles_render-settings_light-paths_rays.png

Cycles is a physically based backwards path tracing render engine. While this engine is physically based, it is not physically correct, nor does it aim to be cite:design-goals.

\newpage

Images/shad2-globalillum1a.png

In backwards path tracers the paths are generated starting from the camera. They are bounced around the scene until they encounter the light source they "originated" from. This is considered an efficient method for direct lighting, as a ray will always yield some result, as opposed to forward path tracing, where many light rays may never reach the camera.

Cycles offers two integrators, Path Tracing and Branched Path Tracing. In the pure path tracer, each light ray bounces in one direction at each bounce and picks one light to receive lighting from, while Branched Path Tracing splits the path at the first bounce into multiple rays and samples from multiple lights. Naturally this makes sampling considerably slower; however, it offers lower noise in scenes lit primarily by direct or single-bounce lighting. It is possible to split the ray on subsequent bounces as well, however the complexity increases exponentially for diminishing returns.

To combat noise the OptiX denoiser was employed as it operates best on NVIDIA hardware.

Being a jack of all trades, Cycles suffers in some areas when compared to specialised engines such as Luxcore. Specifically, Cycles struggles significantly with advanced light effects such as caustics, as explicitly noted in its documentation cite:blender-sampling.

Images/caustics2.png

Caustics are the optical phenomenon of light patterns forming on surfaces through reflection and refraction. These are notably difficult to calculate efficiently [fn:7] using a unidirectional path tracer, as many rays will not collide with the object that should be focusing the light. One solution is a technique called photon mapping, in which an initial pass of the scene is done using a forward path tracer to follow the paths of light rays as they interact with glossy and refractive objects.

Cycles is significantly slower than Eevee (1 minute vs 16 seconds for some single-frame renders). With adaptive sampling the number of samples taken in less noisy areas is automatically reduced, resulting in a performance improvement.
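
For reference, the Cycles options discussed here map onto scene properties roughly as in the sketch below; the property names follow Blender 2.9x's Python API and are an assumption rather than the project's exact configuration:

import bpy

cycles = bpy.context.scene.cycles

cycles.device = 'GPU'
cycles.progressive = 'PATH'          # or 'BRANCHED_PATH' for the branched integrator
cycles.use_adaptive_sampling = True  # spend fewer samples in less noisy areas
cycles.use_denoising = True
cycles.denoiser = 'OPTIX'            # OptiX denoiser, suited to NVIDIA hardware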

Thank you to Jack

Due to significant hardware limitations for ray tracing (GTX 760, i5-4670), a favour was called in with a good friend; Jack kindly lent his RTX 2070 for a Cycles render. See master/Videos/Marble_cycles.mp4.

Luxcore

Another algorithm that can calculate caustics is bidirectional path tracing. For each sample two paths are traced independently, using forward and backwards path tracing. From this, every vertex of one path can be connected directly to every vertex of the other. Weighting all of these sampling strategies using Multiple Importance Sampling creates a new sampler that can converge faster than unidirectional path tracing despite the additional work per sample. This works particularly well for caustics or scenes lit primarily through indirect lighting, as instead of connecting rays to the camera or light source, rays are connected to each other. This allows rays that are very close to form a path cite:Caustic-Connection.

file:Images/bidirectional diagram.png
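
The Multiple Importance Sampling weighting mentioned above is commonly the balance heuristic, shown here in its standard textbook form rather than Luxcore's specific implementation: for a sample \(x\) drawn from strategy \(s\), with \(n_i\) samples taken from each strategy \(i\) with density \(p_i\),

\[w_s(x) = \frac{n_s\, p_s(x)}{\sum_i n_i\, p_i(x)}\]

so strategies that sample a given path with higher probability receive proportionally more weight, which is what lets the combined sampler handle both direct and caustic paths gracefully.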

Luxcore documentation recommends enabling the Metropolis sampler when using bidirectional path tracing. This allows Luxcore to spend more time sampling bright areas of the image and thus render caustics with greater accuracy. However, it uses significantly more RAM, so it was not enabled during the Luxcore render due to memory restrictions.

Luxcore offers caching of caustics through the PhotonGI caustic cache option; however, this is only applicable when the scene is static (with the exception of the camera) and traditional path tracing is used.

Unlike Eevee and Cycles, Luxcore does not finish rendering a scene on its own; instead the user must set halt conditions manually. For all Luxcore renders within this report the halt time was set to 10 minutes. This gave the engine ample time to converge.

Tragedy - 22:20, 01/June/2021

At 10:20 pm on the first of June, the PC that had been enslaved to a Luxcore render for more than 96 hours straight died. The official cause of death is unknown, but it is suspected to have something to do with power delivery.

A successful data recovery was conducted the next morning. Only the last 12 frames were missing; they were rendered on another device. See master/Videos/disco_luxcore.mp4

Python implementation

Processing games

Reading and stepping through games is handled almost entirely by the chess library; no special considerations need to be made here. The minimum working example below demonstrates all that is necessary to step through an entire game.

import chess
import chess.pgn  # read_game lives in the chess.pgn submodule

with open(filename) as pgn:
    game = chess.pgn.read_game(pgn) # Parses the PGN file
    board = game.board()

    for move in game.mainline_moves():
        board.push(move) # Pushes the move to the move stack, this "makes" the move

Pairing problem

During a game of chess there is nothing in between moves, simply one discrete board state after another. This is also how the chess library makes moves, by computing differences and tracking board states; while this is reliable and simple, it does not play nicely when games become continuous (animated).

Initially this script also tracked the board state using a dictionary, with the square as the key and the corresponding Blender object as the value, pushing and popping at each move. However, this presented difficulties when implementing animations and special moves. The code was generally cluttered and not up to an acceptable quality.

The solution

To remedy the mentioned problems a custom class was devised, and aptly named CustomPiece. This class acts as a generalised representation of a piece which is able to act upon itself and the Blender model it puppets. Stored within an unrolled 2D array, with the index representing its position on the chess board (See Python implementation - Array index to world space), the object is able to move itself within the array while handling move and capture animations. Special move handling is generalised into the main loop (See Python implementation - Special moves).

This design approach has clear advantages such as

  • Adheres to the Model-View-Controller design philosophy.
  • Array and object manipulation is not handled at any higher level than required.
  • Translation between the chess library interface and Blenders API is seamless.
  • Creates a unique object that pairs a Blender model to a python-chess PieceType.

However, the self-referential nature of objects manipulating the array they are stored in adds significantly to the complexity. Luckily the implementation is simple.

An initial sketch of this class can be seen at ref:class-sketch.

The implementation can be seen at ref:class-src.

Array index to world space

python-chess provides great functionality to retrieve which square a move is coming from and going to. Internally this is stored as an int representing each square in 1D array notation.

Square = int
SQUARES = [
    A1, B1, C1, D1, E1, F1, G1, H1,
    A2, B2, C2, D2, E2, F2, G2, H2,
    A3, B3, C3, D3, E3, F3, G3, H3,
    A4, B4, C4, D4, E4, F4, G4, H4,
    A5, B5, C5, D5, E5, F5, G5, H5,
    A6, B6, C6, D6, E6, F6, G6, H6,
    A7, B7, C7, D7, E7, F7, G7, H7,
    A8, B8, C8, D8, E8, F8, G8, H8,
] = range(64)

\newpage

Images/array.png

To convert from array indexing to world space, two simple expressions were used. \[x = (\text{INDEX mod } 8) + 0.5\] \[y = (\text{INDEX div } 8) + 0.5\][fn:4] Note the addition of \(0.5\) is to centre the pieces on the board squares in world space and will be excluded from further examples.
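
For concreteness, a minimal sketch of the square_to_world_space helper referenced in the later listings, written directly from the two expressions above:

from typing import Tuple

def square_to_world_space(index: int) -> Tuple[float, float]:
    """Convert a python-chess square index (0-63) to world-space x, y in metres."""
    x = (index % 8) + 0.5   # file -> x, centred on the square
    y = (index // 8) + 0.5  # rank -> y, centred on the square
    return x, y

# A1 (index 0) maps to (0.5, 0.5); H8 (index 63) maps to (7.5, 7.5)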

Abuse of this functionality

Images/tikzit_image0.png

While modulo 8 will always produce an integer between \(0\) and \(7\), integer division can result in negative numbers and is not bounded. Using this, the mapping can be extended past the board it was designed for.

This provides an easy method to place captured pieces after their animation. By storing each piece's initial position, and adding or subtracting \(16\) depending on the colour, pieces can be placed \(2\) rows behind their initial position.

Placing pieces two rows behind was preferable to using the corresponding position on the other side of the board, as it avoids the inversion that would otherwise be required to keep the pawns in front of the back-rank pieces.

\newpage

Special moves

Figure ref:flowchart shows the main loop logic, used to move the correct pieces.

flowchart.pdf

Castling

Within standard chess there are only four castling possibilities; these are easy enough to check naively. This is the only section that limits this script to standard chess. To extend support to Chess960, a bit-board mask of all the rooks with castling rights could be filtered to obtain the index of the rook that will be castled. See the documentation.

if board.is_castling(move):
    if board.turn: # White
        if board.is_kingside_castling(move):
            array[chess.H1].move(chess.F1)
        else: # queen side
            array[chess.A1].move(chess.D1)
    else: # Black
        if board.is_kingside_castling(move):
            array[chess.H8].move(chess.F8)
        else: # queen side
            array[chess.A8].move(chess.D8)

En passant

The python-chess library makes handling en passant a breeze. The move is first checked to see if it is an en passant; then, as only one square is possible for an en passant capture on any given move, that position is retrieved.

else: # standard case
    if board.is_capture(move):# is en passant, great...
        if board.is_en_passant(move):
            array[board.ep_square].die() # NOTE, object is gc'ed
        else: # its a normal capture
            array[locTo].die() # NOTE, object is gc'ed

Promotion

Contained within a separate conditional is the promotion logic. This is handled separately from the rest of the logic as a move can be both a capture and a promotion.

array[locFrom].move(locTo) # NOTE, piece moves always

if move.promotion is not None:
    array[locTo].keyframe_insert(data_path="location", index=-1)
    array[locTo].hide_now() # hide_now unlinks within blender
    pieceType = move.promotion # piece type promoting to
    array[locTo] = CustomPiece(chess.Piece(pieceType, board.turn),\
                               SOURCE_PIECES[chess.piece_symbol(pieceType)],\
                               array, locTo) # shiny new object
    array[locTo].show_now()

A new key-frame is inserted initially because the piece that is promoting has already been moved, and the movement animation needs to finish before the piece can be hidden.

Within the Blender viewport the pieces that will be promoted to already exist at the correct position. This can cause overlapping, co-planar geometry in the viewport; however, these pieces are not rendered until needed.

Animation

Key frames

To animate an object within Blender, two key-frames must be inserted with different values for some property at different times; Blender will interpolate between them (See Python implementation - Interpolation for interpolation methods).

Key-frames for all pieces are inserted every move. This is done to ensure stationary pieces stay stationary. For every move the piece has \(10\) frames to complete its moving animation. Between each move there is a \(3\) frame buffer to provide some separation between moves.

In addition to piece animations, the camera also rotates at a rate of \(2^{\circ}\) per \(13\) frames.

FRAME_COUNT = 0
keyframes(array) # initial pos
FRAME_COUNT += 10
for move in game.mainline_moves():
    scene.frame_set(FRAME_COUNT)

    make_move(board, move, array)
    keyframes(array) # update blender

    camera_parent.rotation_euler[2] += radians(2) #XYZ
    camera_parent.keyframe_insert(data_path="rotation_euler", index=-1)

    board.push(move) # update python-chess

    FRAME_COUNT += 10
    keyframes(array) # update blender
    FRAME_COUNT += 3

While the camera's rotation is tied to the length of the game, additional key-frames are added so it can continue spinning while the remaining animations (confetti and captures) finish. Confetti is conditionally added over the winning king; no confetti for a draw.

confetti = bpy.data.collections["Board"].objects['Confetti source']
if board.outcome() is not None:
    winner = board.outcome().winner
    king_square = board.king(winner)
    xTo, yTo = square_to_world_space(king_square)
    confetti.location = Vector((xTo, yTo, 3))
    bpy.data.particles["Confetti"].frame_start = FRAME_COUNT
    bpy.data.particles["Confetti"].frame_end = FRAME_COUNT + 12

print(FRAME_COUNT)
for _ in range(5):
    scene.frame_set(FRAME_COUNT)
    camera_parent.rotation_euler[2] += radians(2) #XYZ
    camera_parent.keyframe_insert(data_path="rotation_euler", index=-1)

    FRAME_COUNT += 13

In order to move the camera with a fixed rotation and radius around the centre of the board, the camera was made a child of an Empty Plain Axes object. Rotations and translations applied to the camera parent are also applied to the camera. This allows for easy fixed-distance rotations.

file:Images/camera parent.png
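
A sketch of this parenting setup through the scripting API (object names and the board-centre location are assumptions; the project set this up in the UI):

import bpy

scene = bpy.context.scene
camera = bpy.data.objects["Camera"]          # assumed object name

# Create an empty (plain axes) at the centre of the board
parent = bpy.data.objects.new("Camera parent", None)
parent.empty_display_type = 'PLAIN_AXES'
parent.location = (4.0, 4.0, 0.0)
scene.collection.objects.link(parent)

# Parent the camera, preserving its current world transform;
# rotating the empty now swings the camera around the board centre
camera.parent = parent
camera.matrix_parent_inverse = parent.matrix_world.inverted()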

Interpolation

Blender offers 3 curves for interpolation between key-frames.

  • Constant
    The object's value only changes on the last possible frame, holding constant between key-frames.
  • Linear
    The object's value changes linearly between the key-frames to form a piecewise continuous curve.
  • Bézier
    The object's value is interpolated using a Bézier curve. Bézier curves are parametric curves used in computer graphics to create smooth surfaces, or in this case, a smooth function between two points.

    Blender implements a forward differencing method for cubic Bézier curves, as is evident from the source code cite:blender-source.

By default Blender uses Bézier curve interpolation for all motions. This is the preferred option for piece movement. However, linear interpolation was used for the camera motion; although a cubic Bézier curve would produce the same outcome, linear made debugging slightly easier.
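
For reference, the interpolation mode can also be switched from the scripting side; the hypothetical snippet below sets every rotation key-frame on the camera parent to linear interpolation (assuming key-frames have already been inserted and that the object is named "Camera parent"):

import bpy

camera_parent = bpy.data.objects["Camera parent"]   # assumed object name

for fcurve in camera_parent.animation_data.action.fcurves:
    if fcurve.data_path == "rotation_euler":
        for keyframe in fcurve.keyframe_points:
            keyframe.interpolation = 'LINEAR'   # default is 'BEZIER'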

Reproducibility

This project was created using Blender 2.92 (and its bundled Python)[fn:2] and the python-chess library[fn:1].

Python environment

Blender is distributed with its own Python installation for consistency; however, this means that Python modules installed on the system are not present cite:blender-python-env. To mitigate this, the --target flag for pip install can be used to install directly into the Blender Python environment cite:pip-install-man.

pip install -t ~/.config/blender/2.92/scripts/modules chess

This ensures Blender's Python has access to the required libraries for this script to function.

Results

Animations referenced in this section are available on the public GitHub repo. They will be referred to by their filenames[fn:8] and hyperlinked. The still renders below may not appear within the animations, as they may not be suitable for comparison there; instead, still renders from the same scenes are included to showcase these effects. The animations themselves and the appearance of these effects will be shown within the presentation.

  • ~Marble_eevee.mp4~
    This is the complete product, rendered using Eevee.
  • ~Marble_cycles.mp4~
    This is the path-traced render of the final scene for comparison purposes. The scene and setup are identical other than the engine.
  • ~Marble_stacked_higher.mp4~
    A side-by-side comparison between Eevee and Cycles.
  • ~disco_luxcore.mp4~
    Disco chess was set to render long before this project was near completion. Although it is missing many key aspects and features, its purpose is to demonstrate what bidirectional path tracing can do for caustics. Luxcore has a few notable incompatibilities with some aspects of the scene; this will be discussed in the presentation.

Images/confetti-eevee.png

Images/confetti-cycles.png

Images/confetti-eevee-1.png

Images/confetti-eevee-2.png

Images/confetti-eevee-3.png

Images/confetti-cycles-1.png

Images/confetti-cycles-2.png

Images/confetti-cycles-3.png

\newpage It is clear from the close-up figures that Cycles produces significantly more believable lighting due to its ability to accurately compute indirect lighting. Eevee tries its best with its approximations of shadows; however, it fails to correctly illuminate the arch formed by two other pieces of confetti.

Between the two intersecting pieces in the right image, Cycles correctly provides the indirect lighting and ambient occlusion that should be present within such a formation. Eevee illuminates the whole purple face as if the pink piece were not present at all; the shadows cast by the pieces are also missing entirely.

Eevee also struggles to deal with the twist in the teal piece (left image), whereas Cycles correctly produces the indirect lighting and shadowing.

Images/reflections-eevee.png

Images/reflections-cycles.png

Images/reflections-eevee-1.png

Images/reflections-cycles-1.png

Images/reflections-eevee-2.png

Images/reflections-cycles-2.png

\newpage In Figure ref:reflections-close, Cycles' shadowing and reflectance create a sharper outline around the pawn. Eevee does not show this, as its reflectance is an approximation using the depth buffer and the previous frame's colour.

On the same tile, Cycles is able to correctly compute a sharp reflection of the piece, as expected from a material with a mirror-like finish. Eevee's approximations once again cause the reflections to be blurred.

Within the same figure, the normal map of the marble texture causes the light to reflect in a wavering, non-uniform manner.

In Figure ref:shading-close, Eevee adds shadows without considering indirect lighting; Cycles produces a more realistic result. The intersection of all the shadows should not be the sum of its elements, as indirect lighting will partially illuminate it.

file:Images/Disco kinda working.png

Images/mpv-shot0001.jpg

Some of Cycles' backwards rays do make it to the light source; unfortunately, the result is not accurate enough to keep the denoiser from being fooled. It tries its best to clean up the noise and creates a faint pink glow on the board and pieces.

Due to Luxcore's bidirectional path tracing it does not suffer from this, and it is able to compute the caustics accurately within a reasonable convergence time. However, the image is still exceptionally noisy and requires a longer rendering time to reach the noise threshold of Cycles.

Animations were rendered to individual frames as PNGs, which were then assembled using ffmpeg.

ffmpeg -framerate 24 -s 1920x1080 -i %04d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p ../Videos/$FILENAME.mp4

Self-evaluation

Avoiding basic errors

I believe I have achieved all basic marking criteria and am deserving of all basic marks. (7 Marks)

Further additional marks

In addition to the basic marks I believe I have earned the following marks.

  1. Report is well-written and concise
    I believe this report is well-written and concise where possible while still covering many important topics. (1 Mark)
  2. Report text and image examples support each other well
    Images and diagrams were used in conjunction with text to better portray my ideas and understanding. Screenshots of texture configurations and the accompanying reasons are an example of this. (1 Mark)
  3. An ample number of illustrative images are used to clearly demonstrate the techniques
    I have consistently used images to illustrate concepts and techniques used along with the results of texture generation, modelling, and rendering. (1 Mark)
  4. It is evident from the report whether or not the techniques implemented were appropriate for achieving the task
    I believe I used appropriate techniques to create a visually appealing model. For example, MIS, screen space reflections, adaptive sampling, and caustics are relevant to the project and work to create a visually appealing animation. (1 Mark)
  5. Techniques chosen were carefully considered and compared with alternatives demonstrating insight into the design decision process
    I chose techniques thought to best demonstrate adequate knowledge while working towards a visually appealing result. Many features and algorithms that are more physically correct were considered but were not deemed worth the cost or feasible. (1 Mark)
  6. Report demonstrates significant work testing and applying techniques
    Throughout this report I have shown significant testing through the application and comparison of techniques including differences between rendering engines and their effect on caustics. (1 Mark)
  7. The report demonstrates that independent research extending beyond the specific course taught techniques have contributed to the final project in a meaningful way
    I have delved into the Blender source code to find the methods and algorithms used where I found the documentation to be lacking. I found, read, and cited the original papers detailing GGX, Multiple-scattering GGX, the Christensen-Burley model, and the Random Walk model. (2 Marks)
  8. Large variety of techniques applied
    I have used;
    • Procedural texture and normal map generation using noise functions
    • Particle systems and explosion
    • Direct and indirect lighting
    • Reflections, bloom, specular reflection and other lighting effects
    • Advanced lighting techniques (caustics)
    • A variety of rendering algorithms including rasterization, path tracing, and bidirectional path tracing

    (2 Marks)

  9. Final product looks visually appealing and complete
    Throughout this project I always aimed to create something visually appealing. I believe I have excelled at this and made something I would display publicly with pride. (2 Marks)

\newpage

Appendix

Scratchpad.pdf

from __future__ import annotations  # allows CustomPiece to be referenced in its own type hints

from math import radians
from typing import List, Optional

import bpy
import chess
from mathutils import Vector


class CustomPiece():
    def __init__(self, pieceType: chess.Piece, blender_obj: bpy.types.Object,\
                 array: List[Optional[CustomPiece]], loc: int):
        self._pieceType = pieceType.piece_type # int
        self._colour = pieceType.color         # bool
        self._blender_obj = blender_obj.copy()
        self._array = array                    # reference to array containing self
        self._inital_loc = loc
        self._loc = loc                        # int (1d array index)

        x, y = square_to_world_space(self._loc)
        self._blender_obj.location = Vector((x, y, 0.3))

        # set material based on colour
        if self._colour:
            self._mat = bpy.data.materials["White pieces"]
        else:
            self._mat = bpy.data.materials["Black pieces"]
        self._blender_obj.active_material = self._mat


        if self._colour and self._pieceType == chess.KNIGHT:
            self._blender_obj.rotation_euler[2] = radians(180) #XYZ
        # add object to a collection so it's visible
        bpy.data.collections[['Black', 'White'][self._colour]].objects.link(self._blender_obj)

    def move(self, new_loc: int, zTo: float = 0.3):
        xTo, yTo = square_to_world_space(new_loc)
        self._blender_obj.location = Vector((xTo, yTo, zTo))
        print("Moved to ", self._blender_obj.location)

        self._array[new_loc] = self
        self._array[self._loc] = None

        self._loc = new_loc

    def die(self) -> CustomPiece:
        self._array[self._loc] = None
        self.keyframe_insert(data_path="location", frame=FRAME_COUNT-6)

        xTo, yTo = square_to_world_space(self._loc)
        self._blender_obj.location = Vector((xTo, yTo, 2.1))
        self.keyframe_insert(data_path="location", frame=FRAME_COUNT+3)

        if self._colour:
            self._inital_loc += -16
        else:
            self._inital_loc += 16

        xTo, yTo = square_to_world_space(self._inital_loc)
        self._blender_obj.location = Vector((xTo, yTo, 2.1))
        self.keyframe_insert(data_path="location", frame=FRAME_COUNT+21)

        xTo, yTo = square_to_world_space(self._inital_loc)
        self._blender_obj.location = Vector((xTo, yTo, 0.1))
        self.keyframe_insert(data_path="location", frame=FRAME_COUNT+29)

        return self
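
The main loop also calls keyframe_insert, hide_now, and show_now on CustomPiece instances; those helpers are omitted from the snippet above. A hypothetical sketch of how they could delegate to the puppeted Blender object (the unlink/link behaviour is an assumption based on the comment in the promotion code):

    def keyframe_insert(self, **kwargs):
        # Forward key-frame insertion to the underlying Blender object
        self._blender_obj.keyframe_insert(**kwargs)

    def hide_now(self):
        # Unlink the model from every collection so it disappears from the scene
        for collection in list(self._blender_obj.users_collection):
            collection.objects.unlink(self._blender_obj)

    def show_now(self):
        # Re-link the model to its colour collection if it is not already there
        collection = bpy.data.collections[['Black', 'White'][self._colour]]
        if self._blender_obj.name not in collection.objects:
            collection.objects.link(self._blender_obj)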

\newpage \printbibliography

Footnotes

[fn:8] higher/lower within file names refers to the bit-rate of the assembled images.

[fn:7] There is nothing inherently difficult about caustics, any path tracer can render them. Instead it is a matter of convergence speed.

[fn:6] A very interesting effect can be achieved by using the Camera as the source of the coordinates. The marble texture flows like sea foam as the camera spins.

[fn:5] A buffer object is an array of unformatted memory allocated by the GPU.

[fn:4] Note that div here is integer division.

[fn:3] Note that the Distribution option Blender gives is different from the Microfacet Distribution Function, and includes both the MDF and the shadowing-masking function.

[fn:2] Blender comes bundled with this version. If the system Python is used instead, ensure it matches the version Blender was built with and is above 3.7 for the __future__ module. Past 3.10 the __future__ import is no longer required.

[fn:1] This project requires the Outcome class released in 1.5.0