This repository has been archived by the owner on Apr 28, 2023. It is now read-only.

Register promotion: limit the number of elements promoted per thread #587

Open · wants to merge 9 commits into master from sort-register-promotion

Conversation


@ftynse ftynse commented Jul 26, 2018

cuda::MappedScop: introduce maxPrivateElements mapping option

This mapping option controls the maximum number of elements per thread
that are promoted into private memory (hopefully registers, but we
cannot guarantee this at the CUDA level). The value is optional in the
protocol buffers. When it is not provided, query the maximum number of
registers per block from the CUDA device properties and divide it by the
number of threads in the block to obtain the per-thread limit.
Note that using all registers in a single block will likely limit the
occupancy of SMs, potentially degrading performance. This effect is the
primary motivation for introducing the limiting factor, which lets the
caller require the mapper to use fewer registers, potentially
increasing occupancy. Since register allocation is performed by the
downstream compiler, this option is a mere recommendation and is
expressed in terms of (untyped) elements rather than actual registers.
It would be impossible, at the CUDA level, to account for all the
registers required by the main computation (that is, those needed to
store data loaded from memory during operations), which also contribute
to the register pressure of the kernel.

Although limiting the number of promoted elements to the number of
registers available per thread may seem too constraining for occupancy,
it is strictly better than the current approach, where we may promote
even more elements, which then get spilled into the slow local memory.

Closes #556

ftynse added 8 commits July 25, 2018 16:33
The captured variable was not used inside the lambda since it was
introduced in the prehistory.
This function will be reused in an upcoming commit to sort groups before
register promotion.
This function will be reused in an upcoming commit.
Follow the same strategy as with shared memory promotion: first, sort
tensors in decreasing order of the total number of references; then, for
each tensor, sort groups based on the number of references in this
group.  Tensor groups with more references are expected to benefit more
from promotion as more global memory accesses may be avoided thanks to
explicit caching in faster layers of the memory hierarchy.  Note that
since there is no limit on the number of registers to use, all groups
that can be promoted into registers are promoted, and the sorting has no
effect on the outcome.  Such a limit will be introduced next.
Introduce the per-thread limit on the total number of registers to use
during promotion.  This limit does not differentiate between the data
types because we cannot control the register allocation at CUDA level
anyway.  It rather serves as a controllable input to the promotion
heuristic.
The limit applies per thread and accumulates across all subtrees where
promotion is performed.  By default, it is set to SIZE_MAX, which
ensures backwards-compatible behavior for all sensible cases (if
something had required more than SIZE_MAX registers, it would have been
spilled to global memory and still would not have fit).  This limit will
be exposed as a mapping option in an upcoming commit.
This will be used in computation of the default number of elements to
promote to private.
This platform-neutral function to query the number of registers will be
used in an upcoming commit.

ftynse commented Jul 26, 2018

@caffe2bot retest this, please

@ftynse ftynse force-pushed the sort-register-promotion branch from 22b36e1 to 74d3e85 Compare July 26, 2018 11:45