Apply rewrite for normal attention and MQA
Fixes a bug introduced in mlc-ai#1052, where passing the `--use-flash-attn-mqa` flag for a model that does not use MQA prevented any CUTLASS attention rewrite from being applied.
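The fixed dispatch can be sketched as follows. This is a hypothetical illustration of the control flow, not the actual MLC-LLM code: the function and return values are invented for clarity, and only the `--use-flash-attn-mqa` flag name comes from the commit message.

```python
# Hypothetical sketch of the rewrite selection; names are illustrative.

def choose_attention_rewrite(use_flash_attn_mqa: bool, model_uses_mqa: bool) -> str:
    """Pick which attention rewrite to apply when compiling a model."""
    if use_flash_attn_mqa and model_uses_mqa:
        # Flag set AND the model actually uses multi-query attention:
        # apply the FlashAttention MQA rewrite.
        return "flash-attn-mqa"
    # Fixed behavior: in every other case, fall back to the normal
    # CUTLASS attention rewrite instead of applying no rewrite at all.
    return "cutlass-attention"


def choose_attention_rewrite_buggy(use_flash_attn_mqa: bool, model_uses_mqa: bool) -> str:
    """Pre-fix behavior: the flag alone disabled the CUTLASS path."""
    if use_flash_attn_mqa:
        if model_uses_mqa:
            return "flash-attn-mqa"
        # Bug: non-MQA model with the flag set got no attention rewrite.
        return "none"
    return "cutlass-attention"
```

With the fix, a non-MQA model compiled with the flag still gets the CUTLASS rewrite, whereas the buggy version left it with none.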