
Commit

#0: Fix MLP W3 kernel config
mtairum committed Feb 4, 2025
1 parent 9f14ae6 commit aac80d1
Showing 1 changed file with 3 additions and 1 deletion.
models/demos/llama3/tt/llama_mlp.py (4 changes: 3 additions & 1 deletion)
@@ -106,7 +106,9 @@ def forward(self, x: ttnn.Tensor, mode) -> ttnn.Tensor:
     x,
     self.w3,
     compute_kernel_config=(
-        self.args.compute_kernel_config_lofi if self.four_bit_mlp else self.args.compute_kernel_config_hifi2
+        self.args.compute_kernel_config_lofi
+        if self.four_bit_mlp
+        else self.args.compute_kernel_config_hifi2_fp16
     ),
     core_grid=None,  # FIXME: validate on TG ttnn.CoreGrid(y=8, x=8) if not pc_3 else None,
     dtype=ttnn.bfloat16,
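For reference, below is a minimal sketch of how the kernel configs named in this change could be defined, assuming the standard ttnn.WormholeComputeKernelConfig API. The flag values are illustrative assumptions, not the actual settings in the Llama3 demo's model args; the intended distinction between the two HiFi2 variants here is whether matmul partials accumulate in fp32 or fp16.

import ttnn

# Hypothetical definitions mirroring the attribute names used in the diff.
# HiFi2 fidelity with fp32 accumulation in the destination registers.
compute_kernel_config_hifi2 = ttnn.WormholeComputeKernelConfig(
    math_fidelity=ttnn.MathFidelity.HiFi2,
    math_approx_mode=False,
    fp32_dest_acc_en=True,
    packer_l1_acc=True,
)

# Same fidelity, but accumulation stays in fp16 (the "_fp16" suffix),
# trading a little precision for lower register pressure and faster matmuls.
compute_kernel_config_hifi2_fp16 = ttnn.WormholeComputeKernelConfig(
    math_fidelity=ttnn.MathFidelity.HiFi2,
    math_approx_mode=False,
    fp32_dest_acc_en=False,
    packer_l1_acc=True,
)

# LoFi path selected when the MLP weights are quantized to 4 bits.
compute_kernel_config_lofi = ttnn.WormholeComputeKernelConfig(
    math_fidelity=ttnn.MathFidelity.LoFi,
    math_approx_mode=False,
    fp32_dest_acc_en=False,
    packer_l1_acc=True,
)

In forward, one of these configs is picked per matmul; the w3 projection shown in the hunk would then be invoked roughly as ttnn.linear(x, self.w3, compute_kernel_config=..., core_grid=None, dtype=ttnn.bfloat16).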
