Uplift PyBuda changes (week29) #39

Merged
48 commits merged on Aug 13, 2024
Commits
c281c05
Patch perceiverio, segformer, tri_basic_2 model ci failures
chandrasekaranpradeep Jun 25, 2024
bbe76f9
Fix pybuda pipeline failures (24/06)
ashokkumarkannan1 Jun 25, 2024
f1bf75e
Add tests according to test-plan for sparse matmul
kmilanovicTT Jun 26, 2024
42a7854
Rework epoch break
nobradovictt Jun 11, 2024
336f9e4
FE changes for multichip multi card data parallel
jnie-TT Jun 26, 2024
83a26eb
Fix fuse parse error in DistilBert
ashokkumarkannan1 Jun 26, 2024
e358337
Remove Models in Push Pipeline
dsudhakarTT Jun 27, 2024
36e8342
Fix pybuda pipeline failures (26/06/2024)
kamalrajkannan78 Jun 27, 2024
cde57e7
Removed few models in tests_A
dsudhakarTT Jun 28, 2024
8fee720
Fix dla and efficientnet model ci failures
chandrasekaranpradeep Jul 2, 2024
7444bf1
Fix ddrnet core dump issue
dsudhakarTT Jul 2, 2024
27be780
Add tests for yolox(pytorch) model - GS(e300 & e150)
kamalrajkannan78 Jul 2, 2024
5ceb546
Save failing tests separately
vbrkicTT Jun 24, 2024
94595d7
Move random seeds to test_context
vbrkicTT Jul 2, 2024
1f1a7d8
Random input order
vbrkicTT Jul 2, 2024
7f0697d
[tti] add warning when trying to save TTI image with CPUDevice
svuckovicTT Jul 2, 2024
e3cf21e
Remove few models in tests_B
dsudhakarTT Jul 5, 2024
774e03a
BBE update to bbe_to_pybuda_release_20240612_week24
vmilosevic Jul 5, 2024
3f663cc
Add fix for pybuda nightly failures
meenakshiramanathan1 Jul 8, 2024
9eb4ed5
Remove Partially compiled models
dsudhakarTT Jul 8, 2024
a708cf7
[Blackhole] Add 64 byte host queue alignment
jserbedzijaTT Jul 8, 2024
b2230f8
[CCM] Reconstruct and reorganize internal and customer files and upda…
chandrasekaranpradeep Jul 8, 2024
21ee36a
Remove nlp onnx models in push pipeline
dsudhakarTT Jul 10, 2024
60dfa8a
MNIST overfit, PyTorch vs PyBuda
nvukobratTT Jul 11, 2024
ec05778
Add yaml configurations for yolox-n,t,s,m(e300,e150) demo script -pyt…
kamalrajkannan78 Jul 11, 2024
aac4a6c
MNIST Training: Support for loss on TT device
nvukobratTT Jul 12, 2024
769e376
Use property for datatypes
vbrkicTT Jul 3, 2024
8546e19
Fix randomize size
vbrkicTT Jul 4, 2024
d683689
Constant input for RGG graphs
vbrkicTT Jul 3, 2024
8488a0e
Operator tests utils
vbrkicTT Jul 11, 2024
8d5d9e0
Remove models nlp pytorch
dsudhakarTT Jul 9, 2024
f2d67cc
[Blackhole] Fix issue with DRAM channel size
jserbedzijaTT Jul 15, 2024
30cc5de
Refactor multicard dp API to bypass compile/shutdown on multiple runs
jnie-TT Jul 16, 2024
994e3ea
Upgrade onnx and onnxruntime, make necessary docker/make changes
LPanosTT Jul 17, 2024
5a78e36
Remove nlp tensorflow models
dsudhakarTT Jul 15, 2024
9cf9f5e
Test all element-wise binary operators according to test plan
vobojevicTT Jul 19, 2024
c8c1043
NetlistValidation utils
vbrkicTT Jul 17, 2024
5711bea
Decompose downsample 2d for non-square shape and add channel last sup…
chandrasekaranpradeep Jul 18, 2024
616418f
Quantize-Dequantize Support
LPanosTT Jul 19, 2024
9f7e579
Add BOS models and also fix confidential_customer_model hash
LPanosTT Jul 19, 2024
45a692b
fix galaxy sanity test
gfengTT Jul 19, 2024
1aa1667
Move binary models
vbrkicTT Jul 18, 2024
a3a756d
Documenting RGG test commands
vbrkicTT Jul 17, 2024
7616754
Debugging graph building errors
vbrkicTT Jul 17, 2024
45cc684
[Blackhole] Don't use top 16MB of dram channels when allocating queues
jserbedzijaTT Jul 23, 2024
0a81fa3
[TVM] Decompose repeat_interleave pytorch op and add sanity test
chandrasekaranpradeep Jul 24, 2024
685b51c
Update BBE submodule to week29
vmilosevic Jul 24, 2024
dcc62ed
Update tvm and demos submodules
vmilosevic Jul 24, 2024
16 changes: 16 additions & 0 deletions ci/gitlab-test-lists/.gitlab-ci.wormhole_b0_t3k_silicon_push.yml
@@ -0,0 +1,16 @@
.backend-silicon-wh-b0-t3k-common:
  extends: .backend-silicon-wh-b0-common
  stage: sanity-wh-b0-t3k-silicon
  tags:
    - t3k
    - push

pybuda-silicon-wh-b0-t3k-tti-data-parallel:
  extends: .backend-silicon-wh-b0-t3k-common
  script:
    - !reference [.backend-silicon-wh-b0-t3k-common, script]
    # Run this on x2 for now as a sanity test
    # Move this to t3000 once we have more t3000 machines
    # - source pybuda/test/benchmark/run_benchmark_tti_data_parallel
    - PYBUDA_FORCE_THREADS=1 pytest -svv pybuda/test/tti/test_tti_data_parallel.py::test_tti_mmio_dp_sanity

2 changes: 1 addition & 1 deletion pybuda/csrc/backend_api/device_config.hpp
@@ -287,7 +287,7 @@ struct DeviceConfig
// TODO - get from backend, but backend needs to add it
return is_grayskull() ? 1 : 3;
}
std::uint32_t get_dram_channel_capacity() const { return get<std::uint32_t>("dram-channel_capacity", false); }
std::size_t get_dram_channel_capacity() const { return get<std::size_t>("dram-channel_capacity", false); }
std::size_t get_dram_bandwidth_per_block_theoretical() const
{
return get<std::size_t>("dram-bandwidth_per_block_theoretical", false);
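Widening the return type of get_dram_channel_capacity from std::uint32_t to std::size_t matters once a DRAM channel reaches 4 GiB: a 32-bit value wraps at exactly that boundary. A standalone sketch of the narrowing problem (not part of this PR; the 4 GiB capacity is an illustrative value):

#include <cstdint>
#include <cstdio>

int main() {
    // A hypothetical 4 GiB channel: fits in a 64-bit size_t,
    // but narrowing to uint32_t wraps the value to 0.
    std::size_t capacity = 4ULL * 1024 * 1024 * 1024;
    std::uint32_t narrowed = static_cast<std::uint32_t>(capacity);
    std::printf("size_t: %zu bytes, uint32_t: %u bytes\n", capacity, narrowed);
    return 0;
}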
4 changes: 3 additions & 1 deletion pybuda/csrc/backend_api/module.mk
@@ -12,6 +12,7 @@ BUDABACKEND_LIB = $(BUDABACKEND_LIBDIR)/libtt.so
BUDABACKEND_DEVICE = $(BUDABACKEND_LIBDIR)/libdevice.so
BUDABACKEND_NET2PIPE = third_party/budabackend/build/bin/net2pipe
BUDABACKEND_PIPEGEN = third_party/budabackend/build/bin/pipegen2
BUDABACKEND_BLOBGEN = third_party/budabackend/build/bin/blobgen2

PYBUDA_CSRC_BACKENDAPI_LIB = $(LIBDIR)/libbackend_api.a
PYBUDA_CSRC_BACKENDAPI_SRCS += \
@@ -45,8 +46,9 @@ $(BUDABACKEND_DEVICE): third_party/budabackend ;
$(BUDABACKEND_LIB): third_party/budabackend ;
$(BUDABACKEND_NET2PIPE): third_party/budabackend ;
$(BUDABACKEND_PIPEGEN): third_party/budabackend ;
$(BUDABACKEND_BLOBGEN): third_party/budabackend ;

third_party/budabackend/src/net2pipe: $(BUDABACKEND_NET2PIPE) $(BUDABACKEND_PIPEGEN) ;
third_party/budabackend/src/net2pipe: $(BUDABACKEND_NET2PIPE) $(BUDABACKEND_PIPEGEN) $(BUDABACKEND_BLOBGEN) ;

# Each module has a top level target as the entrypoint which must match the subdir name
pybuda/csrc/backend_api: $(PYBUDA_CSRC_BACKENDAPI_LIB) $(BUDABACKEND_LIB) $(BUDABACKEND_DEVICE) $(PYBUDA_CSRC_SHARED_UTILS_LIB) ;
32 changes: 28 additions & 4 deletions pybuda/csrc/buda_passes.cpp
@@ -17,6 +17,7 @@
#include "passes/decomposing_context.hpp"
#include "passes/erase_consecutive_reshape.hpp"
#include "passes/erase_inverse_ops.hpp"
#include "passes/insert_inverse_outside_quantized_region.hpp"
#include "passes/erase_unnecessary_4d_tm_sequence.hpp"
#include "passes/explicate_unsqueeze.hpp"
#include "passes/fork_join.hpp"
@@ -34,7 +35,12 @@
#include "passes/lower_concat_to_runtime_transform.hpp"
#include "passes/lower_reinterpret_shape.hpp"
#include "passes/lowering_context.hpp"
#include "passes/move_dequantize.hpp"
#include "passes/move_requantize.hpp"
#include "passes/remove_quant_dequant.hpp"
#include "passes/insert_qdq_on_biases.hpp"
#include "passes/dequant_quant_to_requant.hpp"
#include "passes/make_quantized_ops.hpp"
#include "passes/move_select_after_matmul_optional.hpp"
#include "passes/pad_output_buffer.hpp"
#include "passes/passes_utils.hpp"
@@ -92,6 +98,18 @@ run_post_initial_graph_passes(graphlib::Graph *graph, py::object compiler_cfg_ob

passes::print_graph(graph, "INITIAL");
passes::generate_initial_flops_estimate(graph);
// These passes must be run in a loop, as it's possible that after
// pushing a dequant through a conv/matmul/etc. it can be moved down further
bool attempt_update = true;
while (attempt_update) {
attempt_update = passes::move_dequantize(graph);
attempt_update |= passes::make_quantized_ops(graph);
attempt_update |= passes::insert_qdq_on_biases(graph);
attempt_update |= passes::dequant_quant_to_requant(graph);
}

passes::remove_quant_dequant(graph);
reportify::dump_graph(graph->name(), "post_quantize_commute", graph);
passes::decompose_nd_reshape_split(graph);
passes::limit_to_4d_reshape(graph);
passes::erase_unnecessary_4d_tm_sequence(graph);
@@ -161,6 +179,15 @@ void run_optimization_graph_passes(graphlib::Graph *graph, const DeviceConfig &d
passes::bypass_nop_tms(graph);
}
}

// Move TMs outside of quantized graph regions
// attempt_update = true;
// while(attempt_update) {
// passes::insert_inverse_outside_quantized_region(graph);
// attempt_update = passes::erase_inverse_ops(graph);
// }


passes::move_tm_through_requantize(graph);
recalculate_shapes(graph);

@@ -177,7 +204,6 @@ void run_optimization_graph_passes(graphlib::Graph *graph, const DeviceConfig &d
passes::move_select_after_matmul_optional(graph);

passes::fuse_tm_sequences(graph);
reportify::dump_graph(graph->name(), "post_erase_inverse_ops", graph);
}

std::vector<std::pair<graphlib::NodeId, graphlib::NodeId>> run_post_optimize_decompose_graph_passes(
@@ -382,8 +408,7 @@ std::pair<std::unique_ptr<graphlib::Graph>, placer::PlacerConfigUpdate> run_pre_
// data parallel - insert nops and epoch breaks
if (env_as<bool>("PYBUDA_N300_DATA_PARALLEL"))
{
std::vector<std::string> dp_nops_to_epoch_break = insert_dataparallel_nops(lowered_graph.get());
op_names_to_epoch_break.push_back(dp_nops_to_epoch_break);
insert_dataparallel_nops(lowered_graph.get());
}

// At this point, there should be no more graph mutations.
@@ -397,7 +422,6 @@ std::pair<std::unique_ptr<graphlib::Graph>, placer::PlacerConfigUpdate> run_pre_
fracture_chip_id_assignments,
"" /* nops_remote_devices_postfix */,
use_interactive_placer);

return std::make_pair(std::move(lowered_graph), placer_config_update);
}

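The new quantization passes in run_post_initial_graph_passes are driven to a fixed point: each pass reports whether it changed the graph, and the loop repeats while any of them did. A minimal sketch of that driver pattern, with hypothetical names that are not the PyBuda API:

#include <functional>
#include <vector>

struct Graph { /* stand-in for graphlib::Graph */ };

// Run rewrite passes repeatedly until a full sweep makes no change.
void run_to_fixed_point(Graph &graph, const std::vector<std::function<bool(Graph &)>> &passes) {
    bool attempt_update = true;
    while (attempt_update) {
        attempt_update = false;
        for (const auto &pass : passes) {
            // Each pass returns true if it mutated the graph; any change triggers one more sweep.
            attempt_update |= pass(graph);
        }
    }
}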
35 changes: 24 additions & 11 deletions pybuda/csrc/graph_lib/node_types.hpp
@@ -489,16 +489,17 @@ class PyOpNode : public OpNode {
void copy_parent_op_attributes(PyOpNode *node);
};

class BudaOpNode : public OpNode {

private:
tt::DataFormat accumulate_df_ = tt::DataFormat::Float16_b;
tt::DataFormat intermediate_df_ = tt::DataFormat::Float16_b;
tt::MathFidelity math_fidelity_ = tt::MathFidelity::HiFi3;
class BudaOpNode : public OpNode
{
private:
tt::DataFormat accumulate_df_ = tt::DataFormat::Float16_b;
tt::DataFormat intermediate_df_ = tt::DataFormat::Float16_b;
tt::MathFidelity math_fidelity_ = tt::MathFidelity::HiFi3;
std::shared_ptr<FusedOp> fused_op_ = nullptr;
bool buffering_op_ = false;
bool data_parallel_nop_ = false;

public:
public:
BudaOpNode(const std::string &name, const std::string &op_type) : OpNode(name, op_type, NodeType::kBudaOp) {}
BudaOpNode(const std::string &name, OpType op_type) : OpNode(name, op_type, NodeType::kBudaOp) {}

@@ -514,18 +515,30 @@ class BudaOpNode : public OpNode {
void copy_lowered_op_attributes(PyOpNode *node);
void copy_parent_op_attributes(BudaOpNode *node);

virtual std::unique_ptr<Node> clone(std::string const& name = "") override;
virtual std::unique_ptr<Node> clone(std::string const &name = "") override;

void set_fused_op(std::shared_ptr<FusedOp> fused_op) { fused_op_ = fused_op; }
bool is_fused_op() const { return fused_op_ != nullptr; }
std::shared_ptr<FusedOp> get_fused_op() const { TT_ASSERT(fused_op_ != nullptr); return fused_op_; }
std::shared_ptr<FusedOp> get_fused_op() const
{
TT_ASSERT(fused_op_ != nullptr);
return fused_op_;
}

void set_buffering_op(bool buffering_op) { buffering_op_ = buffering_op; }
bool is_buffering_op() const { return buffering_op_; }

#ifdef DEBUG
void set_data_parallel_nop(bool data_parallel_nop)
{
TT_ASSERT(!data_parallel_nop || "nop" == op_type().op);
data_parallel_nop_ = data_parallel_nop;
}

bool is_data_parallel_nop() const { return data_parallel_nop_; }

#ifdef DEBUG
std::shared_ptr<balancer::BudaOpNodeLegalizerFailureInfo> leg_debug_info = nullptr;
#endif
#endif
};

class BudaNaryTMNode : public Node
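The TT_ASSERT in set_data_parallel_nop encodes an implication: !data_parallel_nop || "nop" == op_type().op means "if the flag is set, the op must be a nop". A tiny standalone illustration of that guard style (hypothetical names, not PyBuda code):

#include <cassert>
#include <string>

// Flag an op as a data-parallel nop, but only if it really is a "nop".
// assert(!flag || cond) fires exactly when flag is true and cond is false.
void set_data_parallel_flag(bool flag, const std::string &op_type, bool &out_flag) {
    assert(!flag || op_type == "nop");
    out_flag = flag;
}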
112 changes: 73 additions & 39 deletions pybuda/csrc/lower_to_buda/netlist.cpp
@@ -696,8 +696,21 @@ std::pair<int, int> get_epoch_allocate_deallocate(graphlib::Node *q, const place
}
}

// Find out the updated epoch id after inserting empty epochs, only applies to n300 data parallel
size_t get_updated_epoch_id(size_t epoch_id, const vector<size_t>& dp_epochs)
{
size_t num_of_insertions = 0;
for (size_t dp_epoch: dp_epochs)
{
if (epoch_id > dp_epoch)
num_of_insertions++;
}
return epoch_id + num_of_insertions;
}

std::vector<program::Program> create_programs(
Graph *graph, placer::PlacerSolution &placer_solution, BudaGraph &buda_graph, const std::string &arch_string)
Graph *graph, placer::PlacerSolution &placer_solution, BudaGraph &buda_graph, const std::string &arch_string,
const vector<size_t> &dp_epochs)
{
std::vector<program::Program> programs;

@@ -729,7 +742,7 @@ std::vector<program::Program> create_programs(
for (std::uint32_t epoch : epochs)
{
input_queues.push_back(graph->nodes(
[&graph, &placer_solution, epoch](Node *node)
[&graph, &placer_solution, epoch, &dp_epochs](Node *node)
{
if ((node->node_type() != graphlib::NodeType::kInput) &&
(node->node_type() != graphlib::NodeType::kQueue) &&
@@ -755,7 +768,7 @@
{
if (
// Our epoch
(placer_solution.name_to_op_placement.at(neighbour->name()).epoch_id() == epoch) &&
(get_updated_epoch_id(placer_solution.epoch_id(neighbour->name()), dp_epochs) == epoch) &&

(
// Input
@@ -799,7 +812,7 @@
for (std::uint32_t epoch : epochs)
{
parameter_queues.push_back(graph->nodes(
[&graph, &placer_solution, epoch](Node *node)
[&graph, &placer_solution, epoch, &dp_epochs](Node *node)
{
if (node->node_type() != graphlib::NodeType::kInput)
return false;
Expand All @@ -812,7 +825,7 @@ std::vector<program::Program> create_programs(
{
if (
// Our epoch
(placer_solution.name_to_op_placement.at(user->name()).epoch_id() == epoch) &&
(get_updated_epoch_id(placer_solution.epoch_id(user->name()), dp_epochs) == epoch) &&
((node->as<graphlib::InputNode>()->is_parameter()) ||
(node->as<graphlib::InputNode>()->is_constant())))
return true;
@@ -832,7 +845,7 @@
for (std::uint32_t epoch : epochs)
{
gradient_queues.push_back(graph->nodes(
[&graph, &placer_solution, epoch, have_opt_epochs](Node *node)
[&graph, &placer_solution, epoch, &dp_epochs, have_opt_epochs](Node *node)
{
if ((node->node_type() != graphlib::NodeType::kQueue) ||
(!node->as<graphlib::QueueNode>()->is_grad_accumulator()))
@@ -857,12 +870,12 @@
{
return
// Bwd
((placer_solution.name_to_op_placement.at(producer->name()).epoch_id() == epoch) &&
((get_updated_epoch_id(placer_solution.epoch_id(producer->name()), dp_epochs) == epoch) &&
producer->as<graphlib::BudaOpNode>()->is_gradient_op()) ||

// Optimizer
((consumer != nullptr) &&
(placer_solution.name_to_op_placement.at(consumer->name()).epoch_id() == epoch));
(get_updated_epoch_id(placer_solution.epoch_id(consumer->name()), dp_epochs) == epoch));
}
catch (std::out_of_range &e)
{
@@ -962,7 +975,7 @@
num_entries,
microbatch_size);
// Need to increment static queue rd/wtr ptrs as the queue is persistent
uint32_t temporal_epoch_id = placer_solution.temporal_epoch_id(epoch);
uint32_t temporal_epoch_id = get_updated_epoch_id(placer_solution.temporal_epoch_id(epoch), dp_epochs);
const auto &[lptr, gptr] =
qvars(q, temporal_epoch_id, program::Variable::ShadowType::NONE, true);

@@ -998,7 +1011,7 @@
continue;
}

uint32_t temporal_epoch_id = placer_solution.temporal_epoch_id(epoch);
uint32_t temporal_epoch_id = get_updated_epoch_id(placer_solution.temporal_epoch_id(epoch), dp_epochs);
bool read_global;
if (q->as<graphlib::QueueNode>()->is_output())
{
@@ -1437,7 +1450,7 @@ static std::vector<std::size_t> get_input_dram_io_buf_size_tiles(
return input_dram_io_buf_size_tiles;
}

const int pipegen_available_dram_io_space_per_stream = free_l1_space / num_dram_readers;
const int pipegen_available_dram_io_space_per_stream = free_l1_space / num_dram_readers; // try /2 TODO
int current_stream_available_dram_io_space = pipegen_available_dram_io_space_per_stream;

for (std::size_t input_idx = 0; input_idx < operands.size(); ++input_idx)
@@ -1673,50 +1686,71 @@ BudaNetlist lower_to_buda_netlist(
}
}

size_t last_epoch_id = -1; // final epoch for dp, TODO
for (const auto& [key, value] : placer_solution.name_to_op_placement)
{
if (key.find("dp_nop") != std::string::npos)
{
last_epoch_id = value.epoch_id();
break;
}
}

for (size_t epoch_id = 0; epoch_id < buda_graph.epoch_types.size(); ++epoch_id)
vector<size_t> dp_epochs;
unordered_map<int, tt::placer::EpochInfo> epoch_info_map;
for (size_t epoch_id = 0; epoch_id < epoch_count; ++epoch_id)
{
int chip_id = placer_solution.epoch_id_to_chip.at(epoch_id);
if (env_as<bool>("PYBUDA_N300_DATA_PARALLEL") && epoch_id != last_epoch_id)
bool is_dp_epoch = false;
if (env_as<bool>("PYBUDA_N300_DATA_PARALLEL"))
{
buda_graph.epoch_target_devices.push_back({BudaDevice(0), BudaDevice(1)});
is_dp_epoch = true;
for (const placer::OpPlacement &placement: placer_solution.epoch_id_to_op_placement[epoch_id])
{
BudaOpNode* op_node = static_cast<BudaOpNode*>(graph->get_node_by_name(placement.name));
if (!op_node->is_data_parallel_nop())
{
is_dp_epoch = false;
break;
}
}

auto epoch_info = placer_solution.epoch_id_to_epoch_info.at(epoch_id);
epoch_info_map[epoch_id + dp_epochs.size()] = epoch_info;

if (is_dp_epoch)
{
dp_epochs.push_back(epoch_id);
TT_ASSERT(chip_id == 0, "MMIO ops are expected to be placed on chip 0");
buda_graph.epoch_target_devices.push_back({BudaDevice(chip_id)});

// insert an empty graph on the non-MMIO chip (1 by default)
buda_graph.ops.insert(buda_graph.ops.begin() + epoch_id + dp_epochs.size(), std::vector<BudaOp>());
buda_graph.epoch_types.insert(buda_graph.epoch_types.begin() + epoch_id + dp_epochs.size(), buda_graph.epoch_types.at(epoch_id));
buda_graph.epoch_target_devices.push_back({BudaDevice(1)});

epoch_info_map[epoch_id + dp_epochs.size()] = {
.global_epoch_id = epoch_info.global_epoch_id,
.temporal_epoch_id = epoch_info.temporal_epoch_id,
.spatial_epoch_id = 1,
.epoch_type = epoch_info.epoch_type
};
}
else
{
buda_graph.epoch_target_devices.push_back({BudaDevice(0), BudaDevice(1)});
}
}
else
{
buda_graph.epoch_target_devices.push_back({BudaDevice(chip_id)});
}

buda_graph.epoch_to_temporal_epoch_id.push_back(placer_solution.temporal_epoch_id(epoch_id));
buda_graph.epoch_to_subgraph_index.push_back(placer_solution.epoch_id_to_subgraph_index[epoch_id]);
if (is_dp_epoch)
{
buda_graph.epoch_to_temporal_epoch_id.push_back(placer_solution.temporal_epoch_id(epoch_id));
buda_graph.epoch_to_subgraph_index.push_back(placer_solution.epoch_id_to_subgraph_index[epoch_id]);
}
}

if (env_as<bool>("PYBUDA_N300_DATA_PARALLEL"))
{
// insert an empty graph for the last temporal epoch on chip 1 (non MMIO)
buda_graph.ops.push_back({});
buda_graph.epoch_types.push_back(buda_graph.epoch_types.back());
//buda_graph.epoch_types.push_back(graphlib::NodeEpochType::Forward);
buda_graph.epoch_target_devices.push_back({BudaDevice(1)});
buda_graph.epoch_to_temporal_epoch_id.push_back(buda_graph.epoch_to_temporal_epoch_id.back());
buda_graph.epoch_to_subgraph_index.push_back(0);

placer_solution.epoch_id_to_epoch_info[epoch_count] = {
.global_epoch_id=placer_solution.epoch_id_to_epoch_info[epoch_count-1].global_epoch_id,
.temporal_epoch_id=buda_graph.epoch_to_temporal_epoch_id.back(),
.spatial_epoch_id=1,
.epoch_type=buda_graph.epoch_types.back()
};
placer_solution.epoch_id_to_epoch_info = epoch_info_map;
}

net.programs = create_programs(graph, placer_solution, buda_graph, arch_string);
net.programs = create_programs(graph, placer_solution, buda_graph, arch_string, dp_epochs);
net.chip_ids = chip_ids;
net.arch_string = arch_string;

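The epoch remapping used throughout create_programs and lower_to_buda_netlist boils down to get_updated_epoch_id: after empty epochs are inserted on the non-MMIO chip for N300 data parallel, every original epoch id shifts up by one for each insertion that lands before it. A self-contained copy of that logic with a small worked example (the dp_epochs values here are hypothetical):

#include <cstddef>
#include <cstdio>
#include <vector>

// Mirrors get_updated_epoch_id from the diff above: an original epoch id is
// shifted up by one for every data-parallel epoch inserted before it.
std::size_t updated_epoch_id(std::size_t epoch_id, const std::vector<std::size_t> &dp_epochs) {
    std::size_t num_of_insertions = 0;
    for (std::size_t dp_epoch : dp_epochs)
        if (epoch_id > dp_epoch)
            num_of_insertions++;
    return epoch_id + num_of_insertions;
}

int main() {
    // Hypothetical placement: empty epochs were inserted after epochs 1 and 3.
    std::vector<std::size_t> dp_epochs = {1, 3};
    for (std::size_t e = 0; e < 5; ++e)
        std::printf("epoch %zu -> %zu\n", e, updated_epoch_id(e, dp_epochs));
    // Prints: 0 -> 0, 1 -> 1, 2 -> 3, 3 -> 4, 4 -> 6
    return 0;
}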