use toolchain binaries
use toolchain binaries
Traceback (most recent call last):
  File "/data1/step1.py", line 8, in <module>
    import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
(onnx1.13) root@6eed40e53a9a:/data1# conda activate base
(base) root@6eed40e53a9a:/data1# python step1.py
use toolchain binaries
use toolchain binaries
Using TensorFlow backend.
Success for model "input/input" when running "general/Success"

===========================================
=          report on flow status          =
===========================================
category: input, case: input
kdp520:
    FPS:               21.8093
    compiler frontend: ✓
    compiler_cfg:      ✓
    compiler:          ✓
    compiler hw info:  ✓
    cpu_node:          KneronResize: up_sampling2d_1
    wt_overhead (%):   9
    cmd_size(KB):      3
    wt_size(MB):       9
general:
    Success:           ✓
    clean_opt:         ✓
    nef_model_id:      32768
    gen_fx_report:     ✓
    model oversize:    ✓
    post_clean:        ✓
    onnx size (MB):    33

Npu performance evaluation result:
docker_version: kneron/toolchain:v0.30.0
comments:
kdp520/input bitwidth: int8
kdp520/output bitwidth: int8
kdp520/cpu bitwidth: int8
kdp520/datapath bitwidth: int8
kdp520/weight bitwidth: int8
kdp520/ip_eval/fps: 21.8093
kdp520/ip_eval/ITC(ms): 45.852 ms
kdp520/ip_eval/RDMA bandwidth GB/s: 0.8
kdp520/ip_eval/WDMA bandwidth GB/s: 0.8
kdp520/ip_eval/GETW bandwidth GB/s: 0.8
kdp520/ip_eval/cpu_node: KneronResize: up_sampling2d_1

gen fx model report: model_fx_report.html
gen fx model json: model_fx_report.json

, node, node origin, type, node backend
0, OutputNode_conv2d_10_o0, OutputNode_conv2d_10_o0, NPU, OutputNode_conv2d_10_o0
1, OutputNode_conv2d_13_o0, OutputNode_conv2d_13_o0, NPU, OutputNode_conv2d_13_o0
2, concatenate_1, concatenate_1, NPU, concatenate_1
3, concatenate_1_KNOPT_dummy_bn_0, concatenate_1, NPU, concatenate_1_KNOPT_dummy_bn_0
4, concatenate_1_KNOPT_dummy_bn_1, concatenate_1, NPU, concatenate_1_KNOPT_dummy_bn_1
5, conv2d_1, conv2d_1, NPU, npu_fusion_node_conv2d_1_leaky_re_lu_1_max_pooling2d_1
6, conv2d_10, conv2d_10, NPU, conv2d_10
7, conv2d_11, conv2d_11, NPU, npu_fusion_node_conv2d_8_leaky_re_lu_8_KNERON_REFORMAT_next_0
8, conv2d_12, conv2d_12, NPU, npu_fusion_node_conv2d_12_leaky_re_lu_11
9, conv2d_13, conv2d_13, NPU, npu_fusion_node_conv2d_12_leaky_re_lu_11_KNERON_REFORMAT_next_0
10, conv2d_2, conv2d_2, NPU, npu_fusion_node_conv2d_2_leaky_re_lu_2_max_pooling2d_2
11, conv2d_3, conv2d_3, NPU, npu_fusion_node_conv2d_3_leaky_re_lu_3_max_pooling2d_3
12, conv2d_4, conv2d_4, NPU, npu_fusion_node_conv2d_4_leaky_re_lu_4_max_pooling2d_4
13, conv2d_5, conv2d_5, NPU, npu_fusion_node_conv2d_5_leaky_re_lu_5
14, conv2d_6, conv2d_6, NPU, npu_fusion_node_conv2d_6_leaky_re_lu_6
15, conv2d_7, conv2d_7, NPU, npu_fusion_node_conv2d_7_leaky_re_lu_7
16, conv2d_8, conv2d_8, NPU, npu_fusion_node_conv2d_8_leaky_re_lu_8
17, conv2d_9, conv2d_9, NPU, npu_fusion_node_conv2d_9_leaky_re_lu_9
18, leaky_re_lu_1, leaky_re_lu_1, NPU, npu_fusion_node_conv2d_1_leaky_re_lu_1_max_pooling2d_1
19, leaky_re_lu_10, leaky_re_lu_10, NPU, npu_fusion_node_conv2d_8_leaky_re_lu_8_KNERON_REFORMAT_next_0
20, leaky_re_lu_11, leaky_re_lu_11, NPU, npu_fusion_node_conv2d_12_leaky_re_lu_11
21, leaky_re_lu_2, leaky_re_lu_2, NPU, npu_fusion_node_conv2d_2_leaky_re_lu_2_max_pooling2d_2
22, leaky_re_lu_3, leaky_re_lu_3, NPU, npu_fusion_node_conv2d_3_leaky_re_lu_3_max_pooling2d_3
23, leaky_re_lu_4, leaky_re_lu_4, NPU, npu_fusion_node_conv2d_4_leaky_re_lu_4_max_pooling2d_4
24, leaky_re_lu_5, leaky_re_lu_5, NPU, npu_fusion_node_conv2d_5_leaky_re_lu_5
25, leaky_re_lu_6, leaky_re_lu_6, NPU, npu_fusion_node_conv2d_6_leaky_re_lu_6
26, leaky_re_lu_7, leaky_re_lu_7, NPU, npu_fusion_node_conv2d_7_leaky_re_lu_7
27, leaky_re_lu_8, leaky_re_lu_8, NPU, npu_fusion_node_conv2d_8_leaky_re_lu_8
28, leaky_re_lu_9, leaky_re_lu_9, NPU, npu_fusion_node_conv2d_9_leaky_re_lu_9
29, max_pooling2d_1, max_pooling2d_1, NPU, npu_fusion_node_conv2d_1_leaky_re_lu_1_max_pooling2d_1
30, max_pooling2d_2, max_pooling2d_2, NPU, npu_fusion_node_conv2d_2_leaky_re_lu_2_max_pooling2d_2
31, max_pooling2d_3, max_pooling2d_3, NPU, npu_fusion_node_conv2d_3_leaky_re_lu_3_max_pooling2d_3
32, max_pooling2d_4, max_pooling2d_4, NPU, npu_fusion_node_conv2d_4_leaky_re_lu_4_max_pooling2d_4
33, max_pooling2d_5, max_pooling2d_5, NPU, max_pooling2d_5
34, max_pooling2d_6, max_pooling2d_6, NPU, max_pooling2d_6
35, up_sampling2d_1, up_sampling2d_1, CPU, cpu_fusion_node_up_sampling2d_1

(1, 3, 416, 416)
[2025-07-05 08:02:31] [debug] [Thread: 47] [/projects_src/kneron-piano_v2/dynasty/floating_point/floating_point/src/common/BaseInferencerImpl.cpp:63] start to create operators....
[2025-07-05 08:02:31] [debug] [Thread: 47] [/projects_src/kneron-piano_v2/dynasty/floating_point/floating_point/src/common/BaseInferencerImpl.cpp:102] The model allocated space: 186.042032 Mbytes (including workspace: 0.0 Mbytes)
WARNING:tensorflow:From /workspace/miniconda/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py:1475: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From step1.py:28: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
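As a quick sanity check on the performance report above: the reported inference time per frame (ITC, in ms) is just the reciprocal of the reported FPS. A small sketch using the numbers taken verbatim from the log:

```python
# Cross-check two figures from the NPU performance evaluation:
# ITC (ms per inference) should equal 1000 / FPS.
fps = 21.8093          # kdp520/ip_eval/fps
itc_ms = 1000.0 / fps  # milliseconds per single inference

print(round(itc_ms, 3))  # ~45.852, matching kdp520/ip_eval/ITC(ms): 45.852 ms
```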
(array([[258.8878 , 470.29477, 297.01447, 524.3068 ],
       [233.62656, 218.19919, 306.79242, 381.78162]], dtype=float32), array([0.9248919 , 0.78650415], dtype=float32), array([2, 7], dtype=int32))
processing image: /data1/test_image10/000000001000.jpg
processing image: /data1/test_image10/000000001296.jpg
processing image: /data1/test_image10/000000005001.jpg
processing image: /data1/test_image10/000000000872.jpg
processing image: /data1/test_image10/000000001268.jpg
processing image: /data1/test_image10/000000000885.jpg
processing image: /data1/test_image10/000000000785.jpg
processing image: /data1/test_image10/000000000139.jpg
processing image: /data1/test_image10/000000005193.jpg
processing image: /data1/test_image10/309_190.jpg
Failure for model "input/input" when running "kdp520/HW not support"

===========================================
=          report on flow status          =
===========================================
category: input, case: input
kdp520:
    HW not support: Err: 4
    compiler_cfg:   ✓
general:
    Success:        ✓
    clean_opt:      ✓
    nef_model_id:   32768
    model oversize: ✓
    post_clean:     ✓
    onnx size (MB): 33

Fix point analysis done.
Save bie model to '/data1/kneron_flow/input.kdp520.scaled.bie'
Traceback (most recent call last):
  File "step1.py", line 93, in <module>
    out_data = ktc.kneron_inference([in_data], bie_file=bie_model_path, input_names=["input_1_o0"], platform=520)
  File "/workspace/miniconda/lib/python3.7/site-packages/ktc/inferencer.py", line 35, in kneron_inference
    res = e2e.kneron_inference(*args, **kwargs)
  File "/workspace/E2E_Simulator/python_flow/kneron_inference.py", line 72, in kneron_inference
    input_nodes, _, out_node_shape, d_ioinfo = get_model_io(bie, platform)
  File "/workspace/miniconda/lib/python3.7/site-packages/sys_flow/inference.py", line 76, in get_model_io
    p_onnx, hw_mode, dynasty_bin)
  File "/workspace/miniconda/lib/python3.7/site-packages/sys_flow/flow_utils.py", line 412, in get_ioinfo_from_bie
    assert p_j.exists(), f"output missing: {p_j}"
AssertionError: output missing: /tmp/unpack_bie_fj20567k/SnrShapeInfo.json