Error when running fpAnalyser
Hi there,
I was working on a YOLOv3 project in Docker, following the steps provided in the Document Center.
The steps before quantization seem to work for our model, but when I run fpAnalyserCompilerIpevaluator_520.py
to perform quantization, the following error is raised:
/workspace/miniconda/lib/python3.7/site-packages/numpy/__init__.py:156: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
  from . import _distributor_init
[2021-06-12 08:45:43] [error] [Thread: 26] [/projects_src/kneron_piano/knerex/updater/include/base/process_node/BiasAdjustmentBase.hpp:397] Warning: not enough memory to calculate bias adjustment, stopped. Increase virtual memory or reduce the number of input images, or both, Or set less_DRAM_mode in dump_level in config file to re-run.
Traceback (most recent call last):
  File "/workspace/scripts/fpAnalyserCompilerIpevaluator_520.py", line 43, in <module>
    bie_file = run_knerex(model_config, threads, 520)
  File "/workspace/scripts/utils/run_knerex.py", line 63, in run_knerex
    shutil.copy2(TMP_FOLDER + '/output.quan.wqbi.bie', output_path)
  File "/workspace/miniconda/lib/python3.7/shutil.py", line 266, in copy2
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/workspace/miniconda/lib/python3.7/shutil.py", line 120, in copyfile
    with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/.tmp/output.quan.wqbi.bie'
This looks like an out-of-memory problem. Here is what I have tried so far to eliminate it:
- ran the Docker container with all of my laptop's memory (8 GB) and added swap memory
- reduced the number of input images, down to a single image
- tried adding "dump_level": "less_DRAM_mode" to the config file input_params.json; this was only my own guess, since I couldn't find the dump_level argument anywhere in the KL520 manual
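For reference, here is roughly how I edited the config (a sketch only: the key name and value are quoted from the error message, and placing the key at the top level of input_params.json is my own assumption, since the KL520 manual does not document it):

```python
import json

def add_dump_level(config_path):
    """Add the dump_level key that the error message suggests.

    Top-level placement in input_params.json is a guess; the key is
    not documented in the KL520 manual.
    """
    with open(config_path) as f:
        config = json.load(f)
    config["dump_level"] = "less_DRAM_mode"  # value quoted from the error log
    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)
```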
All of these attempts failed (or perhaps my dump_level config is wrong), and I can't see any other solution.
Please help!
The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.
Comments
Once a run has hit an out-of-memory condition, the system can remain in a bad memory state.
Please exit the Docker environment, log in again, and retry with the single-image case.
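Something like the following (a sketch only: the image tag and host mount path are placeholders, not the exact ones from your setup; the script path comes from your traceback):

```shell
# Leave the current container entirely so any stale memory state is discarded,
# then re-launch a fresh toolchain container. Replace /your/workspace and the
# image tag with the ones you actually use.
docker run --rm -it -v /your/workspace:/docker_mount kneron/toolchain:520

# Inside the fresh container, with the config pointing at a single input
# image, re-run the quantization script from your traceback:
python /workspace/scripts/fpAnalyserCompilerIpevaluator_520.py
```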