CUDA training fails with "Async error was detected"

The installation passes all tests, and the CPU version trains and predicts normally. Only after setting `jt.flags.use_cuda = 1` does the following error appear:


Following the suggestions in the error message did not resolve the problem. The code is the Jittor version in GitHub - chen-zz20/MNIST. I'm not sure what's wrong.
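To narrow this down, a minimal sketch (assuming the standard Jittor API; not from the original report) that runs one tiny op on the GPU — if this already fails, the problem is in the CUDA setup rather than in the MNIST code:

```python
import jittor as jt

jt.flags.use_cuda = 1   # same flag as in the failing run

# Any small op forces JIT compilation and GPU execution.
x = jt.rand(3, 3)
y = (x + x).sum()

# Synchronize so any async CUDA error surfaces at this line
# instead of later inside the dataset worker.
jt.sync_all(True)
print(y)
```

If this snippet succeeds but the full training script still fails, the issue is more likely in the data-loading path (the traceback below points into `jittor/dataset/dataset.py`).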


Hello, please provide the complete run log so we can determine the system environment and other details.

[i 0117 03:55:21.043343 24 compiler.py:955] Jittor(1.3.6.10) src: /home/chenzz/.conda/envs/jittor/lib/python3.9/site-packages/jittor
[i 0117 03:55:21.046332 24 compiler.py:956] g++ at /usr/bin/g++(7.5.0)
[i 0117 03:55:21.046389 24 compiler.py:957] cache_path: /home/chenzz/.cache/jittor/jt1.3.6/g++7.5.0/py3.9.5/Linux-5.3.18-5x50/AMDRyzen73700Xx25/default
[i 0117 03:55:21.901889 24 install_cuda.py:93] cuda_driver_version: [11, 3]
[i 0117 03:55:21.902185 24 install_cuda.py:81] restart /home/chenzz/.conda/envs/jittor/bin/python ['main.py']
[i 0117 03:55:22.037619 60 compiler.py:955] Jittor(1.3.6.10) src: /home/chenzz/.conda/envs/jittor/lib/python3.9/site-packages/jittor
[i 0117 03:55:22.040869 60 compiler.py:956] g++ at /usr/bin/g++(7.5.0)
[i 0117 03:55:22.040990 60 compiler.py:957] cache_path: /home/chenzz/.cache/jittor/jt1.3.6/g++7.5.0/py3.9.5/Linux-5.3.18-5x50/AMDRyzen73700Xx25/default
[i 0117 03:55:22.867546 60 install_cuda.py:93] cuda_driver_version: [11, 3]
[i 0117 03:55:22.872348 60 __init__.py:411] Found /home/chenzz/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc(11.2.152) at /home/chenzz/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc.
[i 0117 03:55:22.876796 60 __init__.py:411] Found addr2line(2.35.1) at /usr/bin/addr2line.
[i 0117 03:55:23.807325 60 compiler.py:1010] cuda key:cu11.2.152_sm_86
[i 0117 03:55:23.970143 60 __init__.py:227] Total mem: 125.72GB, using 16 procs for compiling.
[i 0117 03:55:24.052754 60 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
[i 0117 03:55:24.849488 60 init.cc:62] Found cuda archs: [86,]
[i 0117 03:55:24.862853 60 compile_extern.py:522] mpicc not found, distribution disabled.
[i 0117 03:55:25.774516 60 cuda_flags.cc:39] CUDA enabled.
begin trainning

0%| | 0/10 [00:00<?, ?it/s]
0%| | 0/10 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/home/chenzz/ANN/HW1/jittor/main.py", line 70, in <module>
    train_loss , train_acc = train_epoch(model, train_loader, loss_metrics, optimizer)
  File "/home/chenzz/ANN/HW1/jittor/model.py", line 32, in train_epoch
    for step, (inputs, labels) in enumerate(data_loader):
  File "/home/chenzz/.conda/envs/jittor/lib/python3.9/site-packages/jittor/dataset/dataset.py", line 543, in __iter__
    batch = w.buffer.recv()
RuntimeError: [f 0117 03:55:27.360328 60 py_ring_buffer.cc:204] WorkerError: [f 0117 03:55:26.196096 60 executor.cc:666]
Execute fused operator(0/301) failed.
[JIT Source]: /home/chenzz/.cache/jittor/jt1.3.6/g++7.5.0/py3.9.5/Linux-5.3.18-5x50/AMDRyzen73700Xx25/default/cu11.2.152_sm_86/jit/getitem__Ti_float32__IDIM_2__ODIM_1__IV0_0__IO0__1__VS0__1__IV1__1__IO1_0__JIT_1__JIT_cpu____hash_1bda320c91ab4111_op.cc
[OP TYPE]: getitem.bool
[Input]: float32[60000,784,],
[Output]: float32[784,],
[Async Backtrace]: not found, please set env JT_SYNC=1, trace_py_var=3
[Reason]: [f 0117 03:55:26.195831 60 helper_cuda.h:128] CUDA error at /home/chenzz/.conda/envs/jittor/lib/python3.9/site-packages/jittor/src/mem/allocator/cuda_dual_allocator.h:112 code=3( cudaErrorInitializationError ) cudaMemcpy(mem_ptr, (void*)((int64)da.device_ptr+offset), size, cudaMemcpyDeviceToHost)


Async error was detected. To locate the async backtrace and get a better error report, please rerun your code with two environment variables set:

export JT_SYNC=1
export trace_py_var=3

I have the same problem.

I've run into a similar problem: it always errors while executing an operator and prints those `export` hints. In my case the cause was insufficient GPU memory. This may not show up in the performance panel, because Jittor seems to allocate a large amount of GPU memory at once; sometimes it also hangs inside the memory-allocation function. That is also why the CPU version doesn't hit this error. Reducing the network size or the batch size sometimes helps.
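If memory pressure is indeed the cause, a sketch of the workaround, assuming the stock Jittor MNIST dataset (the batch size of 64 and `num_workers=0` are illustrative values, not from the original code):

```python
import jittor as jt
from jittor.dataset.mnist import MNIST

jt.flags.use_cuda = 1

# Use a smaller batch size (64 is an arbitrary example) to lower
# peak GPU memory. num_workers=0 keeps data loading in-process,
# which also makes the WorkerError above easier to debug.
train_loader = MNIST(train=True).set_attrs(
    batch_size=64, shuffle=True, num_workers=0)
```

If a smaller batch still fails, checking free GPU memory with `nvidia-smi` before the run can confirm whether another process is holding the card.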