
CUDA code=700 (cudaErrorIllegalAddress): the error and how to track it down

While debugging my own CUDA code I recently ran into a code=700 (cudaErrorIllegalAddress) error. This post records how I diagnosed and fixed it.

The error

The error is returned by a CUDA API call and caught by the checkCudaErrors() helper (the common CUDA error-checking idiom, shown below).

// Headers needed for fprintf/exit and the CUDA runtime API.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

template <typename T>
void check(T result, char const *const func, const char *const file,
           int const line) {
  if (result) {
    fprintf(stderr, "CUDA error at %s:%d code=%d(%s) \"%s\" \n", file, line,
            static_cast<unsigned int>(result), cudaGetErrorName(result), func);
    exit(EXIT_FAILURE);
  }
}
#define checkCudaErrors(val) check((val), #val, __FILE__, __LINE__)
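
The macro is used by wrapping each CUDA API call. A minimal usage sketch (hypothetical variable names), matching the cudaMemGetInfo() call that appears in the error message below:

size_t freeMem = 0, totalMem = 0;
checkCudaErrors(cudaMemGetInfo(&freeMem, &totalMem));  // prints "CUDA error at ..." and exits on failure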

The run-time error is shown below; it indicates that the failure occurred while executing cudaMemGetInfo().

huanghy@node8:~/CL/src/cuda/build$ ./example 
[sample_cuda] start
[sample_kernel] grid_size:1, block_size:512, shm_size:6144
[sample_kernel] finished
CUDA error at /home/huanghy/CL/src/cuda/sample.cu:53 code=700(cudaErrorIllegalAddress) "cudaMemGetInfo(&freeMem, &totalMem)" 

Cause

A quick look at the documentation shows that code=700 (cudaErrorIllegalAddress) means "an illegal memory access was encountered".

In most cases this error is caused by an out-of-bounds array access. Importantly, though, the place where the error is reported is often not where the problem actually lives; it is usually triggered by an illegal access in a kernel that ran earlier.
Here, for example, the error is reported while the API function cudaMemGetInfo() executes; it could just as well surface during some later user-defined kernel. Either way, repeatedly inspecting the code at the reported location is unlikely to solve the problem.
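
To illustrate the mechanism, here is a minimal, self-contained sketch of the pattern (hypothetical kernel and variable names, not the original sample_kernel): the kernel launch is asynchronous and returns cudaSuccess, the out-of-bounds write corrupts the CUDA context, and the error only surfaces at a later synchronizing API call. Note that a small overrun may land in the allocator's padding and never trap in hardware, but compute-sanitizer (next section) flags it either way.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: the bounds guard "if (idx < n)" is missing, so
// threads with idx >= n write past the end of the allocation.
__global__ void bad_kernel(int *buf, int n) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  buf[idx] = idx;  // out-of-bounds write for idx >= n
}

int main() {
  const int n = 100;
  int *d_buf = nullptr;
  cudaMalloc(&d_buf, n * sizeof(int));

  // 512 threads write into a 100-element buffer; the launch itself still
  // returns cudaSuccess because kernel execution is asynchronous.
  bad_kernel<<<1, 512>>>(d_buf, n);

  // The illegal access only shows up at a later, synchronizing API call:
  // here cudaDeviceSynchronize(); in the post above it was cudaMemGetInfo().
  cudaError_t err = cudaDeviceSynchronize();
  printf("%s\n", cudaGetErrorName(err));  // typically cudaErrorIllegalAddress

  cudaFree(d_buf);
  return 0;
}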

Troubleshooting

A good way to track down this kind of problem, and to check the memory accesses of your CUDA code in general, is NVIDIA's compute-sanitizer.
It bundles several tools; one of them, memcheck, performs memory-access checking.

Run the memory check with the following command:

compute-sanitizer --launch-timeout=0 --tool=memcheck ./example > opt.txt 2>&1

Here ./example is the executable under test. Because the output can be large, it is redirected to a file. --launch-timeout=0 makes compute-sanitizer wait indefinitely for the target to launch, which avoids it timing out and aborting as shown below.

========= COMPUTE-SANITIZER
========= Error: No attachable process found. compute-sanitizer timed-out.
========= Default timeout can be adjusted with --launch-timeout. Awaiting target completion.

compute-sanitizer eventually reports the memory access errors it detected:

========= COMPUTE-SANITIZER
[sample_cuda] start
[sample_kernel] grid_size:1, block_size:512, shm_size:6144
========= Invalid __global__ write of size 4 bytes
=========     at 0x1190 in sample_kernel(int *, at::GenericPackedTensorAccessor<int, (unsigned long)1, at::RestrictPtrTraits, int>, at::GenericPackedTensorAccessor<int, (unsigned long)1, at::RestrictPtrTraits, int>, at::GenericPackedTensorAccessor<int, (unsigned long)1, at::RestrictPtrTraits, int>, at::GenericPackedTensorAccessor<int, (unsigned long)1, at::RestrictPtrTraits, int>, curandStateXORWOW *, unsigned int, int, unsigned int)
=========     by thread (32,0,0) in block (0,0,0)
=========     Address 0x7f40c00275a4 is out of bounds
=========     and is 23,461 bytes after the nearest allocation at 0x7f40c001fc00 of size 7,680 bytes
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame: [0x305c18]
=========                in /usr/lib/x86_64-linux-gnu/libcuda.so.1
=========     Host Frame: [0x1488c]
=========                in /usr/local/cuda-11.8/lib64/libcudart.so.11.0
=========     Host Frame:cudaLaunchKernel [0x6c318]
=========                in /usr/local/cuda-11.8/lib64/libcudart.so.11.0
=========     Host Frame:cudaError cudaLaunchKernel<char>(char const*, dim3, dim3, void**, unsigned long, CUstream_st*) [0x1f2f7]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:__device_stub__Z23sample_kernelPiN2at27GenericPackedTensorAccessorIiLm1ENS0_17RestrictPtrTraitsEiEES3_S3_S3_P17curandStateXORWOWjij(int*, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>&, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>&, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>&, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>&, curandStateXORWOW*, unsigned int, int, unsigned int) [0x1aec2]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:sample_kernel(int*, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>, at::GenericPackedTensorAccessor<int, 1ul, at::RestrictPtrTraits, int>, curandStateXORWOW*, unsigned int, int, unsigned int) [0x1af3a]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:sample_cuda(std::vector<at::Tensor, std::allocator<at::Tensor> >&, std::vector<CSR, std::allocator<CSR> >&, at::Tensor&, CSR const&, unsigned int, unsigned int, unsigned int, unsigned long long) [0x1a57d]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:sample(std::vector<at::Tensor, std::allocator<at::Tensor> >&, std::vector<CSR, std::allocator<CSR> >&, at::Tensor&, CSR const&, unsigned int, unsigned int, unsigned int, unsigned long long) [0x18900]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:main [0x87d0]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:__libc_start_main [0x21c87]
=========                in /lib/x86_64-linux-gnu/libc.so.6
=========     Host Frame:_start [0x804a]
=========                in /home/huanghy/CL/src/cuda/build/./example
========= 
=========  (many similar errors for other threadIdx/blockIdx values follow; omitted here)
========= 
[sample_kernel] finished
========= Program hit cudaErrorLaunchFailure (error 719) due to "unspecified launch failure" on CUDA API call to cudaMemGetInfo.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame: [0x4545f6]
=========                in /usr/lib/x86_64-linux-gnu/libcuda.so.1
=========     Host Frame:cudaMemGetInfo [0x533ab]
=========                in /usr/local/cuda-11.8/lib64/libcudart.so.11.0
=========     Host Frame:print_device_mem() [0x19796]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:sample_cuda(std::vector<at::Tensor, std::allocator<at::Tensor> >&, std::vector<CSR, std::allocator<CSR> >&, at::Tensor&, CSR const&, unsigned int, unsigned int, unsigned int, unsigned long long) [0x1a5bb]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:sample(std::vector<at::Tensor, std::allocator<at::Tensor> >&, std::vector<CSR, std::allocator<CSR> >&, at::Tensor&, CSR const&, unsigned int, unsigned int, unsigned int, unsigned long long) [0x18900]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:main [0x87d0]
=========                in /home/huanghy/CL/src/cuda/build/./example
=========     Host Frame:__libc_start_main [0x21c87]
=========                in /lib/x86_64-linux-gnu/libc.so.6
=========     Host Frame:_start [0x804a]
=========                in /home/huanghy/CL/src/cuda/build/./example
========= 
CUDA error at /home/huanghy/CL/src/cuda/sample.cu:53 code=719(cudaErrorLaunchFailure) "cudaMemGetInfo(&freeMem, &totalMem)" 
========= Target application returned an error
========= ERROR SUMMARY: 34 errors

In its output, compute-sanitizer names the kernel in which the out-of-bounds access occurred, along with the offending threadIdx, blockIdx, and memory address.

In the output above, the out-of-bounds access ("Address 0x7f40c00275a4 is out of bounds") happened in sample_kernel() at threadIdx (32,0,0), blockIdx (0,0,0).

The raw address alone rarely tells you which exact access went out of bounds, but narrowing the problem down to a specific kernel is already a big help.
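
Two follow-up steps usually pin down the exact statement (general suggestions, not part of the run above): compile device code with nvcc -lineinfo so compute-sanitizer can also print the source file and line of the invalid access, and make sure every global write in the reported kernel is protected by a bounds guard. A sketch of the typical fix, with a hypothetical kernel standing in for sample_kernel():

// Compile with: nvcc -lineinfo -o example sample.cu
// (-lineinfo lets compute-sanitizer report source file/line for device code.)
__global__ void sample_kernel_fixed(int *out, int n) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < n) {  // guard: only threads with a valid index may write
    out[idx] = idx;
  }
}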

References

  • cuda - Unspecified launch failure on Memcpy - Stack Overflow
  • Compute-sanitizer not quite a drop-in replacement of cuda-memcheck - CUDA Developer Tools / Compute Sanitizer - NVIDIA Developer Forums