Python: Torch not compiled with CUDA enabled

AssertionError: Torch not compiled with CUDA enabled – Code Example

In this article we will look at code-level solutions for the PyTorch error AssertionError: Torch not compiled with CUDA enabled.

Why does this error occur?

CUDA is a toolkit that lets the GPU take over parts of an application and increase performance. To work with it, you need a CUDA-capable NVIDIA GPU installed in your system, and your PyTorch build must also support GPU acceleration.

This AssertionError occurs when we try to use CUDA on a PyTorch build that is CPU-only. You have two options to resolve the error –

  1. Use a PyTorch version that is built with CUDA support. Download the right stable version from here.
  2. Disable CUDA in your code. This can be tricky because you might not be using CUDA directly; one of the libraries in your project may be using it instead, so you need to track that dependency down.

Code Example

Error Code – Let’s first reproduce the error –

1. CUDA passed as a function parameter

import torch

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
print(my_tensor)

The above code will throw the error AssertionError: Torch not compiled with CUDA enabled. Here is the complete output –

Traceback (most recent call last):
  File "C:/Users/aka/project/test.py", line 3, in <module>
    my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
  File "C:\Users\aka\anaconda3\envs\deeplearning\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

This is because we set the parameter device="cuda". If we change it to the CPU, like device="cpu", the error disappears.

2. A dependency using a PyTorch function with CUDA enabled

There are many PyTorch functions that copy data to CUDA memory for faster performance. They are generally disabled by default, but a dependency of your project could be using those functions and enabling them. In that case you need to look into that dependency and disable the behaviour there.

For example, the torch.utils.data.DataLoader class has a pin_memory parameter which, according to the PyTorch documentation, works as follows –

pin_memory (bool, optional) – If True, the data loader will copy Tensors into device/CUDA pinned memory before returning them.

If some code uses this class with pin_memory=True on a CPU-only build, we can run into the Torch not compiled with CUDA enabled error.
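
For illustration, here is a minimal sketch of such a loader. The dataset and sizes are made up, and depending on the PyTorch version and build, iterating over it on a CPU-only machine can surface as the CUDA assertion, a pinned-memory error, or just a warning:

import torch
from torch.utils.data import DataLoader, TensorDataset

# A tiny made-up dataset: 8 samples with 3 features each
dataset = TensorDataset(torch.randn(8, 3), torch.arange(8))

# pin_memory=True asks the loader to copy each batch into CUDA pinned memory
loader = DataLoader(dataset, batch_size=4, pin_memory=True)

for features, labels in loader:
    print(features.shape, labels)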

Solutions

1. Check the PyTorch version


First of all, check whether you have installed the right build. PyTorch is available both with and without CUDA support.

PyTorch versions with and without CUDA
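
One quick way to tell which build you currently have, at least for pip wheels, is to print the version strings. This is a small sketch; conda builds may not carry the "+cpu" / "+cu…" suffix:

import torch

print(torch.__version__)   # e.g. "2.1.0+cpu" for a CPU-only wheel, "2.1.0+cu121" for a CUDA build
print(torch.version.cuda)  # None on a CPU-only build, otherwise the CUDA version it was built with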

2. Check if CUDA is available in the installed PyTorch

Use this code to check whether CUDA is available in your installed PyTorch –

import torch

print(torch.cuda.is_available())

3. Create a new project environment

A lot of troubleshooting and package juggling can leave a project environment in a broken state. Try creating a fresh environment and see whether that resolves your CUDA error.

4. Using the .cuda() function

Some PyTorch objects can be moved to the GPU by calling .cuda() on them. For example, a neural network built with nn.Sequential() can be run on CUDA. Append or remove the call according to your use case –

import torch.nn as nn
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
])).cuda()
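
Calling .cuda() on a CPU-only build raises the same assertion, so a common defensive pattern is to move the model only when a GPU is actually usable. A minimal sketch with a stand-in model:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in model; the same applies to the Sequential above

# Only call .cuda() when CUDA is actually usable, otherwise keep the model on the CPU
if torch.cuda.is_available():
    model = model.cuda()

print(next(model.parameters()).device)  # cuda:0 on a working GPU setup, cpu otherwise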

5. Provide the correct device parameter

If a function expects a device parameter, pass "cuda" or "cpu" according to your use case –

import torch

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cpu")
print(my_tensor)
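
If the same script needs to run both on machines with and without CUDA, a common pattern (a sketch, not part of the original example) is to pick the device at runtime:

import torch

# Fall back to the CPU automatically when CUDA is not compiled in or no GPU is present
device = "cuda" if torch.cuda.is_available() else "cpu"

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device=device)
print(my_tensor.device)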


AssertionError: Torch not compiled with CUDA enabled #30664



Comments

I’m trying to do neural style swapping, and for some reason, I keep getting the following errors.
AssertionError: Torch not compiled with CUDA enabled
  File "c:\apps\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 260, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "c:\apps\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 187, in _apply
    module._apply(fn)
  File "c:\apps\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 187, in _apply
    module._apply(fn)
  File "c:\apps\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 193, in _apply
    param.data = fn(param.data)
  File "c:\apps\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 260, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "c:\apps\Miniconda3\lib\site-packages\torch\cuda\__init__.py", line 161, in _lazy_init
    _check_driver()
  File "c:\apps\Miniconda3\lib\site-packages\torch\cuda\__init__.py", line 75, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I recently reinstalled conda and this has just been completely broken for me.


zou3519 added the triaged and oncall: binaries labels on Dec 4, 2019

I am new to PyTorch and have already installed it on my Mac laptop.

I'm trying to run the wsi_bert.py code and get this error:

"""
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
"""
Please help.
Thx a lot!

Try this.
conda install -c pytorch torchvision cudatoolkit=10.1 pytorch
Adjust the cudatoolkit version depending on what CUDA version you have.

Note that unless you know otherwise, your Mac probably doesn’t have an NVIDIA GPU on it.


Torch not compiled with CUDA enabled in PyTorch

PyTorch is a popular open-source machine learning library for Python that is primarily used for developing and training deep learning models.

If you have an NVIDIA GPU and the necessary CUDA software installed on your machine, PyTorch can utilize the GPU to significantly speed up the training process.

Sometimes you may encounter the AssertionError: Torch not compiled with CUDA enabled error, which basically means that the PyTorch build you're using does not support CUDA (there are separate builds with and without CUDA support).

This article shows a few possible ways to fix “Torch not compiled with CUDA enabled”. Because the underlying issue varies from setup to setup, you may have to try each of the solutions below until the error message goes away.

Completely reinstall PyTorch with CUDA

Uninstall current PyTorch

PyTorch can be installed in a few different ways. Therefore, there are various ways to uninstall it, depending on which package manager you’re using.

However, below is the combination of commands recommended by PyTorch themselves to fully uninstall the package, no matter which package manager installed it in the past.

conda uninstall pytorch
pip uninstall torch
pip uninstall torch  # run this command twice

Notice that you would have to run pip uninstall torch multiple times (usually two). You’ll know torch is fully uninstalled when you see WARNING: Skipping torch as it is not installed.


Install PyTorch with CUDA support

In order to install PyTorch with CUDA support, you need to have the following prerequisites:

  1. A CUDA-compatible NVIDIA GPU with drivers installed. You can check if your GPU is compatible in this CUDA-enabled GPUs list.
  2. The CUDA Toolkit, which includes the nvcc compiler and the CUDA libraries. These are required to build PyTorch from source.
  3. The cuDNN library, which provides optimized implementations of standard deep learning routines.

Once you have these prerequisites installed, install PyTorch with CUDA support by following these steps:

  1. Launch a terminal window and run pip install --upgrade pip. This ensures that you have the latest version of pip.
  2. Go to https://pytorch.org/get-started/locally/ and choose the PyTorch Build, current operating system, package manager and CUDA version suitable to your setup. Then copy the generated command to your clipboard.
  3. Run the command you’ve just copied in a terminal/command prompt window. Answer yes to each prompt if needed.
  4. If you want to install a previous version of PyTorch, follow instructions at https://pytorch.org/get-started/previous-versions/.
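
After the installation finishes, a quick sanity check from Python looks like this (a sketch; the exact version strings depend on the build you selected):

import torch

print(torch.__version__)               # e.g. "2.1.0+cu121" for a CUDA 12.1 build
print(torch.version.cuda)              # CUDA version the build targets, or None on a CPU-only build
print(torch.backends.cudnn.version())  # bundled cuDNN version, or None when cuDNN is unavailable
print(torch.cuda.is_available())       # True only if a usable GPU and driver are also present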

Remove “cpuonly” package

In a few specific setups, uninstalling PyTorch itself is not enough. You may have to remove the “cpuonly” package, too.

If the solution above doesn’t work, try uninstalling both PyTorch and the cpuonly package using these commands:

conda uninstall pytorch
conda uninstall cpuonly
pip uninstall torch --no-cache-dir
pip uninstall cpuonly --no-cache-dir

Notice that we have to use --no-cache-dir in the commands to bypass the pip cache, otherwise you will end up reinstalling the no-CUDA version already downloaded on your system.

Verify if CUDA is available

Once the new installation finishes, you may want to check if the GPU is available for PyTorch. Below is the recommended way to do this using is_available() (code from the PyTorch | Get Started page)

import torch
torch.cuda.is_available()

If the command above returns False, then one of the following is true:

  • You don’t have any GPU.
  • The NVIDIA drivers aren’t loaded, so the OS can’t recognize the GPU.
  • The GPU is hidden by the environment variable CUDA_VISIBLE_DEVICES. When CUDA_VISIBLE_DEVICES is set to -1, all of your devices are hidden. You can print this value in your code using os.environ['CUDA_VISIBLE_DEVICES'].
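
A small diagnostic that covers those cases could look like this (a sketch; the printed values are illustrative):

import os
import torch

print(torch.cuda.is_available())                          # False in all of the cases above
print(torch.cuda.device_count())                          # 0 when no GPU is visible to PyTorch
print(os.environ.get("CUDA_VISIBLE_DEVICES", "not set"))  # "-1" hides every device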

Even if torch.cuda.is_available() returns True, it does not necessarily mean that the GPU is being used. When you construct a device in PyTorch, you can assign tensors to it, but tensors are placed on the CPU by default.
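
To confirm that work really lands on the GPU, move tensors explicitly and check their .device attribute. A minimal sketch:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(3)   # created on the CPU by default
x = x.to(device)    # explicitly moved to the chosen device
print(x.device)     # prints "cuda:0" only when the GPU is really being used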

We hope that the information above is useful and helped you successfully fix the AssertionError: Torch not compiled with CUDA enabled error.

If you have any questions, then please feel free to ask in the comments below.
