
Is there any way to replace CuPy in this code? #4

Open
bukhalmae145 opened this issue Apr 23, 2024 · 10 comments
@bukhalmae145

I'm trying to adapt this code to run on an M1 Mac. The M1 Mac doesn't support CuPy; are there any alternatives that could replace it?

@NevermindNilas

IIRC, softsplat, which GMFSS is based on, uses CuPy. There have been previous attempts to hack together a method that avoids it, but I didn't hear much about them past that.

I think one could replace GMFSS with RIFE.

@routineLife1
Owner

If you disable metricnet in the model, you can use forward_warp2 instead of softsplat. The effect may be slightly reduced, but it avoids the CuPy dependency.
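
For reference, a CuPy-free forward warp can be written in plain PyTorch using `scatter_add_`. Below is a minimal sketch of nearest-neighbour average splatting; `forward_warp_avg` is a hypothetical name for illustration, and the repo's actual `forward_warp2` may be implemented differently (for example with bilinear splatting):

```python
import torch

def forward_warp_avg(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Nearest-neighbour average splatting in pure PyTorch (no CuPy).

    img:  (B, C, H, W) source frame
    flow: (B, 2, H, W) forward optical flow in pixels (dx, dy)
    """
    b, c, h, w = img.shape
    # Integer target coordinates for every source pixel.
    gy, gx = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    tx = (gx[None] + flow[:, 0]).round().long()  # (B, H, W)
    ty = (gy[None] + flow[:, 1]).round().long()

    # Mask out pixels that splat outside the frame.
    valid = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)
    idx = (ty.clamp(0, h - 1) * w + tx.clamp(0, w - 1)).view(b, 1, -1)

    # Accumulate colour and hit counts at each target location.
    out = torch.zeros_like(img).view(b, c, -1)
    cnt = torch.zeros(b, 1, h * w, device=img.device)
    src = (img * valid[:, None]).view(b, c, -1)
    out.scatter_add_(2, idx.expand(-1, c, -1), src)
    cnt.scatter_add_(2, idx, valid[:, None].float().view(b, 1, -1))

    # Average overlapping splats; unhit pixels stay zero (holes).
    return (out / cnt.clamp(min=1)).view(b, c, h, w)
```

Plain averaging drops softsplat's metric-weighted (softmax) blending, which is what metricnet feeds; that is consistent with the note above that disabling metricnet is what makes the substitution possible.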

@NevermindNilas

Oh that's interesting 🤔

@bukhalmae145
Author

> If you disable metricnet in the model, you can use forward_warp2 instead of softsplat. The effect may be slightly reduced, but it avoids the CuPy dependency.

How do I disable it?

@routineLife1
Owner

routineLife1 commented Apr 27, 2024

commit: e0ce5be
Updated support for a no_cupy version.
The effect should not change much, but it will be much slower, roughly 1/3 of the original speed.
There is also a faster and better option, using CUDA programming directly to avoid the CuPy dependency, but that would make porting to Mac difficult.

@bukhalmae145
Author

> commit: e0ce5be
> Updated support for a no_cupy version. The effect should not change much, but it will be much slower, roughly 1/3 of the original speed. There is also a faster and better option, using CUDA programming directly to avoid the CuPy dependency, but that would make porting to Mac difficult.

I'm so sorry for asking so many things, but will this repository be able to replace CUDA with MPS?

@routineLife1
Owner

Sorry, I'm not familiar with MPS.

@bukhalmae145
Author

bukhalmae145 commented Apr 27, 2024

> Sorry, I'm not familiar with MPS.

```
python interpolate_video_forward_anyfps.py -i /Users/workstation/Movies/AFI-ForwardDeduplicate/input.mov -o /Users/workstation/Movies/AFI-ForwardDeduplicate/frames -nf 2 -fps 60 -m gmfss -s -st 12 -scale 1.0 -stf -c

Traceback (most recent call last):
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/interpolate_video_forward_anyfps.py", line 115, in <module>
    model.load_model('weights/train_log_pg104', -1)
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/models/model_pg104/GMFSS.py", line 55, in load_model
    self.flownet.load_state_dict(torch.load('{}/flownet.pkl'.format(path)))
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1025, in load
    return _load(opened_zipfile,
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1446, in _load
    result = unpickler.load()
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1416, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 1390, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 390, in default_restore_location
    result = fn(storage, location)
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 265, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/workstation/Movies/AFI-ForwardDeduplicate/Deduplication/lib/python3.11/site-packages/torch/serialization.py", line 249, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
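
The RuntimeError at the bottom already names the fix: the checkpoint was saved from a CUDA device, so its storages must be remapped at load time. A minimal sketch of that change, assuming the torch.load call in models/model_pg104/GMFSS.py shown in the traceback; the CUDA/MPS/CPU selection logic is illustrative:

```python
import torch

# Pick the best available backend: CUDA, then Apple's MPS, then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Remap the CUDA-saved storages to the CPU while loading, then move
# the module to the chosen device.
state = torch.load("weights/train_log_pg104/flownet.pkl",
                   map_location=torch.device("cpu"))
# e.g. inside GMFSS.load_model:
# self.flownet.load_state_dict(state)
# self.flownet.to(device)
```

Note this only fixes deserialization; the CuPy-backed softsplat kernels would still need the no_cupy path above to run at all on a machine without CUDA.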

What about adapting RIFE-NCNN-Vulkan instead of GMFSS? Would that let me use MPS instead of CUDA?

@routineLife1
Owner

routineLife1 commented Apr 27, 2024

If vs-mlrt integrates this method, supporting Mac should be much simpler.

@bukhalmae145
Author

> If vs-mlrt integrates this method, supporting Mac should be much simpler.

I thought vs-mlrt supported M1 Macs via NCNN-Vulkan.
