
[Bug] SECURITY ERROR #4035

Open
WareZTv opened this issue Oct 23, 2024 · 1 comment
Labels
bug Something isn't working

Comments

WareZTv commented Oct 23, 2024

Describe the bug

This error is not related to ROCm, because I get the same error with device = cpu. The audio is still generated, but this error appears every time.

To Reproduce

import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
is_rocm_pytorch = torch.version.hip is not None
print(is_rocm_pytorch)

# List available 🐸TTS models
print(TTS().list_models())

# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
print(tts)

Expected behavior

ChatGPT's analysis of the warnings:

The warnings in the log come from PyTorch and Transformers. Here is an analysis of each, by severity:

  1. FutureWarning about torch.load:

    • Description: Calling torch.load with weights_only=False allows arbitrary code execution during deserialization of objects via pickle. The default will change in a future PyTorch release.
    • Severity: High when loading models from untrusted sources. It is advisable to start using weights_only=True whenever you do not have full control over the loaded files.
  2. Warning about GPT2InferenceModel and GenerationMixin:

    • Description: GPT2InferenceModel does not directly inherit from GenerationMixin, so it will lose generation-related functionality (such as generate()) in future versions of the libraries. The owner of the model code should update the class.
    • Severity: Medium: the code still works today, but future library updates could break generation or cause incompatibilities.

Conclusions:

  • The first warning, about torch.load, deserves immediate attention because it raises a genuine security concern.
  • The second warning needs attention before upgrading Transformers, but it does not require an immediate code change.
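The safer loading pattern referred to above can be sketched as follows. The file name and tensor here are purely illustrative, not from the TTS codebase; `weights_only=True` has been available since PyTorch 1.13:

```python
import torch

# Save a small state dict, then reload it with weights_only=True, which
# restricts unpickling to tensors and plain containers instead of letting
# pickle execute arbitrary code during deserialization.
state = {"weight": torch.zeros(2, 3)}
torch.save(state, "demo_state.pth")  # stand-in for a real checkpoint/speaker file

loaded = torch.load("demo_state.pth", weights_only=True)
print(loaded["weight"].shape)
```

If a checkpoint legitimately contains non-tensor objects, they must be explicitly allowlisted with `torch.serialization.add_safe_globals`, as the warning notes.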

Logs

/home/enzog/Documents/Python Projects/LLM/venv/lib/python3.9/site-packages/TTS/tts/layers/xtts/xtts_manager.py:5: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  self.speakers = torch.load(speaker_file_path)
/home/enzog/Documents/Python Projects/LLM/venv/lib/python3.9/site-packages/TTS/utils/io.py:54: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return torch.load(f, map_location=map_location, **kwargs)
GPT2InferenceModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
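For reference, the change the warning asks for looks roughly like this. This is a sketch only: the class body is elided, and the real `GPT2InferenceModel` lives in Coqui TTS's XTTS code, not here:

```python
from transformers import GPT2Config, GPT2PreTrainedModel, GenerationMixin

# Sketch of the suggested fix: list GenerationMixin after the PreTrainedModel
# base class so that generate() and related methods keep working from
# transformers v4.50 onwards. The class name mirrors the warning; the real
# implementation in TTS has a full body.
class GPT2InferenceModel(GPT2PreTrainedModel, GenerationMixin):
    config_class = GPT2Config
```

The ordering matters: `GenerationMixin` must come after the `PreTrainedModel` base, otherwise Transformers raises an exception, exactly as the warning states.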

Environment

{
    "CUDA": {
        "GPU": [
            "Radeon RX 7900 XTX"
        ],
        "available": true,
        "version": null
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.5.0+rocm6.2",
        "TTS": "0.22.0",
        "numpy": "1.22.0"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "x86_64",
        "python": "3.9.20",
        "version": "#47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024"
    }
}

Additional context

No response

@WareZTv WareZTv added the bug Something isn't working label Oct 23, 2024
eginhard (Contributor) commented:

This has already been fixed in the dev branch in our fork (this repo is not maintained anymore).
