diff --git a/sdk/vision/azure-ai-vision-imageanalysis/CHANGELOG.md b/sdk/vision/azure-ai-vision-imageanalysis/CHANGELOG.md index 47f7827b2d0a..4f8e13f2e0d5 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/CHANGELOG.md +++ b/sdk/vision/azure-ai-vision-imageanalysis/CHANGELOG.md @@ -1,14 +1,10 @@ # Release History -## 1.0.0b3 (Unreleased) +## 1.0.0b3 (2024-07-26) ### Features Added -### Breaking Changes - -### Bugs Fixed - -### Other Changes +Added support for Entra ID authentication. ## 1.0.0b2 (2024-02-09) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/README.md b/sdk/vision/azure-ai-vision-imageanalysis/README.md index e22bd18bb062..f7c53ed25d9c 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/README.md +++ b/sdk/vision/azure-ai-vision-imageanalysis/README.md @@ -30,19 +30,20 @@ Use the Image Analysis client library to: ```bash pip install azure-ai-vision-imageanalysis ``` -### Set environment variables -To authenticate the `ImageAnalysisClient`, you will need the endpoint and key from your Azure Computer Vision resource in the [Azure Portal](https://portal.azure.com). The code snippet below assumes these values are stored in environment variables: +### Create and authenticate the client + +#### Using API key + +To authenticate the `ImageAnalysisClient` using an API key, you will need the endpoint and API key from your Azure Computer Vision resource in the [Azure Portal](https://portal.azure.com). The code snippet below assumes these values are stored in environment variables: * Set the environment variable `VISION_ENDPOINT` to the endpoint URL. It has the form `https://your-resource-name.cognitiveservices.azure.com`, where `your-resource-name` is your unique Azure Computer Vision resource name. * Set the environment variable `VISION_KEY` to the key. The key is a 32-character Hexadecimal number. -Note that the client library does not directly read these environment variable at run time. The endpoint and key must be provided to the constructor of `ImageAnalysisClient` in your code. The code snippet below reads environment variables to promote the practice of not hard-coding secrets in your source code. - -### Create and authenticate the client +>Note: The client library does not directly read these environment variables at run time. The endpoint and key must be provided to the constructor of `ImageAnalysisClient` in your code. The code snippet below reads environment variables to promote the practice of not hard-coding secrets in your source code. -Once you define the environment variables, this Python code will create and authenticate a synchronous `ImageAnalysisClient`: +Once you define the environment variables, this Python code will create and authenticate a synchronous `ImageAnalysisClient` using an API key: @@ -62,7 +63,8 @@ except KeyError: print("Set them before running this sample.") exit() -# Create an Image Analysis client for synchronous operations +# Create an Image Analysis client for synchronous operations, +# using API key authentication client = ImageAnalysisClient( endpoint=endpoint, credential=AzureKeyCredential(key) @@ -71,16 +73,58 @@ client = ImageAnalysisClient( +#### Using Entra ID + +You can also authenticate `ImageAnalysisClient` with [Entra ID](https://learn.microsoft.com/entra/fundamentals/whatis) using the [Azure Identity library](https://learn.microsoft.com/python/api/overview/azure/identity-readme?view=azure-python). 
To use the [DefaultAzureCredential](https://learn.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python) provider shown below, or other credential providers in this library, install the `azure-identity` package: + +```bash +pip install azure-identity +``` + +Assuming you defined the environment variable `VISION_ENDPOINT` mentioned above, this Python code will create and authenticate a synchronous `ImageAnalysisClient` using Entra ID: + + + +```python +import os +from azure.ai.vision.imageanalysis import ImageAnalysisClient +from azure.ai.vision.imageanalysis.models import VisualFeatures +from azure.identity import DefaultAzureCredential + +# Set the value of your computer vision endpoint as environment variable: +try: + endpoint = os.environ["VISION_ENDPOINT"] +except KeyError: + print("Missing environment variable 'VISION_ENDPOINT'.") + print("Set it before running this sample.") + exit() + +# Create an Image Analysis client for synchronous operations, +# using Entra ID authentication +client = ImageAnalysisClient( + endpoint=endpoint, + credential=DefaultAzureCredential(exclude_interactive_browser_credential=False), +) +``` + + + +### Creating an asynchronous client + A synchronous client supports synchronous analysis methods, meaning they will block until the service responds with analysis results. The code snippets below all use synchronous methods because it's easier for a getting-started guide. The SDK offers equivalent asynchronous APIs which are often preferred. To create an asynchronous client, do the following: -* Update the above code to import `ImageAnalysisClient` from the `aio` namespace: - ```python - from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient - ``` * Install the additional package [aiohttp](https://pypi.org/project/aiohttp/): ```bash pip install aiohttp ``` +* Update the above code to import `ImageAnalysisClient` from the `azure.ai.vision.imageanalysis.aio` namespace: + ```python + from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient + ``` +* If you are using Entra ID authentication with `DefaultAzureCredential`, update the above code to import `DefaultAzureCredential` from `azure.identity.aio`: + ```python + from azure.identity.aio import DefaultAzureCredential + ``` ## Key concepts diff --git a/sdk/vision/azure-ai-vision-imageanalysis/assets.json b/sdk/vision/azure-ai-vision-imageanalysis/assets.json index 15e5b840654c..522f8d3785b3 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/assets.json +++ b/sdk/vision/azure-ai-vision-imageanalysis/assets.json @@ -2,5 +2,5 @@ "AssetsRepo": "Azure/azure-sdk-assets", "AssetsRepoPrefixPath": "python", "TagPrefix": "python/vision/azure-ai-vision-imageanalysis", - "Tag": "python/vision/azure-ai-vision-imageanalysis_c2497c4b3c" + "Tag": "python/vision/azure-ai-vision-imageanalysis_d907590ef4" } diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_client.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_client.py index f792d48b1768..95ffe98935a9 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_client.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_client.py @@ -7,7 +7,8 @@ # -------------------------------------------------------------------------- from copy import deepcopy -from typing import Any +from typing import Any, TYPE_CHECKING, Union +from typing_extensions import Self from azure.core import PipelineClient from 
azure.core.credentials import AzureKeyCredential @@ -18,6 +19,10 @@ from ._operations import ImageAnalysisClientOperationsMixin from ._serialization import Deserializer, Serializer +if TYPE_CHECKING: + # pylint: disable=unused-import,ungrouped-imports + from azure.core.credentials import TokenCredential + class ImageAnalysisClient(ImageAnalysisClientOperationsMixin): # pylint: disable=client-accepts-api-version-keyword """ImageAnalysisClient. @@ -25,14 +30,16 @@ class ImageAnalysisClient(ImageAnalysisClientOperationsMixin): # pylint: disabl :param endpoint: Azure AI Computer Vision endpoint (protocol and hostname, for example: https://:code:``.cognitiveservices.azure.com). Required. :type endpoint: str - :param credential: Credential needed for the client to connect to Azure. Required. - :type credential: ~azure.core.credentials.AzureKeyCredential + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials.TokenCredential :keyword api_version: The API version to use for this operation. Default value is "2023-10-01". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str """ - def __init__(self, endpoint: str, credential: AzureKeyCredential, **kwargs: Any) -> None: + def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCredential"], **kwargs: Any) -> None: _endpoint = "{endpoint}/computervision" self._config = ImageAnalysisClientConfiguration(endpoint=endpoint, credential=credential, **kwargs) _policies = kwargs.pop("policies", None) @@ -87,7 +94,7 @@ def send_request(self, request: HttpRequest, *, stream: bool = False, **kwargs: def close(self) -> None: self._client.close() - def __enter__(self) -> "ImageAnalysisClient": + def __enter__(self) -> Self: self._client.__enter__() return self diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_configuration.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_configuration.py index 213e99d21fe6..f743cd8decfa 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_configuration.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_configuration.py @@ -6,13 +6,17 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- -from typing import Any +from typing import Any, TYPE_CHECKING, Union from azure.core.credentials import AzureKeyCredential from azure.core.pipeline import policies from ._version import VERSION +if TYPE_CHECKING: + # pylint: disable=unused-import,ungrouped-imports + from azure.core.credentials import TokenCredential + class ImageAnalysisClientConfiguration: # pylint: disable=too-many-instance-attributes,name-too-long """Configuration for ImageAnalysisClient. @@ -23,14 +27,16 @@ class ImageAnalysisClientConfiguration: # pylint: disable=too-many-instance-att :param endpoint: Azure AI Computer Vision endpoint (protocol and hostname, for example: https://:code:``.cognitiveservices.azure.com). Required. :type endpoint: str - :param credential: Credential needed for the client to connect to Azure. Required. - :type credential: ~azure.core.credentials.AzureKeyCredential + :param credential: Credential used to authenticate requests to the service. 
Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials.TokenCredential :keyword api_version: The API version to use for this operation. Default value is "2023-10-01". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str """ - def __init__(self, endpoint: str, credential: AzureKeyCredential, **kwargs: Any) -> None: + def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCredential"], **kwargs: Any) -> None: api_version: str = kwargs.pop("api_version", "2023-10-01") if endpoint is None: @@ -41,10 +47,18 @@ def __init__(self, endpoint: str, credential: AzureKeyCredential, **kwargs: Any) self.endpoint = endpoint self.credential = credential self.api_version = api_version + self.credential_scopes = kwargs.pop("credential_scopes", ["https://cognitiveservices.azure.com/.default"]) kwargs.setdefault("sdk_moniker", "ai-vision-imageanalysis/{}".format(VERSION)) self.polling_interval = kwargs.get("polling_interval", 30) self._configure(**kwargs) + def _infer_policy(self, **kwargs): + if isinstance(self.credential, AzureKeyCredential): + return policies.AzureKeyCredentialPolicy(self.credential, "Ocp-Apim-Subscription-Key", **kwargs) + if hasattr(self.credential, "get_token"): + return policies.BearerTokenCredentialPolicy(self.credential, *self.credential_scopes, **kwargs) + raise TypeError(f"Unsupported credential: {self.credential}") + def _configure(self, **kwargs: Any) -> None: self.user_agent_policy = kwargs.get("user_agent_policy") or policies.UserAgentPolicy(**kwargs) self.headers_policy = kwargs.get("headers_policy") or policies.HeadersPolicy(**kwargs) @@ -56,6 +70,4 @@ def _configure(self, **kwargs: Any) -> None: self.retry_policy = kwargs.get("retry_policy") or policies.RetryPolicy(**kwargs) self.authentication_policy = kwargs.get("authentication_policy") if self.credential and not self.authentication_policy: - self.authentication_policy = policies.AzureKeyCredentialPolicy( - self.credential, "Ocp-Apim-Subscription-Key", **kwargs - ) + self.authentication_policy = self._infer_policy(**kwargs) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_model_base.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_model_base.py index bd51cdeb4465..43fd8c7e9b1b 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_model_base.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_model_base.py @@ -5,8 +5,8 @@ # license information. 
# -------------------------------------------------------------------------- # pylint: disable=protected-access, arguments-differ, signature-differs, broad-except -# pyright: reportGeneralTypeIssues=false +import copy import calendar import decimal import functools @@ -14,11 +14,12 @@ import logging import base64 import re -import copy import typing +import enum import email.utils from datetime import datetime, date, time, timedelta, timezone from json import JSONEncoder +from typing_extensions import Self import isodate from azure.core.exceptions import DeserializationError from azure.core import CaseInsensitiveEnumMeta @@ -35,6 +36,7 @@ __all__ = ["SdkJSONEncoder", "Model", "rest_field", "rest_discriminator"] TZ_UTC = timezone.utc +_T = typing.TypeVar("_T") def _timedelta_as_isostr(td: timedelta) -> str: @@ -242,7 +244,7 @@ def _deserialize_date(attr: typing.Union[str, date]) -> date: # This must NOT use defaultmonth/defaultday. Using None ensure this raises an exception. if isinstance(attr, date): return attr - return isodate.parse_date(attr, defaultmonth=None, defaultday=None) + return isodate.parse_date(attr, defaultmonth=None, defaultday=None) # type: ignore def _deserialize_time(attr: typing.Union[str, time]) -> time: @@ -337,7 +339,7 @@ def _get_model(module_name: str, model_name: str): class _MyMutableMapping(MutableMapping[str, typing.Any]): # pylint: disable=unsubscriptable-object def __init__(self, data: typing.Dict[str, typing.Any]) -> None: - self._data = copy.deepcopy(data) + self._data = data def __contains__(self, key: typing.Any) -> bool: return key in self._data @@ -375,13 +377,14 @@ def get(self, key: str, default: typing.Any = None) -> typing.Any: except KeyError: return default - @typing.overload # type: ignore - def pop(self, key: str) -> typing.Any: # pylint: disable=no-member - ... + @typing.overload + def pop(self, key: str) -> typing.Any: ... + + @typing.overload + def pop(self, key: str, default: _T) -> _T: ... @typing.overload - def pop(self, key: str, default: typing.Any) -> typing.Any: - ... + def pop(self, key: str, default: typing.Any) -> typing.Any: ... def pop(self, key: str, default: typing.Any = _UNSET) -> typing.Any: if default is _UNSET: @@ -397,13 +400,11 @@ def clear(self) -> None: def update(self, *args: typing.Any, **kwargs: typing.Any) -> None: self._data.update(*args, **kwargs) - @typing.overload # type: ignore - def setdefault(self, key: str) -> typing.Any: - ... + @typing.overload + def setdefault(self, key: str, default: None = None) -> None: ... @typing.overload - def setdefault(self, key: str, default: typing.Any) -> typing.Any: - ... + def setdefault(self, key: str, default: typing.Any) -> typing.Any: ... 
def setdefault(self, key: str, default: typing.Any = _UNSET) -> typing.Any: if default is _UNSET: @@ -438,6 +439,8 @@ def _serialize(o, format: typing.Optional[str] = None): # pylint: disable=too-m return _serialize_bytes(o, format) if isinstance(o, decimal.Decimal): return float(o) + if isinstance(o, enum.Enum): + return o.value try: # First try datetime.datetime return _serialize_datetime(o, format) @@ -504,7 +507,7 @@ def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None: def copy(self) -> "Model": return Model(self.__dict__) - def __new__(cls, *args: typing.Any, **kwargs: typing.Any) -> "Model": # pylint: disable=unused-argument + def __new__(cls, *args: typing.Any, **kwargs: typing.Any) -> Self: # pylint: disable=unused-argument # we know the last three classes in mro are going to be 'Model', 'dict', and 'object' mros = cls.__mro__[:-3][::-1] # ignore model, dict, and object parents, and reverse the mro order attr_to_rest_field: typing.Dict[str, _RestField] = { # map attribute name to rest_field property @@ -546,7 +549,7 @@ def _deserialize(cls, data, exist_discriminators): return cls(data) discriminator = cls._get_discriminator(exist_discriminators) exist_discriminators.append(discriminator) - mapped_cls = cls.__mapping__.get(data.get(discriminator), cls) # pylint: disable=no-member + mapped_cls = cls.__mapping__.get(data.get(discriminator), cls) # pyright: ignore # pylint: disable=no-member if mapped_cls == cls: return cls(data) return mapped_cls._deserialize(data, exist_discriminators) # pylint: disable=protected-access @@ -563,7 +566,7 @@ def as_dict(self, *, exclude_readonly: bool = False) -> typing.Dict[str, typing. if exclude_readonly: readonly_props = [p._rest_name for p in self._attr_to_rest_field.values() if _is_readonly(p)] for k, v in self.items(): - if exclude_readonly and k in readonly_props: # pyright: ignore[reportUnboundVariable] + if exclude_readonly and k in readonly_props: # pyright: ignore continue is_multipart_file_input = False try: @@ -586,6 +589,64 @@ def _as_dict_value(v: typing.Any, exclude_readonly: bool = False) -> typing.Any: return v.as_dict(exclude_readonly=exclude_readonly) if hasattr(v, "as_dict") else v +def _deserialize_model(model_deserializer: typing.Optional[typing.Callable], obj): + if _is_model(obj): + return obj + return _deserialize(model_deserializer, obj) + + +def _deserialize_with_optional(if_obj_deserializer: typing.Optional[typing.Callable], obj): + if obj is None: + return obj + return _deserialize_with_callable(if_obj_deserializer, obj) + + +def _deserialize_with_union(deserializers, obj): + for deserializer in deserializers: + try: + return _deserialize(deserializer, obj) + except DeserializationError: + pass + raise DeserializationError() + + +def _deserialize_dict( + value_deserializer: typing.Optional[typing.Callable], + module: typing.Optional[str], + obj: typing.Dict[typing.Any, typing.Any], +): + if obj is None: + return obj + return {k: _deserialize(value_deserializer, v, module) for k, v in obj.items()} + + +def _deserialize_multiple_sequence( + entry_deserializers: typing.List[typing.Optional[typing.Callable]], + module: typing.Optional[str], + obj, +): + if obj is None: + return obj + return type(obj)(_deserialize(deserializer, entry, module) for entry, deserializer in zip(obj, entry_deserializers)) + + +def _deserialize_sequence( + deserializer: typing.Optional[typing.Callable], + module: typing.Optional[str], + obj, +): + if obj is None: + return obj + return type(obj)(_deserialize(deserializer, entry, 
module) for entry in obj) + + +def _sorted_annotations(types: typing.List[typing.Any]) -> typing.List[typing.Any]: + return sorted( + types, + key=lambda x: hasattr(x, "__name__") and x.__name__.lower() in ("str", "float", "int", "bool"), + ) + + def _get_deserialize_callable_from_annotation( # pylint: disable=R0911, R0915, R0912 annotation: typing.Any, module: typing.Optional[str], @@ -613,99 +674,70 @@ def _get_deserialize_callable_from_annotation( # pylint: disable=R0911, R0915, if rf: rf._is_model = True - def _deserialize_model(model_deserializer: typing.Optional[typing.Callable], obj): - if _is_model(obj): - return obj - return _deserialize(model_deserializer, obj) - - return functools.partial(_deserialize_model, annotation) + return functools.partial(_deserialize_model, annotation) # pyright: ignore except Exception: pass # is it a literal? try: - if annotation.__origin__ is typing.Literal: + if annotation.__origin__ is typing.Literal: # pyright: ignore return None except AttributeError: pass # is it optional? try: - if any(a for a in annotation.__args__ if a == type(None)): - if_obj_deserializer = _get_deserialize_callable_from_annotation( - next(a for a in annotation.__args__ if a != type(None)), module, rf - ) - - def _deserialize_with_optional(if_obj_deserializer: typing.Optional[typing.Callable], obj): - if obj is None: - return obj - return _deserialize_with_callable(if_obj_deserializer, obj) - - return functools.partial(_deserialize_with_optional, if_obj_deserializer) + if any(a for a in annotation.__args__ if a == type(None)): # pyright: ignore + if len(annotation.__args__) <= 2: # pyright: ignore + if_obj_deserializer = _get_deserialize_callable_from_annotation( + next(a for a in annotation.__args__ if a != type(None)), module, rf # pyright: ignore + ) + + return functools.partial(_deserialize_with_optional, if_obj_deserializer) + # the type is Optional[Union[...]], we need to remove the None type from the Union + annotation_copy = copy.copy(annotation) + annotation_copy.__args__ = [a for a in annotation_copy.__args__ if a != type(None)] # pyright: ignore + return _get_deserialize_callable_from_annotation(annotation_copy, module, rf) except AttributeError: pass + # is it union? 
if getattr(annotation, "__origin__", None) is typing.Union: - deserializers = [_get_deserialize_callable_from_annotation(arg, module, rf) for arg in annotation.__args__] - - def _deserialize_with_union(deserializers, obj): - for deserializer in deserializers: - try: - return _deserialize(deserializer, obj) - except DeserializationError: - pass - raise DeserializationError() + # initial ordering is we make `string` the last deserialization option, because it is often them most generic + deserializers = [ + _get_deserialize_callable_from_annotation(arg, module, rf) + for arg in _sorted_annotations(annotation.__args__) # pyright: ignore + ] return functools.partial(_deserialize_with_union, deserializers) try: - if annotation._name == "Dict": - value_deserializer = _get_deserialize_callable_from_annotation(annotation.__args__[1], module, rf) - - def _deserialize_dict( - value_deserializer: typing.Optional[typing.Callable], - obj: typing.Dict[typing.Any, typing.Any], - ): - if obj is None: - return obj - return {k: _deserialize(value_deserializer, v, module) for k, v in obj.items()} + if annotation._name == "Dict": # pyright: ignore + value_deserializer = _get_deserialize_callable_from_annotation( + annotation.__args__[1], module, rf # pyright: ignore + ) return functools.partial( _deserialize_dict, value_deserializer, + module, ) except (AttributeError, IndexError): pass try: - if annotation._name in ["List", "Set", "Tuple", "Sequence"]: - if len(annotation.__args__) > 1: - - def _deserialize_multiple_sequence( - entry_deserializers: typing.List[typing.Optional[typing.Callable]], - obj, - ): - if obj is None: - return obj - return type(obj)( - _deserialize(deserializer, entry, module) - for entry, deserializer in zip(obj, entry_deserializers) - ) + if annotation._name in ["List", "Set", "Tuple", "Sequence"]: # pyright: ignore + if len(annotation.__args__) > 1: # pyright: ignore entry_deserializers = [ - _get_deserialize_callable_from_annotation(dt, module, rf) for dt in annotation.__args__ + _get_deserialize_callable_from_annotation(dt, module, rf) + for dt in annotation.__args__ # pyright: ignore ] - return functools.partial(_deserialize_multiple_sequence, entry_deserializers) - deserializer = _get_deserialize_callable_from_annotation(annotation.__args__[0], module, rf) - - def _deserialize_sequence( - deserializer: typing.Optional[typing.Callable], - obj, - ): - if obj is None: - return obj - return type(obj)(_deserialize(deserializer, entry, module) for entry in obj) - - return functools.partial(_deserialize_sequence, deserializer) + return functools.partial(_deserialize_multiple_sequence, entry_deserializers, module) + deserializer = _get_deserialize_callable_from_annotation( + annotation.__args__[0], module, rf # pyright: ignore + ) + + return functools.partial(_deserialize_sequence, deserializer, module) except (TypeError, IndexError, AttributeError, SyntaxError): pass @@ -787,6 +819,10 @@ def __init__( self._format = format self._is_multipart_file_input = is_multipart_file_input + @property + def _class_type(self) -> typing.Any: + return getattr(self._type, "args", [None])[0] + @property def _rest_name(self) -> str: if self._rest_name_input is None: @@ -847,5 +883,6 @@ def rest_discriminator( *, name: typing.Optional[str] = None, type: typing.Optional[typing.Callable] = None, # pylint: disable=redefined-builtin + visibility: typing.Optional[typing.List[str]] = None, ) -> typing.Any: - return _RestField(name=name, type=type, is_discriminator=True) + return _RestField(name=name, 
type=type, is_discriminator=True, visibility=visibility) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_operations/_operations.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_operations/_operations.py index b004fca9f53f..50aa5ad44859 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_operations/_operations.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_operations/_operations.py @@ -9,7 +9,7 @@ from io import IOBase import json import sys -from typing import Any, Callable, Dict, IO, List, Optional, TypeVar, Union, overload +from typing import Any, Callable, Dict, IO, List, Optional, Type, TypeVar, Union, overload from azure.core.exceptions import ( ClientAuthenticationError, @@ -123,10 +123,11 @@ def build_image_analysis_analyze_from_url_request( # pylint: disable=name-too-l class ImageAnalysisClientOperationsMixin(ImageAnalysisClientMixinABC): + @distributed_trace def _analyze_from_image_data( self, - image_content: bytes, + image_data: bytes, *, visual_features: List[Union[str, _models.VisualFeatures]], language: Optional[str] = None, @@ -135,11 +136,10 @@ def _analyze_from_image_data( model_version: Optional[str] = None, **kwargs: Any ) -> _models.ImageAnalysisResult: - # pylint: disable=line-too-long """Performs a single Image Analysis operation. - :param image_content: The image to be analyzed. Required. - :type image_content: bytes + :param image_data: The image to be analyzed. Required. + :type image_data: bytes :keyword visual_features: A list of visual features to analyze. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. @@ -180,33 +180,25 @@ def _analyze_from_image_data( # response body for status code(s): 200 response == { "metadata": { - "height": 0, # The height of the image in pixels. Required. - "width": 0 # The width of the image in pixels. Required. + "height": 0, + "width": 0 }, - "modelVersion": "str", # The cloud AI model used for the analysis. Required. + "modelVersion": "str", "captionResult": { - "confidence": 0.0, # A score, in the range of 0 to 1 (inclusive), - representing the confidence that this description is accurate. Higher values - indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" }, "denseCaptionsResult": { "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this description is - accurate. Higher values indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" } ] }, @@ -214,23 +206,15 @@ def _analyze_from_image_data( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. 
+ "h": 0, + "w": 0, + "x": 0, + "y": 0 }, "tags": [ { - "confidence": 0.0, # A score, in the - range of 0 to 1 (inclusive), representing the confidence that - this entity was observed. Higher values indicating higher - confidence. Required. - "name": "str" # Name of the entity. - Required. + "confidence": 0.0, + "name": "str" } ] } @@ -240,18 +224,12 @@ def _analyze_from_image_data( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0 # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this detection was - accurate. Higher values indicating higher confidence. Required. + "confidence": 0.0 } ] }, @@ -262,39 +240,23 @@ def _analyze_from_image_data( { "boundingPolygon": [ { - "x": 0, # The - horizontal x-coordinate of this point, in pixels. - Zero values corresponds to the left-most pixels in - the image. Required. - "y": 0 # The - vertical y-coordinate of this point, in pixels. Zero - values corresponds to the top-most pixels in the - image. Required. + "x": 0, + "y": 0 } ], - "text": "str", # Text content of the - detected text line. Required. + "text": "str", "words": [ { "boundingPolygon": [ { "x": - 0, # The horizontal x-coordinate of this - point, in pixels. Zero values corresponds to - the left-most pixels in the image. Required. + 0, "y": - 0 # The vertical y-coordinate of this point, - in pixels. Zero values corresponds to the - top-most pixels in the image. Required. + 0 } ], - "confidence": 0.0, # - The level of confidence that the word was detected. - Confidence scores span the range of 0.0 to 1.0 - (inclusive), with higher values indicating a higher - confidence of detection. Required. - "text": "str" # Text - content of the word. Required. + "confidence": 0.0, + "text": "str" } ] } @@ -305,21 +267,12 @@ def _analyze_from_image_data( "smartCropsResult": { "values": [ { - "aspectRatio": 0.0, # The aspect ratio of the crop - region. Aspect ratio is calculated by dividing the width of the - region in pixels by its height in pixels. The aspect ratio will be in - the range 0.75 to 1.8 (inclusive) if provided by the developer during - the analyze call. Otherwise, it will be in the range 0.5 to 2.0 - (inclusive). Required. + "aspectRatio": 0.0, "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 } } ] @@ -327,16 +280,14 @@ def _analyze_from_image_data( "tagsResult": { "values": [ { - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this entity was - observed. Higher values indicating higher confidence. Required. - "name": "str" # Name of the entity. Required. 
+ "confidence": 0.0, + "name": "str" } ] } } """ - error_map = { + error_map: MutableMapping[int, Type[HttpResponseError]] = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, @@ -350,7 +301,7 @@ def _analyze_from_image_data( content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) cls: ClsType[_models.ImageAnalysisResult] = kwargs.pop("cls", None) - _content = image_content + _content = image_data _request = build_image_analysis_analyze_from_image_data_request( visual_features=visual_features, @@ -395,7 +346,7 @@ def _analyze_from_image_data( @overload def _analyze_from_url( # pylint: disable=protected-access self, - image_content: _models._models.ImageUrl, + image_url: _models._models.ImageUrl, *, visual_features: List[Union[str, _models.VisualFeatures]], content_type: str = "application/json", @@ -404,13 +355,11 @@ def _analyze_from_url( # pylint: disable=protected-access smart_crops_aspect_ratios: Optional[List[float]] = None, model_version: Optional[str] = None, **kwargs: Any - ) -> _models.ImageAnalysisResult: - ... - + ) -> _models.ImageAnalysisResult: ... @overload def _analyze_from_url( self, - image_content: JSON, + image_url: JSON, *, visual_features: List[Union[str, _models.VisualFeatures]], content_type: str = "application/json", @@ -419,13 +368,11 @@ def _analyze_from_url( smart_crops_aspect_ratios: Optional[List[float]] = None, model_version: Optional[str] = None, **kwargs: Any - ) -> _models.ImageAnalysisResult: - ... - + ) -> _models.ImageAnalysisResult: ... @overload def _analyze_from_url( self, - image_content: IO[bytes], + image_url: IO[bytes], *, visual_features: List[Union[str, _models.VisualFeatures]], content_type: str = "application/json", @@ -434,13 +381,12 @@ def _analyze_from_url( smart_crops_aspect_ratios: Optional[List[float]] = None, model_version: Optional[str] = None, **kwargs: Any - ) -> _models.ImageAnalysisResult: - ... + ) -> _models.ImageAnalysisResult: ... @distributed_trace def _analyze_from_url( self, - image_content: Union[_models._models.ImageUrl, JSON, IO[bytes]], + image_url: Union[_models._models.ImageUrl, JSON, IO[bytes]], *, visual_features: List[Union[str, _models.VisualFeatures]], language: Optional[str] = None, @@ -449,12 +395,11 @@ def _analyze_from_url( model_version: Optional[str] = None, **kwargs: Any ) -> _models.ImageAnalysisResult: - # pylint: disable=line-too-long """Performs a single Image Analysis operation. - :param image_content: The image to be analyzed. Is one of the following types: ImageUrl, JSON, + :param image_url: The image to be analyzed. Is one of the following types: ImageUrl, JSON, IO[bytes] Required. - :type image_content: ~azure.ai.vision.imageanalysis.models.ImageUrl or JSON or IO[bytes] + :type image_url: ~azure.ai.vision.imageanalysis.models._models.ImageUrl or JSON or IO[bytes] :keyword visual_features: A list of visual features to analyze. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. @@ -493,40 +438,32 @@ def _analyze_from_url( .. code-block:: python # JSON input template you can fill out and use as your body input. - image_content = { - "url": "str" # Publicly reachable URL of an image to analyze. Required. + image_url = { + "url": "str" } # response body for status code(s): 200 response == { "metadata": { - "height": 0, # The height of the image in pixels. Required. - "width": 0 # The width of the image in pixels. Required. 
+ "height": 0, + "width": 0 }, - "modelVersion": "str", # The cloud AI model used for the analysis. Required. + "modelVersion": "str", "captionResult": { - "confidence": 0.0, # A score, in the range of 0 to 1 (inclusive), - representing the confidence that this description is accurate. Higher values - indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" }, "denseCaptionsResult": { "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this description is - accurate. Higher values indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" } ] }, @@ -534,23 +471,15 @@ def _analyze_from_url( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, "tags": [ { - "confidence": 0.0, # A score, in the - range of 0 to 1 (inclusive), representing the confidence that - this entity was observed. Higher values indicating higher - confidence. Required. - "name": "str" # Name of the entity. - Required. + "confidence": 0.0, + "name": "str" } ] } @@ -560,18 +489,12 @@ def _analyze_from_url( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0 # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this detection was - accurate. Higher values indicating higher confidence. Required. + "confidence": 0.0 } ] }, @@ -582,39 +505,23 @@ def _analyze_from_url( { "boundingPolygon": [ { - "x": 0, # The - horizontal x-coordinate of this point, in pixels. - Zero values corresponds to the left-most pixels in - the image. Required. - "y": 0 # The - vertical y-coordinate of this point, in pixels. Zero - values corresponds to the top-most pixels in the - image. Required. + "x": 0, + "y": 0 } ], - "text": "str", # Text content of the - detected text line. Required. + "text": "str", "words": [ { "boundingPolygon": [ { "x": - 0, # The horizontal x-coordinate of this - point, in pixels. Zero values corresponds to - the left-most pixels in the image. Required. + 0, "y": - 0 # The vertical y-coordinate of this point, - in pixels. Zero values corresponds to the - top-most pixels in the image. Required. + 0 } ], - "confidence": 0.0, # - The level of confidence that the word was detected. - Confidence scores span the range of 0.0 to 1.0 - (inclusive), with higher values indicating a higher - confidence of detection. Required. - "text": "str" # Text - content of the word. Required. 
+ "confidence": 0.0, + "text": "str" } ] } @@ -625,21 +532,12 @@ def _analyze_from_url( "smartCropsResult": { "values": [ { - "aspectRatio": 0.0, # The aspect ratio of the crop - region. Aspect ratio is calculated by dividing the width of the - region in pixels by its height in pixels. The aspect ratio will be in - the range 0.75 to 1.8 (inclusive) if provided by the developer during - the analyze call. Otherwise, it will be in the range 0.5 to 2.0 - (inclusive). Required. + "aspectRatio": 0.0, "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 } } ] @@ -647,16 +545,14 @@ def _analyze_from_url( "tagsResult": { "values": [ { - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this entity was - observed. Higher values indicating higher confidence. Required. - "name": "str" # Name of the entity. Required. + "confidence": 0.0, + "name": "str" } ] } } """ - error_map = { + error_map: MutableMapping[int, Type[HttpResponseError]] = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, @@ -672,10 +568,10 @@ def _analyze_from_url( content_type = content_type or "application/json" _content = None - if isinstance(image_content, (IOBase, bytes)): - _content = image_content + if isinstance(image_url, (IOBase, bytes)): + _content = image_url else: - _content = json.dumps(image_content, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + _content = json.dumps(image_url, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore _request = build_image_analysis_analyze_from_url_request( visual_features=visual_features, diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_patch.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_patch.py index 4a4ed79b677c..e89f5197c4e8 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_patch.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_patch.py @@ -20,8 +20,10 @@ class ImageAnalysisClient(ImageAnalysisClientGenerated): :param endpoint: Azure AI Computer Vision endpoint (protocol and hostname, for example: https://:code:``.cognitiveservices.azure.com). Required. :type endpoint: str - :param credential: Credential needed for the client to connect to Azure. Required. - :type credential: ~azure.core.credentials.AzureKeyCredential + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials.TokenCredential :keyword api_version: The API version to use for this operation. Default value is "2023-10-01". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str @@ -71,14 +73,14 @@ def analyze_from_url( :paramtype model_version: str :return: ImageAnalysisResult. 
The ImageAnalysisResult is compatible with MutableMapping :rtype: ~azure.ai.vision.imageanalysis.models.ImageAnalysisResult - :raises: ~azure.core.exceptions.HttpResponseError + :raises ~azure.core.exceptions.HttpResponseError: """ visual_features_impl: List[Union[str, _models.VisualFeatures]] = list(visual_features) return ImageAnalysisClientOperationsMixin._analyze_from_url( # pylint: disable=protected-access self, - image_content=_models._models.ImageUrl(url=image_url), # pylint: disable=protected-access + image_url=_models._models.ImageUrl(url=image_url), # pylint: disable=protected-access visual_features=visual_features_impl, language=language, gender_neutral_caption=gender_neutral_caption, @@ -87,7 +89,6 @@ def analyze_from_url( **kwargs ) - @distributed_trace def analyze( self, @@ -132,14 +133,14 @@ def analyze( :paramtype model_version: str :return: ImageAnalysisResult. The ImageAnalysisResult is compatible with MutableMapping :rtype: ~azure.ai.vision.imageanalysis.models.ImageAnalysisResult - :raises: ~azure.core.exceptions.HttpResponseError + :raises ~azure.core.exceptions.HttpResponseError: """ visual_features_impl: List[Union[str, _models.VisualFeatures]] = list(visual_features) return ImageAnalysisClientOperationsMixin._analyze_from_image_data( # pylint: disable=protected-access self, - image_content=image_data, + image_data=image_data, visual_features=visual_features_impl, language=language, gender_neutral_caption=gender_neutral_caption, diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_serialization.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_serialization.py index baa661cb82d2..8139854b97bb 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_serialization.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/_serialization.py @@ -144,6 +144,8 @@ def _json_attemp(data): # context otherwise. 
_LOGGER.critical("Wasn't XML not JSON, failing") raise DeserializationError("XML is invalid") from err + elif content_type.startswith("text/"): + return data_as_str raise DeserializationError("Cannot deserialize content-type: {}".format(content_type)) @classmethod @@ -170,13 +172,6 @@ def deserialize_from_http_generics(cls, body_bytes: Optional[Union[AnyStr, IO]], return None -try: - basestring # type: ignore - unicode_str = unicode # type: ignore -except NameError: - basestring = str - unicode_str = str - _LOGGER = logging.getLogger(__name__) try: @@ -545,7 +540,7 @@ class Serializer(object): "multiple": lambda x, y: x % y != 0, } - def __init__(self, classes: Optional[Mapping[str, Type[ModelType]]] = None): + def __init__(self, classes: Optional[Mapping[str, type]] = None): self.serialize_type = { "iso-8601": Serializer.serialize_iso, "rfc-1123": Serializer.serialize_rfc, @@ -561,7 +556,7 @@ def __init__(self, classes: Optional[Mapping[str, Type[ModelType]]] = None): "[]": self.serialize_iter, "{}": self.serialize_dict, } - self.dependencies: Dict[str, Type[ModelType]] = dict(classes) if classes else {} + self.dependencies: Dict[str, type] = dict(classes) if classes else {} self.key_transformer = full_restapi_key_transformer self.client_side_validation = True @@ -649,7 +644,7 @@ def _serialize(self, target_obj, data_type=None, **kwargs): else: # That's a basic type # Integrate namespace if necessary local_node = _create_xml_node(xml_name, xml_prefix, xml_ns) - local_node.text = unicode_str(new_attr) + local_node.text = str(new_attr) serialized.append(local_node) # type: ignore else: # JSON for k in reversed(keys): # type: ignore @@ -994,7 +989,7 @@ def serialize_object(self, attr, **kwargs): return self.serialize_basic(attr, self.basic_types[obj_type], **kwargs) if obj_type is _long_type: return self.serialize_long(attr) - if obj_type is unicode_str: + if obj_type is str: return self.serialize_unicode(attr) if obj_type is datetime.datetime: return self.serialize_iso(attr) @@ -1370,7 +1365,7 @@ class Deserializer(object): valid_date = re.compile(r"\d{4}[-]\d{2}[-]\d{2}T\d{2}:\d{2}:\d{2}" r"\.?\d*Z?[-+]?[\d{2}]?:?[\d{2}]?") - def __init__(self, classes: Optional[Mapping[str, Type[ModelType]]] = None): + def __init__(self, classes: Optional[Mapping[str, type]] = None): self.deserialize_type = { "iso-8601": Deserializer.deserialize_iso, "rfc-1123": Deserializer.deserialize_rfc, @@ -1390,7 +1385,7 @@ def __init__(self, classes: Optional[Mapping[str, Type[ModelType]]] = None): "duration": (isodate.Duration, datetime.timedelta), "iso-8601": (datetime.datetime), } - self.dependencies: Dict[str, Type[ModelType]] = dict(classes) if classes else {} + self.dependencies: Dict[str, type] = dict(classes) if classes else {} self.key_extractors = [rest_key_extractor, xml_key_extractor] # Additional properties only works if the "rest_key_extractor" is used to # extract the keys. 
Making it to work whatever the key extractor is too much @@ -1443,12 +1438,12 @@ def _deserialize(self, target_obj, data): response, class_name = self._classify_target(target_obj, data) - if isinstance(response, basestring): + if isinstance(response, str): return self.deserialize_data(data, response) elif isinstance(response, type) and issubclass(response, Enum): return self.deserialize_enum(data, response) - if data is None: + if data is None or data is CoreNull: return data try: attributes = response._attribute_map # type: ignore @@ -1514,14 +1509,14 @@ def _classify_target(self, target, data): if target is None: return None, None - if isinstance(target, basestring): + if isinstance(target, str): try: target = self.dependencies[target] except KeyError: return target, target try: - target = target._classify(data, self.dependencies) + target = target._classify(data, self.dependencies) # type: ignore except AttributeError: pass # Target is not a Model, no classify return target, target.__class__.__name__ # type: ignore @@ -1577,7 +1572,7 @@ def _unpack_content(raw_data, content_type=None): if hasattr(raw_data, "_content_consumed"): return RawDeserializer.deserialize_from_http_generics(raw_data.text, raw_data.headers) - if isinstance(raw_data, (basestring, bytes)) or hasattr(raw_data, "read"): + if isinstance(raw_data, (str, bytes)) or hasattr(raw_data, "read"): return RawDeserializer.deserialize_from_text(raw_data, content_type) # type: ignore return raw_data @@ -1699,7 +1694,7 @@ def deserialize_object(self, attr, **kwargs): if isinstance(attr, ET.Element): # Do no recurse on XML, just return the tree as-is return attr - if isinstance(attr, basestring): + if isinstance(attr, str): return self.deserialize_basic(attr, "str") obj_type = type(attr) if obj_type in self.basic_types: @@ -1756,7 +1751,7 @@ def deserialize_basic(self, attr, data_type): if data_type == "bool": if attr in [True, False, 1, 0]: return bool(attr) - elif isinstance(attr, basestring): + elif isinstance(attr, str): if attr.lower() in ["true", "1"]: return True elif attr.lower() in ["false", "0"]: diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_client.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_client.py index 7362145ad1ab..8e6eb780fc7f 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_client.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_client.py @@ -7,7 +7,8 @@ # -------------------------------------------------------------------------- from copy import deepcopy -from typing import Any, Awaitable +from typing import Any, Awaitable, TYPE_CHECKING, Union +from typing_extensions import Self from azure.core import AsyncPipelineClient from azure.core.credentials import AzureKeyCredential @@ -18,6 +19,10 @@ from ._configuration import ImageAnalysisClientConfiguration from ._operations import ImageAnalysisClientOperationsMixin +if TYPE_CHECKING: + # pylint: disable=unused-import,ungrouped-imports + from azure.core.credentials_async import AsyncTokenCredential + class ImageAnalysisClient(ImageAnalysisClientOperationsMixin): # pylint: disable=client-accepts-api-version-keyword """ImageAnalysisClient. @@ -25,14 +30,18 @@ class ImageAnalysisClient(ImageAnalysisClientOperationsMixin): # pylint: disabl :param endpoint: Azure AI Computer Vision endpoint (protocol and hostname, for example: https://:code:``.cognitiveservices.azure.com). Required. 
:type endpoint: str - :param credential: Credential needed for the client to connect to Azure. Required. - :type credential: ~azure.core.credentials.AzureKeyCredential + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials_async.AsyncTokenCredential :keyword api_version: The API version to use for this operation. Default value is "2023-10-01". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str """ - def __init__(self, endpoint: str, credential: AzureKeyCredential, **kwargs: Any) -> None: + def __init__( + self, endpoint: str, credential: Union[AzureKeyCredential, "AsyncTokenCredential"], **kwargs: Any + ) -> None: _endpoint = "{endpoint}/computervision" self._config = ImageAnalysisClientConfiguration(endpoint=endpoint, credential=credential, **kwargs) _policies = kwargs.pop("policies", None) @@ -89,7 +98,7 @@ def send_request( async def close(self) -> None: await self._client.close() - async def __aenter__(self) -> "ImageAnalysisClient": + async def __aenter__(self) -> Self: await self._client.__aenter__() return self diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_configuration.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_configuration.py index 3df051fdae78..94e384b56cb5 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_configuration.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_configuration.py @@ -6,13 +6,17 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- -from typing import Any +from typing import Any, TYPE_CHECKING, Union from azure.core.credentials import AzureKeyCredential from azure.core.pipeline import policies from .._version import VERSION +if TYPE_CHECKING: + # pylint: disable=unused-import,ungrouped-imports + from azure.core.credentials_async import AsyncTokenCredential + class ImageAnalysisClientConfiguration: # pylint: disable=too-many-instance-attributes,name-too-long """Configuration for ImageAnalysisClient. @@ -23,14 +27,18 @@ class ImageAnalysisClientConfiguration: # pylint: disable=too-many-instance-att :param endpoint: Azure AI Computer Vision endpoint (protocol and hostname, for example: https://:code:``.cognitiveservices.azure.com). Required. :type endpoint: str - :param credential: Credential needed for the client to connect to Azure. Required. - :type credential: ~azure.core.credentials.AzureKeyCredential + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials_async.AsyncTokenCredential :keyword api_version: The API version to use for this operation. Default value is "2023-10-01". Note that overriding this default value may result in unsupported behavior. 
:paramtype api_version: str """ - def __init__(self, endpoint: str, credential: AzureKeyCredential, **kwargs: Any) -> None: + def __init__( + self, endpoint: str, credential: Union[AzureKeyCredential, "AsyncTokenCredential"], **kwargs: Any + ) -> None: api_version: str = kwargs.pop("api_version", "2023-10-01") if endpoint is None: @@ -41,10 +49,18 @@ def __init__(self, endpoint: str, credential: AzureKeyCredential, **kwargs: Any) self.endpoint = endpoint self.credential = credential self.api_version = api_version + self.credential_scopes = kwargs.pop("credential_scopes", ["https://cognitiveservices.azure.com/.default"]) kwargs.setdefault("sdk_moniker", "ai-vision-imageanalysis/{}".format(VERSION)) self.polling_interval = kwargs.get("polling_interval", 30) self._configure(**kwargs) + def _infer_policy(self, **kwargs): + if isinstance(self.credential, AzureKeyCredential): + return policies.AzureKeyCredentialPolicy(self.credential, "Ocp-Apim-Subscription-Key", **kwargs) + if hasattr(self.credential, "get_token"): + return policies.AsyncBearerTokenCredentialPolicy(self.credential, *self.credential_scopes, **kwargs) + raise TypeError(f"Unsupported credential: {self.credential}") + def _configure(self, **kwargs: Any) -> None: self.user_agent_policy = kwargs.get("user_agent_policy") or policies.UserAgentPolicy(**kwargs) self.headers_policy = kwargs.get("headers_policy") or policies.HeadersPolicy(**kwargs) @@ -56,6 +72,4 @@ def _configure(self, **kwargs: Any) -> None: self.retry_policy = kwargs.get("retry_policy") or policies.AsyncRetryPolicy(**kwargs) self.authentication_policy = kwargs.get("authentication_policy") if self.credential and not self.authentication_policy: - self.authentication_policy = policies.AzureKeyCredentialPolicy( - self.credential, "Ocp-Apim-Subscription-Key", **kwargs - ) + self.authentication_policy = self._infer_policy(**kwargs) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_operations/_operations.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_operations/_operations.py index 161b5411b449..3c679fc3beb0 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_operations/_operations.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_operations/_operations.py @@ -9,7 +9,7 @@ from io import IOBase import json import sys -from typing import Any, Callable, Dict, IO, List, Optional, TypeVar, Union, overload +from typing import Any, Callable, Dict, IO, List, Optional, Type, TypeVar, Union, overload from azure.core.exceptions import ( ClientAuthenticationError, @@ -42,10 +42,11 @@ class ImageAnalysisClientOperationsMixin(ImageAnalysisClientMixinABC): + @distributed_trace_async async def _analyze_from_image_data( self, - image_content: bytes, + image_data: bytes, *, visual_features: List[Union[str, _models.VisualFeatures]], language: Optional[str] = None, @@ -54,11 +55,10 @@ async def _analyze_from_image_data( model_version: Optional[str] = None, **kwargs: Any ) -> _models.ImageAnalysisResult: - # pylint: disable=line-too-long """Performs a single Image Analysis operation. - :param image_content: The image to be analyzed. Required. - :type image_content: bytes + :param image_data: The image to be analyzed. Required. + :type image_data: bytes :keyword visual_features: A list of visual features to analyze. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. 
@@ -99,33 +99,25 @@ async def _analyze_from_image_data( # response body for status code(s): 200 response == { "metadata": { - "height": 0, # The height of the image in pixels. Required. - "width": 0 # The width of the image in pixels. Required. + "height": 0, + "width": 0 }, - "modelVersion": "str", # The cloud AI model used for the analysis. Required. + "modelVersion": "str", "captionResult": { - "confidence": 0.0, # A score, in the range of 0 to 1 (inclusive), - representing the confidence that this description is accurate. Higher values - indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" }, "denseCaptionsResult": { "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this description is - accurate. Higher values indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" } ] }, @@ -133,23 +125,15 @@ async def _analyze_from_image_data( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, "tags": [ { - "confidence": 0.0, # A score, in the - range of 0 to 1 (inclusive), representing the confidence that - this entity was observed. Higher values indicating higher - confidence. Required. - "name": "str" # Name of the entity. - Required. + "confidence": 0.0, + "name": "str" } ] } @@ -159,18 +143,12 @@ async def _analyze_from_image_data( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0 # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this detection was - accurate. Higher values indicating higher confidence. Required. + "confidence": 0.0 } ] }, @@ -181,39 +159,23 @@ async def _analyze_from_image_data( { "boundingPolygon": [ { - "x": 0, # The - horizontal x-coordinate of this point, in pixels. - Zero values corresponds to the left-most pixels in - the image. Required. - "y": 0 # The - vertical y-coordinate of this point, in pixels. Zero - values corresponds to the top-most pixels in the - image. Required. + "x": 0, + "y": 0 } ], - "text": "str", # Text content of the - detected text line. Required. + "text": "str", "words": [ { "boundingPolygon": [ { "x": - 0, # The horizontal x-coordinate of this - point, in pixels. Zero values corresponds to - the left-most pixels in the image. Required. + 0, "y": - 0 # The vertical y-coordinate of this point, - in pixels. Zero values corresponds to the - top-most pixels in the image. Required. + 0 } ], - "confidence": 0.0, # - The level of confidence that the word was detected. 
- Confidence scores span the range of 0.0 to 1.0 - (inclusive), with higher values indicating a higher - confidence of detection. Required. - "text": "str" # Text - content of the word. Required. + "confidence": 0.0, + "text": "str" } ] } @@ -224,21 +186,12 @@ async def _analyze_from_image_data( "smartCropsResult": { "values": [ { - "aspectRatio": 0.0, # The aspect ratio of the crop - region. Aspect ratio is calculated by dividing the width of the - region in pixels by its height in pixels. The aspect ratio will be in - the range 0.75 to 1.8 (inclusive) if provided by the developer during - the analyze call. Otherwise, it will be in the range 0.5 to 2.0 - (inclusive). Required. + "aspectRatio": 0.0, "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 } } ] @@ -246,16 +199,14 @@ async def _analyze_from_image_data( "tagsResult": { "values": [ { - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this entity was - observed. Higher values indicating higher confidence. Required. - "name": "str" # Name of the entity. Required. + "confidence": 0.0, + "name": "str" } ] } } """ - error_map = { + error_map: MutableMapping[int, Type[HttpResponseError]] = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, @@ -269,7 +220,7 @@ async def _analyze_from_image_data( content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) cls: ClsType[_models.ImageAnalysisResult] = kwargs.pop("cls", None) - _content = image_content + _content = image_data _request = build_image_analysis_analyze_from_image_data_request( visual_features=visual_features, @@ -314,7 +265,7 @@ async def _analyze_from_image_data( @overload async def _analyze_from_url( # pylint: disable=protected-access self, - image_content: _models._models.ImageUrl, + image_url: _models._models.ImageUrl, *, visual_features: List[Union[str, _models.VisualFeatures]], content_type: str = "application/json", @@ -323,13 +274,11 @@ async def _analyze_from_url( # pylint: disable=protected-access smart_crops_aspect_ratios: Optional[List[float]] = None, model_version: Optional[str] = None, **kwargs: Any - ) -> _models.ImageAnalysisResult: - ... - + ) -> _models.ImageAnalysisResult: ... @overload async def _analyze_from_url( self, - image_content: JSON, + image_url: JSON, *, visual_features: List[Union[str, _models.VisualFeatures]], content_type: str = "application/json", @@ -338,13 +287,11 @@ async def _analyze_from_url( smart_crops_aspect_ratios: Optional[List[float]] = None, model_version: Optional[str] = None, **kwargs: Any - ) -> _models.ImageAnalysisResult: - ... - + ) -> _models.ImageAnalysisResult: ... @overload async def _analyze_from_url( self, - image_content: IO[bytes], + image_url: IO[bytes], *, visual_features: List[Union[str, _models.VisualFeatures]], content_type: str = "application/json", @@ -353,13 +300,12 @@ async def _analyze_from_url( smart_crops_aspect_ratios: Optional[List[float]] = None, model_version: Optional[str] = None, **kwargs: Any - ) -> _models.ImageAnalysisResult: - ... + ) -> _models.ImageAnalysisResult: ... 
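The three `@overload` declarations above only describe the input shapes the operation accepts (an `ImageUrl` model, a plain JSON mapping, or a byte stream); the single runtime implementation that follows serializes model and mapping input to a JSON body and passes streams through unchanged. A simplified, self-contained illustration of that pattern (the function name and types are illustrative, not part of the SDK):

```python
import json
from io import IOBase
from typing import IO, Union, overload


@overload
def to_request_content(image_url: dict) -> str: ...
@overload
def to_request_content(image_url: IO[bytes]) -> IO[bytes]: ...


def to_request_content(image_url: Union[dict, IO[bytes]]) -> Union[str, IO[bytes]]:
    # Streams become the request body as-is; mappings are serialized to JSON.
    if isinstance(image_url, IOBase):
        return image_url
    return json.dumps(image_url)


print(to_request_content({"url": "https://example.com/sample.jpg"}))
```

Static type checkers see only the two overload signatures, so callers get a precise return type for each input shape while the implementation stays in one place.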
@distributed_trace_async async def _analyze_from_url( self, - image_content: Union[_models._models.ImageUrl, JSON, IO[bytes]], + image_url: Union[_models._models.ImageUrl, JSON, IO[bytes]], *, visual_features: List[Union[str, _models.VisualFeatures]], language: Optional[str] = None, @@ -368,12 +314,11 @@ async def _analyze_from_url( model_version: Optional[str] = None, **kwargs: Any ) -> _models.ImageAnalysisResult: - # pylint: disable=line-too-long """Performs a single Image Analysis operation. - :param image_content: The image to be analyzed. Is one of the following types: ImageUrl, JSON, + :param image_url: The image to be analyzed. Is one of the following types: ImageUrl, JSON, IO[bytes] Required. - :type image_content: ~azure.ai.vision.imageanalysis.models.ImageUrl or JSON or IO[bytes] + :type image_url: ~azure.ai.vision.imageanalysis.models._models.ImageUrl or JSON or IO[bytes] :keyword visual_features: A list of visual features to analyze. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. @@ -412,40 +357,32 @@ async def _analyze_from_url( .. code-block:: python # JSON input template you can fill out and use as your body input. - image_content = { - "url": "str" # Publicly reachable URL of an image to analyze. Required. + image_url = { + "url": "str" } # response body for status code(s): 200 response == { "metadata": { - "height": 0, # The height of the image in pixels. Required. - "width": 0 # The width of the image in pixels. Required. + "height": 0, + "width": 0 }, - "modelVersion": "str", # The cloud AI model used for the analysis. Required. + "modelVersion": "str", "captionResult": { - "confidence": 0.0, # A score, in the range of 0 to 1 (inclusive), - representing the confidence that this description is accurate. Higher values - indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" }, "denseCaptionsResult": { "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this description is - accurate. Higher values indicating higher confidence. Required. - "text": "str" # The text of the caption. Required. + "confidence": 0.0, + "text": "str" } ] }, @@ -453,23 +390,15 @@ async def _analyze_from_url( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, "tags": [ { - "confidence": 0.0, # A score, in the - range of 0 to 1 (inclusive), representing the confidence that - this entity was observed. Higher values indicating higher - confidence. Required. - "name": "str" # Name of the entity. - Required. + "confidence": 0.0, + "name": "str" } ] } @@ -479,18 +408,12 @@ async def _analyze_from_url( "values": [ { "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. 
- "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 }, - "confidence": 0.0 # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this detection was - accurate. Higher values indicating higher confidence. Required. + "confidence": 0.0 } ] }, @@ -501,39 +424,23 @@ async def _analyze_from_url( { "boundingPolygon": [ { - "x": 0, # The - horizontal x-coordinate of this point, in pixels. - Zero values corresponds to the left-most pixels in - the image. Required. - "y": 0 # The - vertical y-coordinate of this point, in pixels. Zero - values corresponds to the top-most pixels in the - image. Required. + "x": 0, + "y": 0 } ], - "text": "str", # Text content of the - detected text line. Required. + "text": "str", "words": [ { "boundingPolygon": [ { "x": - 0, # The horizontal x-coordinate of this - point, in pixels. Zero values corresponds to - the left-most pixels in the image. Required. + 0, "y": - 0 # The vertical y-coordinate of this point, - in pixels. Zero values corresponds to the - top-most pixels in the image. Required. + 0 } ], - "confidence": 0.0, # - The level of confidence that the word was detected. - Confidence scores span the range of 0.0 to 1.0 - (inclusive), with higher values indicating a higher - confidence of detection. Required. - "text": "str" # Text - content of the word. Required. + "confidence": 0.0, + "text": "str" } ] } @@ -544,21 +451,12 @@ async def _analyze_from_url( "smartCropsResult": { "values": [ { - "aspectRatio": 0.0, # The aspect ratio of the crop - region. Aspect ratio is calculated by dividing the width of the - region in pixels by its height in pixels. The aspect ratio will be in - the range 0.75 to 1.8 (inclusive) if provided by the developer during - the analyze call. Otherwise, it will be in the range 0.5 to 2.0 - (inclusive). Required. + "aspectRatio": 0.0, "boundingBox": { - "h": 0, # Height of the area, in pixels. - Required. - "w": 0, # Width of the area, in pixels. - Required. - "x": 0, # X-coordinate of the top left point - of the area, in pixels. Required. - "y": 0 # Y-coordinate of the top left point - of the area, in pixels. Required. + "h": 0, + "w": 0, + "x": 0, + "y": 0 } } ] @@ -566,16 +464,14 @@ async def _analyze_from_url( "tagsResult": { "values": [ { - "confidence": 0.0, # A score, in the range of 0 to 1 - (inclusive), representing the confidence that this entity was - observed. Higher values indicating higher confidence. Required. - "name": "str" # Name of the entity. Required. 
+ "confidence": 0.0, + "name": "str" } ] } } """ - error_map = { + error_map: MutableMapping[int, Type[HttpResponseError]] = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, @@ -591,10 +487,10 @@ async def _analyze_from_url( content_type = content_type or "application/json" _content = None - if isinstance(image_content, (IOBase, bytes)): - _content = image_content + if isinstance(image_url, (IOBase, bytes)): + _content = image_url else: - _content = json.dumps(image_content, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + _content = json.dumps(image_url, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore _request = build_image_analysis_analyze_from_url_request( visual_features=visual_features, diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_patch.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_patch.py index 117a0e6c4ebb..4b6076ad0f72 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_patch.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/aio/_patch.py @@ -20,8 +20,10 @@ class ImageAnalysisClient(ImageAnalysisClientGenerated): :param endpoint: Azure AI Computer Vision endpoint (protocol and hostname, for example: https://:code:``.cognitiveservices.azure.com). Required. :type endpoint: str - :param credential: Credential needed for the client to connect to Azure. Required. - :type credential: ~azure.core.credentials.AzureKeyCredential + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials.TokenCredential :keyword api_version: The API version to use for this operation. Default value is "2023-10-01". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str @@ -71,14 +73,14 @@ async def analyze_from_url( :paramtype model_version: str :return: ImageAnalysisResult. The ImageAnalysisResult is compatible with MutableMapping :rtype: ~azure.ai.vision.imageanalysis.models.ImageAnalysisResult - :raises: ~azure.core.exceptions.HttpResponseError + :raises ~azure.core.exceptions.HttpResponseError: """ visual_features_impl: List[Union[str, _models.VisualFeatures]] = list(visual_features) return await ImageAnalysisClientOperationsMixin._analyze_from_url( # pylint: disable=protected-access self, - image_content=_models._models.ImageUrl(url=image_url), # pylint: disable=protected-access + image_url=_models._models.ImageUrl(url=image_url), # pylint: disable=protected-access visual_features=visual_features_impl, language=language, gender_neutral_caption=gender_neutral_caption, @@ -87,7 +89,6 @@ async def analyze_from_url( **kwargs ) - @distributed_trace_async async def analyze( self, @@ -132,14 +133,14 @@ async def analyze( :paramtype model_version: str :return: ImageAnalysisResult. 
The ImageAnalysisResult is compatible with MutableMapping :rtype: ~azure.ai.vision.imageanalysis.models.ImageAnalysisResult - :raises: ~azure.core.exceptions.HttpResponseError + :raises ~azure.core.exceptions.HttpResponseError: """ visual_features_impl: List[Union[str, _models.VisualFeatures]] = list(visual_features) return await ImageAnalysisClientOperationsMixin._analyze_from_image_data( # pylint: disable=protected-access self, - image_content=image_data, + image_data=image_data, visual_features=visual_features_impl, language=language, gender_neutral_caption=gender_neutral_caption, diff --git a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/models/_models.py b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/models/_models.py index a7288c2fbe6f..f4ff4c276926 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/models/_models.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis/models/_models.py @@ -20,7 +20,6 @@ class CaptionResult(_model_base.Model): """Represents a generated phrase that describes the content of the whole image. - All required parameters must be populated in order to send to server. :ivar confidence: A score, in the range of 0 to 1 (inclusive), representing the confidence that this description is accurate. @@ -43,8 +42,7 @@ def __init__( *, confidence: float, text: str, - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -62,7 +60,6 @@ class CropRegion(_model_base.Model): The region preserves as much content as possible from the analyzed image, with priority given to detected faces. - All required parameters must be populated in order to send to server. :ivar aspect_ratio: The aspect ratio of the crop region. Aspect ratio is calculated by dividing the width of the region in pixels by its height in @@ -91,8 +88,7 @@ def __init__( *, aspect_ratio: float, bounding_box: "_models.ImageBoundingBox", - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -109,7 +105,6 @@ class DenseCaption(_model_base.Model): """Represents a generated phrase that describes the content of the whole image or a region in the image. - All required parameters must be populated in order to send to server. :ivar confidence: A score, in the range of 0 to 1 (inclusive), representing the confidence that this description is accurate. @@ -137,8 +132,7 @@ def __init__( confidence: float, text: str, bounding_box: "_models.ImageBoundingBox", - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -155,7 +149,6 @@ class DenseCaptionsResult(_model_base.Model): """Represents a list of up to 10 image captions for different regions of the image. The first caption always applies to the whole image. - All required parameters must be populated in order to send to server. :ivar list: The list of image captions. Required. :vartype list: list[~azure.ai.vision.imageanalysis.models.DenseCaption] @@ -169,8 +162,7 @@ def __init__( self, *, list: List["_models.DenseCaption"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -186,7 +178,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class DetectedObject(_model_base.Model): """Represents a physical object detected in an image. - All required parameters must be populated in order to send to server. :ivar bounding_box: A rectangular boundary where the object was detected. Required. 
:vartype bounding_box: ~azure.ai.vision.imageanalysis.models.ImageBoundingBox @@ -205,8 +196,7 @@ def __init__( *, bounding_box: "_models.ImageBoundingBox", tags: List["_models.DetectedTag"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -224,7 +214,6 @@ class DetectedPerson(_model_base.Model): Readonly variables are only populated by the server, and will be ignored when sending a request. - All required parameters must be populated in order to send to server. :ivar bounding_box: A rectangular boundary where the person was detected. Required. :vartype bounding_box: ~azure.ai.vision.imageanalysis.models.ImageBoundingBox @@ -247,7 +236,6 @@ class DetectedTag(_model_base.Model): scenery, or action that appear in the image. - All required parameters must be populated in order to send to server. :ivar confidence: A score, in the range of 0 to 1 (inclusive), representing the confidence that this entity was observed. @@ -270,8 +258,7 @@ def __init__( *, confidence: float, name: str, - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -287,7 +274,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class DetectedTextBlock(_model_base.Model): """Represents a single block of detected text in the image. - All required parameters must be populated in order to send to server. :ivar lines: A list of text lines in this block. Required. :vartype lines: list[~azure.ai.vision.imageanalysis.models.DetectedTextLine] @@ -301,8 +287,7 @@ def __init__( self, *, lines: List["_models.DetectedTextLine"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -318,7 +303,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class DetectedTextLine(_model_base.Model): """Represents a single line of text in the image. - All required parameters must be populated in order to send to server. :ivar text: Text content of the detected text line. Required. :vartype text: str @@ -344,8 +328,7 @@ def __init__( text: str, bounding_polygon: List["_models.ImagePoint"], words: List["_models.DetectedTextWord"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -363,7 +346,6 @@ class DetectedTextWord(_model_base.Model): languages, such as Chinese, Japanese, and Korean, each character is represented as its own word. - All required parameters must be populated in order to send to server. :ivar text: Text content of the word. Required. :vartype text: str @@ -392,8 +374,7 @@ def __init__( text: str, bounding_polygon: List["_models.ImagePoint"], confidence: float, - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -409,7 +390,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class ImageAnalysisResult(_model_base.Model): """Represents the outcome of an Image Analysis operation. - All required parameters must be populated in order to send to server. :ivar caption: The generated phrase that describes the content of the analyzed image. :vartype caption: ~azure.ai.vision.imageanalysis.models.CaptionResult @@ -473,8 +453,7 @@ def __init__( read: Optional["_models.ReadResult"] = None, smart_crops: Optional["_models.SmartCropsResult"] = None, tags: Optional["_models.TagsResult"] = None, - ): - ... + ): ... 
@overload def __init__(self, mapping: Mapping[str, Any]): @@ -490,7 +469,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class ImageBoundingBox(_model_base.Model): """A basic rectangle specifying a sub-region of the image. - All required parameters must be populated in order to send to server. :ivar x: X-coordinate of the top left point of the area, in pixels. Required. :vartype x: int @@ -519,8 +497,7 @@ def __init__( y: int, width: int, height: int, - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -536,7 +513,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class ImageMetadata(_model_base.Model): """Metadata associated with the analyzed image. - All required parameters must be populated in order to send to server. :ivar height: The height of the image in pixels. Required. :vartype height: int @@ -555,8 +531,7 @@ def __init__( *, height: int, width: int, - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -572,7 +547,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class ImagePoint(_model_base.Model): """Represents the coordinates of a single pixel in the image. - All required parameters must be populated in order to send to server. :ivar x: The horizontal x-coordinate of this point, in pixels. Zero values corresponds to the left-most pixels in the image. Required. @@ -595,8 +569,7 @@ def __init__( *, x: int, y: int, - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -625,7 +598,6 @@ class ImageUrl(_model_base.Model): class ObjectsResult(_model_base.Model): """Represents a list of physical object detected in an image and their location. - All required parameters must be populated in order to send to server. :ivar list: A list of physical object detected in an image and their location. Required. :vartype list: list[~azure.ai.vision.imageanalysis.models.DetectedObject] @@ -639,8 +611,7 @@ def __init__( self, *, list: List["_models.DetectedObject"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -656,7 +627,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class PeopleResult(_model_base.Model): """Represents a list of people detected in an image and their location. - All required parameters must be populated in order to send to server. :ivar list: A list of people detected in an image and their location. Required. :vartype list: list[~azure.ai.vision.imageanalysis.models.DetectedPerson] @@ -670,8 +640,7 @@ def __init__( self, *, list: List["_models.DetectedPerson"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -687,7 +656,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class ReadResult(_model_base.Model): """The results of a Read (OCR) operation. - All required parameters must be populated in order to send to server. :ivar blocks: A list of text blocks in the image. At the moment only one block is returned, containing all the text detected in the image. Required. @@ -703,8 +671,7 @@ def __init__( self, *, blocks: List["_models.DetectedTextBlock"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -723,7 +690,6 @@ class SmartCropsResult(_model_base.Model): These regions preserve as much content as possible from the analyzed image, with priority given to detected faces. 
- All required parameters must be populated in order to send to server. :ivar list: A list of crop regions. Required. :vartype list: list[~azure.ai.vision.imageanalysis.models.CropRegion] @@ -737,8 +703,7 @@ def __init__( self, *, list: List["_models.CropRegion"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): @@ -756,7 +721,6 @@ class TagsResult(_model_base.Model): or actions that appear in the image. - All required parameters must be populated in order to send to server. :ivar list: A list of tags. Required. :vartype list: list[~azure.ai.vision.imageanalysis.models.DetectedTag] @@ -770,8 +734,7 @@ def __init__( self, *, list: List["_models.DetectedTag"], - ): - ... + ): ... @overload def __init__(self, mapping: Mapping[str, Any]): diff --git a/sdk/vision/azure-ai-vision-imageanalysis/samples/README.md b/sdk/vision/azure-ai-vision-imageanalysis/samples/README.md index 8ae53f23a2e6..78da2bd3b5ac 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/samples/README.md +++ b/sdk/vision/azure-ai-vision-imageanalysis/samples/README.md @@ -10,19 +10,26 @@ urlFragment: image-analysis-samples # Samples for Image Analysis client library for Python -These are runnable console Python programs that show how to use the Image Analysis client library. They cover all the supported visual features. Most use the a synchronous client to analyze an image file or image URL. Two samples use the asynchronous client. The concepts are similar, you can easily modify any of the samples to your needs. +These are runnable console Python programs that show how to use the Image Analysis client library. + +- They cover all the supported visual features. +- Most use the synchronous client to analyze an image file or image URL. Three samples (located in the `async_samples` folder) use the asynchronous client. +- Most use API key authentication. Two samples (having `_entra_id_auth` in their name) use Entra ID authentication. + +The concepts are similar, you can easily modify any of the samples to your needs. ## Synchronous client samples |**File Name**|**Description**| |----------------|-------------| |[sample_analyze_all_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_analyze_all_image_file.py) | Extract all 7 visual features from an image file, using a synchronous client. Logging turned on.| -|[sample_caption_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file.py) and [sample_caption_image_url.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_url.py)| Generate a human-readable sentence that describes the content of an image file or image URL, using a synchronous client. 
| +|[sample_caption_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file.py) and [sample_caption_image_url.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_url.py)| Generate a human-readable sentence that describes the content of an image file or image URL, using a synchronous client.| +|[sample_caption_image_file_entra_id_auth.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file_entra_id_auth.py) | Generate a human-readable sentence that describes the content of an image file, using a synchronous client and Entra ID authentication.| |[sample_dense_captions_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_dense_captions_image_file.py) | Generating a human-readable caption for up to 10 different regions in the image, including one for the whole image, using a synchronous client.| -|[sample_ocr_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_ocr_image_file.py) and [sample_ocr_image_url.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_ocr_image_url.py)| Extract printed or handwritten text from an image file or image URL, using a synchronous client. | -|[sample_tags_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_tags_image_file.py) | Extract content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in an image file, using a synchronous client. | +|[sample_ocr_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_ocr_image_file.py) and [sample_ocr_image_url.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_ocr_image_url.py)| Extract printed or handwritten text from an image file or image URL, using a synchronous client.| +|[sample_tags_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_tags_image_file.py) | Extract content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in an image file, using a synchronous client.| |[sample_objects_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_objects_image_file.py) | Detect physical objects in an image file and return their location, using a synchronous client. 
| -|[sample_smart_crops_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_smart_crops_image_file.py) | Find a representative sub-region of the image for thumbnail generation, using a synchronous client .| +|[sample_smart_crops_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_smart_crops_image_file.py) | Find a representative sub-region of the image for thumbnail generation, using a synchronous client.| |[sample_people_image_file.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_people_image_file.py) | Locate people in the image and return their location, using a synchronous client.| ## Asynchronous client samples @@ -31,6 +38,8 @@ These are runnable console Python programs that show how to use the Image Analys |----------------|-------------| |[sample_caption_image_file_async.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_caption_image_file_async.py) | Generate a human-readable sentence that describes the content of an image file, using an asynchronous client. | |[sample_ocr_image_url_async.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_async.py) | Extract printed or handwritten text from an image URL, using an asynchronous client. | +|[sample_ocr_image_url_entra_id_auth_async.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_entra_id_auth_async.py) | Extract printed or handwritten text from an image URL, using an asynchronous client and Entra ID authentication | + ## Prerequisites See [Prerequisites](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/README.md#prerequisites) here. diff --git a/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_async.py b/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_async.py index 502d05d4417d..103b79406589 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_async.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_async.py @@ -31,7 +31,7 @@ import asyncio -async def sample_ocr_image_file_async(): +async def sample_ocr_image_url_async(): import os from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient from azure.ai.vision.imageanalysis.models import VisualFeatures @@ -74,7 +74,7 @@ async def sample_ocr_image_file_async(): async def main(): - await sample_ocr_image_file_async() + await sample_ocr_image_url_async() if __name__ == "__main__": diff --git a/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_entra_id_auth_async.py b/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_entra_id_auth_async.py new file mode 100644 index 000000000000..e3bc95c2197c --- /dev/null +++ b/sdk/vision/azure-ai-vision-imageanalysis/samples/async_samples/sample_ocr_image_url_entra_id_auth_async.py @@ -0,0 +1,79 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. 
+# ------------------------------------
+"""
+DESCRIPTION:
+    This sample demonstrates how to extract printed or handwritten text from a
+    publicly accessible image URL, using an asynchronous client and Entra ID authentication.
+
+    The asynchronous `analyze` method call returns an `ImageAnalysisResult` object.
+    Its `read` property (a `ReadResult` object) includes a list of `DetectedTextBlock` objects. Currently, the
+    list will always contain one element only, as the service does not yet support grouping text lines
+    into separate blocks. The `DetectedTextBlock` object contains a list of `DetectedTextLine` objects. Each one includes:
+    - The text content of the line.
+    - `BoundingPolygon` coordinates, in pixels, for a polygon surrounding the line of text in the image.
+    - A list of `DetectedTextWord` objects.
+    Each `DetectedTextWord` object contains:
+    - The text content of the word.
+    - `BoundingPolygon` coordinates, in pixels, for a polygon surrounding the word in the image.
+    - A confidence score in the range [0, 1], with higher values indicating greater confidence in
+      the recognition of the word.
+
+USAGE:
+    python sample_ocr_image_url_entra_id_auth_async.py
+
+    Set this environment variable before running the sample:
+    VISION_ENDPOINT - Your endpoint URL, in the form https://your-resource-name.cognitiveservices.azure.com
+                      where `your-resource-name` is your unique Azure Computer Vision resource name.
+"""
+import asyncio
+
+
+async def sample_ocr_image_url_entra_id_auth_async():
+    import os
+    from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient
+    from azure.ai.vision.imageanalysis.models import VisualFeatures
+    from azure.identity.aio import DefaultAzureCredential
+
+    # Set the value of your computer vision endpoint as environment variable:
+    try:
+        endpoint = os.environ["VISION_ENDPOINT"]
+    except KeyError:
+        print("Missing environment variable 'VISION_ENDPOINT'.")
+        print("Set it before running this sample.")
+        exit()
+
+    # Create an asynchronous Image Analysis client
+    client = ImageAnalysisClient(
+        endpoint=endpoint,
+        credential=DefaultAzureCredential(exclude_interactive_browser_credential=False),
+    )
+
+    # Extract text (OCR) from an image URL, asynchronously.
+ result = await client.analyze_from_url( + image_url="https://aka.ms/azsdk/image-analysis/sample.jpg", + visual_features=[VisualFeatures.READ] + ) + + await client.close() + + # Print text (OCR) analysis results to the console + print("Image analysis results:") + print(" Read:") + if result.read is not None: + for line in result.read.blocks[0].lines: + print(f" Line: '{line.text}', Bounding box {line.bounding_polygon}") + for word in line.words: + print(f" Word: '{word.text}', Bounding polygon {word.bounding_polygon}, Confidence {word.confidence:.4f}") + print(f" Image height: {result.metadata.height}") + print(f" Image width: {result.metadata.width}") + print(f" Model version: {result.model_version}") + + +async def main(): + await sample_ocr_image_url_entra_id_auth_async() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file.py b/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file.py index 5570b2b4996f..d5fd0a12fc6d 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file.py @@ -44,7 +44,8 @@ def sample_caption_image_file(): print("Set them before running this sample.") exit() - # Create an Image Analysis client for synchronous operations + # Create an Image Analysis client for synchronous operations, + # using API key authentication client = ImageAnalysisClient( endpoint=endpoint, credential=AzureKeyCredential(key) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file_entra_id_auth.py b/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file_entra_id_auth.py new file mode 100644 index 000000000000..e4fd3387e947 --- /dev/null +++ b/sdk/vision/azure-ai-vision-imageanalysis/samples/sample_caption_image_file_entra_id_auth.py @@ -0,0 +1,74 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +DESCRIPTION: + This sample demonstrates how to generate a human-readable sentence that describes the content + of the image file sample.jpg, using a synchronous client. It uses Entra ID authentication. + + By default the caption may contain gender terms such as "man", "woman", or "boy", "girl". + You have the option to request gender-neutral terms such as "person" or "child" by setting + `gender_neutral_caption = True` when calling `analyze`, as shown in this example. + + The synchronous (blocking) `analyze` method call returns an `ImageAnalysisResult` object. + Its `caption` property (a `CaptionResult` object) contains: + - The text of the caption. Captions are only supported in English at the moment. + - A confidence score in the range [0, 1], with higher values indicating greater confidences in + the caption. + +USAGE: + python sample_caption_image_file_entra_id_auth.py + + Set this environment variables before running the sample: + VISION_ENDPOINT - Your endpoint URL, in the form https://your-resource-name.cognitiveservices.azure.com + where `your-resource-name` is your unique Azure Computer Vision resource name. 
+""" + + +def sample_caption_image_file_entra_id_auth(): + # [START create_client] + import os + from azure.ai.vision.imageanalysis import ImageAnalysisClient + from azure.ai.vision.imageanalysis.models import VisualFeatures + from azure.identity import DefaultAzureCredential + + # Set the value of your computer vision endpoint as environment variable: + try: + endpoint = os.environ["VISION_ENDPOINT"] + except KeyError: + print("Missing environment variable 'VISION_ENDPOINT'.") + print("Set it before running this sample.") + exit() + + # Create an Image Analysis client for synchronous operations, + # using Entra ID authentication + client = ImageAnalysisClient( + endpoint=endpoint, + credential=DefaultAzureCredential(exclude_interactive_browser_credential=False), + ) + # [END create_client] + + # Load image to analyze into a 'bytes' object + with open("sample.jpg", "rb") as f: + image_data = f.read() + + # Get a caption for the image. This will be a synchronously (blocking) call. + result = client.analyze( + image_data=image_data, + visual_features=[VisualFeatures.CAPTION], + gender_neutral_caption=True, # Optional (default is False) + ) + + # Print caption results to the console + print("Image analysis results:") + print(" Caption:") + if result.caption is not None: + print(f" '{result.caption.text}', Confidence {result.caption.confidence:.4f}") + print(f" Image height: {result.metadata.height}") + print(f" Image width: {result.metadata.width}") + print(f" Model version: {result.model_version}") + + +if __name__ == "__main__": + sample_caption_image_file_entra_id_auth() diff --git a/sdk/vision/azure-ai-vision-imageanalysis/setup.py b/sdk/vision/azure-ai-vision-imageanalysis/setup.py index b8b29a5bb973..5d8ef050c3db 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/setup.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/setup.py @@ -64,8 +64,9 @@ "azure.ai.vision.imageanalysis": ["py.typed"], }, install_requires=[ - "isodate<1.0.0,>=0.6.1", - "azure-core<2.0.0,>=1.30.0", + "isodate>=0.6.1", + "azure-core>=1.30.0", + "typing-extensions>=4.6.0", ], python_requires=">=3.8", ) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/tests/README.md b/sdk/vision/azure-ai-vision-imageanalysis/tests/README.md index bc4df8af54ad..5db5e62c0385 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/tests/README.md +++ b/sdk/vision/azure-ai-vision-imageanalysis/tests/README.md @@ -26,22 +26,27 @@ See [Prerequisites](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ pip install dist\azure_ai_vision_imageanalysis-1.0.0b1-py3-none-any.whl --user --force-reinstall ``` - ### Set environment variables See [Set environment variables](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/vision/azure-ai-vision-imageanalysis/README.md#set-environment-variables). In addition, the following environment values **must be** defined, although not used. Assign any value to them: -``` + +```cmd set VISION_TENANT_ID=not-used set VISION_CLIENT_ID=not-used set VISION_CLIENT_SECRET=not-used ``` +### Log in to Azure + +Install the Azure CLI and run `az login`, so tests that use Entra ID authentication can pass. 
+ ### Configure test proxy Configure the test proxy to run live service tests without recordings: -``` + +```cmd set AZURE_TEST_RUN_LIVE=true set AZURE_SKIP_LIVE_RECORDING=true ``` @@ -49,7 +54,8 @@ set AZURE_SKIP_LIVE_RECORDING=true ### Run tests To run all tests, type: -``` + +```cmd pytest ``` diff --git a/sdk/vision/azure-ai-vision-imageanalysis/tests/conftest.py b/sdk/vision/azure-ai-vision-imageanalysis/tests/conftest.py index 91e541e4d1bd..d944cdf86007 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/tests/conftest.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/tests/conftest.py @@ -4,9 +4,17 @@ # ------------------------------------ import pytest -from devtools_testutils import test_proxy +from devtools_testutils import test_proxy, remove_batch_sanitizers + # autouse=True will trigger this fixture on each pytest run, even if it's not explicitly used by a test method @pytest.fixture(scope="session", autouse=True) def start_proxy(test_proxy): return + + +@pytest.fixture(scope="session", autouse=True) +def add_sanitizers(test_proxy): + # Remove the following sanitizers since certain fields are needed in tests and are non-sensitive: + # - AZSDK3493: $..name + remove_batch_sanitizers(["AZSDK3493"]) diff --git a/sdk/vision/azure-ai-vision-imageanalysis/tests/image_analysis_test_base.py b/sdk/vision/azure-ai-vision-imageanalysis/tests/image_analysis_test_base.py index f91c45ebd1e3..89a3b33e4d8a 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/tests/image_analysis_test_base.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/tests/image_analysis_test_base.py @@ -11,7 +11,7 @@ from os import path from typing import List, Optional, Union from devtools_testutils import AzureRecordedTestCase, EnvironmentVariableLoader -from azure.core.credentials import AzureKeyCredential +from azure.core.credentials import AzureKeyCredential, TokenCredential from azure.core.exceptions import AzureError from azure.core.pipeline import PipelineRequest @@ -53,15 +53,26 @@ class ImageAnalysisTestBase(AzureRecordedTestCase): def _create_client_for_standard_analysis(self, sync: bool, get_connection_url: bool = False, **kwargs): endpoint = kwargs.pop("vision_endpoint") key = kwargs.pop("vision_key") - self._create_client(endpoint, key, sync, get_connection_url) + credential = AzureKeyCredential(key) + self._create_client(endpoint, credential, sync, get_connection_url) + + def _create_client_for_standard_analysis_with_entra_id_auth(self, sync: bool, get_connection_url: bool = False, **kwargs): + endpoint = kwargs.pop("vision_endpoint") + # See /tools/azure-sdk-tools/devtools_testutils/azure_recorded_testcase.py for `get_credential` + if sync: + credential = self.get_credential(sdk.ImageAnalysisClient, is_async=False) + else: + credential = self.get_credential(async_sdk.ImageAnalysisClient, is_async=True) + self._create_client(endpoint, credential, sync, get_connection_url) def _create_client_for_authentication_failure(self, sync: bool, **kwargs): endpoint = kwargs.pop("vision_endpoint") key = "00000000000000000000000000000000" - self._create_client(endpoint, key, sync, False) - - def _create_client(self, endpoint: str, key: str, sync: bool, get_connection_url: bool): credential = AzureKeyCredential(key) + self._create_client(endpoint, credential, sync, False) + + def _create_client(self, endpoint: str, credential: Union[AzureKeyCredential, TokenCredential], sync: bool, get_connection_url: bool): + if sync: self.client = sdk.ImageAnalysisClient( endpoint=endpoint, diff --git 
a/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_async_client.py b/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_async_client.py index d73d5f4d921d..3bfceee97975 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_async_client.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_async_client.py @@ -8,6 +8,7 @@ from image_analysis_test_base import ImageAnalysisTestBase, ServicePreparer from devtools_testutils.aio import recorded_by_proxy_async + # The test class name needs to start with "Test" to get collected by pytest class TestImageAnalysisAsyncClient(ImageAnalysisTestBase): @@ -71,6 +72,17 @@ async def test_analyze_async_single_feature_from_url(self, **kwargs): await self.async_client.close() + # Test a single visual feature from an image url, using Entra ID authentication + @ServicePreparer() + @recorded_by_proxy_async + async def test_analyze_async_single_feature_from_file_entra_id_auth(self, **kwargs): + + self._create_client_for_standard_analysis_with_entra_id_auth(sync=False, **kwargs) + + await self._do_async_analysis(image_source=self.IMAGE_FILE,visual_features=[sdk.models.VisualFeatures.SMART_CROPS], **kwargs) + + await self.async_client.close() + # ********************************************************************************** # # ERROR TESTS diff --git a/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_client.py b/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_client.py index 1b467c7226a9..508be2b23f9f 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_client.py +++ b/sdk/vision/azure-ai-vision-imageanalysis/tests/test_image_analysis_client.py @@ -65,6 +65,17 @@ def test_analyze_sync_single_feature_from_file(self, **kwargs): self.client.close() + # Test a single visual feature from an image url, using Entra ID authentication + @ServicePreparer() + @recorded_by_proxy + def test_analyze_sync_single_feature_from_url_entra_id_auth(self, **kwargs): + + self._create_client_for_standard_analysis_with_entra_id_auth(sync=True, **kwargs) + + self._do_analysis(image_source=self.IMAGE_URL,visual_features=[sdk.models.VisualFeatures.OBJECTS], **kwargs) + + self.client.close() + # ********************************************************************************** # # ERROR TESTS diff --git a/sdk/vision/azure-ai-vision-imageanalysis/tsp-location.yaml b/sdk/vision/azure-ai-vision-imageanalysis/tsp-location.yaml index f3498ff59bfe..ad6a1e1e28d9 100644 --- a/sdk/vision/azure-ai-vision-imageanalysis/tsp-location.yaml +++ b/sdk/vision/azure-ai-vision-imageanalysis/tsp-location.yaml @@ -1,6 +1,6 @@ additionalDirectories: [] repo: Azure/azure-rest-api-specs directory: specification/ai/ImageAnalysis -commit: 3cf7400ba3d65978bef86e9c4197a5e7f7bf5277 +commit: 3bcdb0ee47bfa6bcc79036b4c9e5fe287701f796