Clarifications on processing `expected_origins` and processing steps #224
Noting that for JS/Web compatibility, the following are all technically legal and would be handled by the WebIDL conversion (which automatically converts everything to a `USVString`):

```js
const expected_origins = [
  { toString() { return "https://foo.com"; } },
  new URL("https://foo.com"),
  "https://foo.com",
];

await navigator.identity.get({
  digital: {
    providers: [{ expected_origins, /* ... other props */ }],
  },
});
```
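To make the conversion concrete, here is a rough illustration (my own sketch, not part of the comment above) of what those three values become after a `sequence<USVString>`-style conversion, which behaves roughly like calling `String()` on each entry:

```ts
// Rough illustration (assumption): WebIDL's sequence<USVString> conversion
// stringifies each element, roughly like String(value) plus lone-surrogate
// replacement.
const converted = expected_origins.map((value) => String(value));
// => ["https://foo.com", "https://foo.com/", "https://foo.com"]
// (URL serialization adds a trailing "/"; later origin normalization would
// collapse all three to the same origin.)
```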
Not sure if we need to define the actual algorithm, but the text should be more normative and enforce those things listed by @marcoscaceres above.
If the DC API spec is going to require request validation, then perhaps the OpenID4VP spec needs to be clear about which validation rules are appropriate to apply at the transport (browser/OS) layer vs. which are strictly for a wallet to enforce? In particular, since browsers and OSes may take some time to update, we need to make sure they're not accidentally blocking the deployment of extensions or other improvements to OpenID4VP which wallets and RPs are looking to support.
On the browser side, we prefer algorithms because they are easier to read, test, and code against. English is a bit of a crappy language for specs, unfortunately, so short little statements work best for precision. Would folks be interested if we put together a pull request for this as a start? I'm new here, so not exactly sure how this works 🐣... @Sakurann or @tlodderstedt or @tplooker could you guide me? 🙈🙏
@danielfett, just so you can see what I mean, this is from the rough validator I'm working on for the DC API in WebKit to "process expected origins" (it's totally half-baked 🧑‍🍳 as I don't have a spec 🫠... but hopefully you get the idea of where I'm going with it):

```cpp
String OpenID4VPRequest::processExpectedOrigins(Document& document)
{
    Vector<String> normalizedOrigins;
    for (auto& origin : expected_origins) {
        if (origin.isEmpty())
            return "Expected origin must not be empty."_s;
        // Resolve the string against the document and validate it as a URL
        // before deriving an origin from it.
        auto url = document.completeURL(origin);
        if (!url.isValid())
            return "Expected origin is not a valid URL."_s;
        if (!url.protocolIs("https"_s))
            return "Expected origin must be an HTTPS URL."_s;
        // Keep only the serialized origin, dropping path, query, and fragment.
        normalizedOrigins.append(SecurityOrigin::create(url)->toString());
    }
    return { };
}
```

For folks interested, here's the validator I'm working on that runs when the DC API is called on the browser. It runs immediately after JavaScript calls the API. What's super cool is that each structure there is self-validating, applying whatever rules we add to the spec ✨.
The spec says:

However, the spec seems to be missing details of how `expected_origins` is to be processed. What should happen when:

- **`expected_origins` is empty.** If `expected_origins` is an empty array, should the implementation treat this as allowing any origin or blocking all origins?
- **`expected_origins` contains an invalid origin or URLs.** If `expected_origins` contains entries like `["1://2", "https://foo.com"]`, should the presence of `1://2` cause the whole list to be rejected, or should it just ignore `1://2` and drop it on the floor?
- **`expected_origins` contains a non-https origin.** If `expected_origins` includes `["gopher://foo:123"]`, should this entry be accepted or rejected?
- **`expected_origins` contains a relative or absolute path, or an empty string.** If `expected_origins` includes relative or absolute paths like `[".", "/"]` or an empty string like `[""]`, should these be resolved against a base URL or rejected?
- **`expected_origins` contains URLs with paths, query, and/or fragments.** If `expected_origins` contains URLs such as `["https://foo.com/path?query=1#fragment"]`, should these be normalized to just the origin or be rejected?
- **`expected_origins` contains duplicates.** If `expected_origins` includes `["https://foo.com", "https://foo.com"]`, should the duplicates be removed or should this result in an error?
- **`expected_origins` contains non-string entries.** Should non-string entries in `expected_origins` be converted to strings? By implication, `expected_origins` should be defined as a WebIDL `USVString` to ensure proper handling of Unicode and type conversions. If `expected_origins` includes entries like `["https://foo.com", null, "https://bar.com"]`, should non-string entries be converted to strings or cause an error? Explanation: Defining `expected_origins` as a WebIDL `USVString` ensures that all entries are properly handled as strings, accommodating Unicode characters correctly. This is important because URLs can contain Unicode characters, and using `USVString` helps to normalize these characters to their Unicode Scalar Value (USV) form, ensuring consistency and preventing issues related to encoding and type mismatches.

**Handling of Localhost and Intranet URLs**

Should `localhost` or intranet addresses be treated with special consideration, especially regarding security implications? If `expected_origins` contains `["https://localhost"]` or `["https://192.168.1.1"]`, should these be allowed given they are local resources?

**Normalization of URLs**

If `expected_origins` includes `["https://foo.com/path?query=1#fragment"]`, should it be normalized to `["https://foo.com"]` before comparison?

Recommendation: Each URL should be parsed using the WHATWG URL parser, and only the origin should be added to a set to avoid duplicates. This ensures that the origins are consistently and correctly normalized, enhancing security and preventing ambiguity.
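As a non-normative sketch of that recommendation (my own illustration; the function name `normalizeExpectedOrigins` is made up), using the WHATWG URL API available in browsers and Node:

```ts
// Sketch of the recommendation above (not normative): parse each entry with
// the WHATWG URL parser and keep only the serialized origin, deduplicated
// via a Set.
function normalizeExpectedOrigins(expectedOrigins: string[]): Set<string> {
  const origins = new Set<string>();
  for (const entry of expectedOrigins) {
    // new URL() throws a TypeError for unparsable input such as "1://2".
    const url = new URL(entry);
    // url.origin drops path, query, and fragment, e.g.
    // "https://foo.com/path?query=1#fragment" -> "https://foo.com".
    origins.add(url.origin);
  }
  return origins;
}
```

For example, `normalizeExpectedOrigins(["https://foo.com/path?query=1#fragment", "https://foo.com"])` yields a set containing just `"https://foo.com"`.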
To ensure proper handling of `expected_origins`, we recommend defining it in WebIDL as a `sequence<USVString>`.

Further, the spec is missing validation steps. Here's a start:
**Validator and Processor for `OpenID4VPRequest`**

This section defines an algorithm to validate and process the `expected_origins` array in an `OpenID4VPRequest` dictionary. The output will be a set of normalized origins, safely discarding query, fragments, paths, etc.

**Algorithm**

Input: an `OpenID4VPRequest` dictionary |request|, containing:

- `expected_origins`: a sequence of `USVString`.

Output: a set of normalized origins.

Steps:

1. Check if `expected_origins` is empty: if `expected_origins` is empty, throw a `TypeError` indicating that `expected_origins` must not be empty.
2. Let `origins_set` be an empty set.
3. For each `origin` in `expected_origins`:
   1. Parse the URL: let `url` be the result of parsing `origin` using the URL parser with `null` as the URL base. If parsing `origin` fails, throw a `TypeError` indicating an invalid URL.
   2. Check the scheme: if `url`'s scheme is not "https", throw a `TypeError` indicating an unsupported URL scheme. (Open question: should `localhost` and intranet addresses be allowed, e.g. `https://localhost` or `https://192.168.1.1`? Probably not.)
   3. Extract the origin: let `normalized_origin` be the result of running the origin serialization steps on `url`.
   4. Check for duplicates: if `origins_set` contains `normalized_origin`, continue.
   5. Add to set: append `normalized_origin` to `origins_set`.
4. Check if `origins_set` is empty: if `origins_set` is empty, throw a `TypeError` indicating that no valid origins were found.
5. Return the set: return `origins_set`.
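For illustration, here is how those steps might look in TypeScript (a non-normative sketch; the name `processExpectedOrigins` and the exact error messages are mine, and it assumes the open questions above are resolved as written, i.e. https-only with duplicates silently skipped):

```ts
// Non-normative sketch of the proposed steps above. Assumptions (mine, not
// the spec's): https-only origins, duplicates silently skipped, TypeError on
// any failure.
function processExpectedOrigins(expectedOrigins: string[]): Set<string> {
  // Step 1: expected_origins must not be empty.
  if (expectedOrigins.length === 0)
    throw new TypeError("expected_origins must not be empty.");

  // Step 2: let origins_set be an empty set.
  const originsSet = new Set<string>();

  // Step 3: process each entry.
  for (const origin of expectedOrigins) {
    // Step 3.1: parse with the URL parser and no base URL.
    let url: URL;
    try {
      url = new URL(origin);
    } catch {
      throw new TypeError(`expected_origins entry "${origin}" is not a valid URL.`);
    }

    // Step 3.2: check the scheme (localhost/intranet handling left open).
    if (url.protocol !== "https:")
      throw new TypeError(`expected_origins entry "${origin}" is not an https URL.`);

    // Steps 3.3-3.5: serialize the origin and add it to the set; Set.add()
    // already skips duplicates.
    originsSet.add(url.origin);
  }

  // Step 4: mirror the "no valid origins were found" check from the steps.
  if (originsSet.size === 0)
    throw new TypeError("No valid origins were found in expected_origins.");

  // Step 5: return the set of normalized origins.
  return originsSet;
}
```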