
Syntax is too complex #1

Open

gossi opened this issue Jul 20, 2023 · 3 comments

gossi commented Jul 20, 2023

Hey @ddamato,

thank you for taking a test-drive here. I've pioneered operations/transforms (as style-dictionary calls them) with Theemo. As I'm currently reworking Theemo to support variables, I also need to touch this. Of course I looked at your post, but I also wrote mine at the same time. We aren't that far off, so I'm aiming to suggest a combination of both formats to make the syntax a bit more readable.

Problem: Using the array for two sorts of types (primitive as value + array as function) is overloading. It ends up looking like a regex with $0 and $1.

Solution: Functional programming, more exactly: piping + currying (= parameters); see the great video from Scott Wlaschin.

The syntax would change to:

{
  "token-name": {
    "$type": "color",
    "$value": "blue",
    "$operations": [
      ["opacity", 25], // function name "opacity" with parameter "25"
      ["hue", -25], // function name "hue" rotate by -25 degree
    ]
  }
}

The input to the opacity function is $value; the result of that function is put into the hue function, and the result of that is the final value.
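For illustration, a processor could fold over this array like so (the registry and the color representation below are just stand-ins, nothing specified):

// Hypothetical processor: fold $operations over $value.
const registry = {
  opacity: (color, amount) => ({ ...color, alpha: amount / 100 }),
  hue: (color, degrees) => ({ ...color, hue: (color.hue + degrees + 360) % 360 }),
};

function applyOperations(token) {
  // The output of each function becomes the input of the next one.
  return (token.$operations ?? []).reduce((value, op) => {
    const [name, ...params] = Array.isArray(op) ? op : [op];
    return registry[name](value, ...params);
  }, token.$value);
}

applyOperations({
  $type: "color",
  $value: { hue: 240, saturation: 100, lightness: 50, alpha: 1 }, // "blue"
  $operations: [["opacity", 25], ["hue", -25]],
});
// -> { hue: 215, saturation: 100, lightness: 50, alpha: 0.25 }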

Going a bit more complex with references and other tokens as input:

{
  "opacity": {
    "$type": "number", // huh? Is that a type?
    "$value": 25
  },
  "token-name": {
    "$type": "color",
    "$value": "blue",
    "$operations": [
      ["opacity", "{opacity}"], // function name "opacity" with parameter being a reference token
    ]
  }
}
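A sketch of how such a parameter reference could be resolved before the function runs (the lookup rule here is an assumption; nested token paths are omitted for brevity):

// Assumed rule: "{name}" parameters are replaced with the $value
// of the referenced token before the function is applied.
function resolveParam(param, tokens) {
  if (typeof param === "string" && /^\{.+\}$/.test(param)) {
    const path = param.slice(1, -1);
    return tokens[path].$value; // flat lookup; real references may be nested
  }
  return param;
}

resolveParam("{opacity}", { opacity: { $type: "number", $value: 25 } });
// -> 25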

Let's say I have an opacity25 function (one that always adds 25% opacity):

{
  "token-name": {
    "$type": "color",
    "$value": "blue",
    "$operations": [
      "opacity25"
    ]
  }
}

That is, $operations is an array whose items are either a string value (referring to a function) or a tuple of [functionName, parameters]. Parameters can be a complex object:

{
  "token-name": {
    "$type": "color",
    "$value": "blue",
    "$operations": [
      ["leonardo", { ... }]
    ]
  }
}
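Normalizing both shapes could look like this (the helper name and the leonardo parameter object are made up for illustration):

// Made-up helper: normalize both shapes into [functionName, parameters].
function normalizeOperation(entry) {
  if (typeof entry === "string") {
    return [entry, []]; // bare function name, e.g. "opacity25"
  }
  const [name, ...params] = entry; // tuple, e.g. ["leonardo", { ... }]
  return [name, params];
}

normalizeOperation("opacity25");
// -> ["opacity25", []]
normalizeOperation(["leonardo", { contrast: 4.5 }]);
// -> ["leonardo", [{ contrast: 4.5 }]]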

Referring to operations by function names gives them readability, as I know what hue or opacity means, whereas I've almost never seen an algorithmic representation in JSON that was readable.


About function names:

At the moment, I consider that they need to be registered with the tools processing the tokens, the way style-dictionary does it. We cannot even assume that the tooling is always using JavaScript (and therefore the node ecosystem).

Possibly: these function names are actually forward-translatable into CSS functions, which means that when you do a dev export of tokens (i.e. even color palettes), you can create a fluent theme generator (in a sandbox, not for production).
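For illustration, such a dev export could emit CSS relative color syntax (speculative; the mapping table is an assumption and browser support for relative colors varies):

// Speculative translation of operations into CSS relative colors.
function toRelativeColor(base, operations) {
  let hue = "h";
  let alpha = "alpha";
  for (const [name, param] of operations) {
    if (name === "hue") hue = `calc(h + ${param})`;
    if (name === "opacity") alpha = `${param}%`;
  }
  return `hsl(from ${base} ${hue} s l / ${alpha})`;
}

toRelativeColor("blue", [["opacity", 25], ["hue", -25]]);
// -> "hsl(from blue calc(h + -25) s l / 25%)"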

@ddamato-godaddy

Thanks for the feedback @gossi! I mention the reason why I avoid this in the DTCG issue.

The problem I found with this approach is that it requires the spec to define what alpha means. Also, any new special keys would need to be introduced, reviewed, and agreed upon before we'd potentially see platforms support them.

In this way, the method in which opacity is computed could be different across tools. And it is also possible that a tool doesn't have opacity defined yet. Including the low-level commands that have been predefined by the JavaScript spec, with a limited few custom ones, helps reduce the responsibility of the specification. Other tooling can recreate the necessary commands right from the JavaScript spec. From here, as long as the tool follows the operations part of the spec, you can do anything!
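For illustration, a non-JavaScript tool could mirror those predefined commands with a simple whitelist mapping the dotted names onto its own implementations (the table below is an assumption, not the actual spec):

// Assumed command table: each dotted name mirrors behavior already
// defined by the JavaScript specification, so any runtime (Rust, Go,
// ...) can reimplement it without contention.
const commands = {
  "Math.max": (...args) => Math.max(...args),
  "Math.min": (...args) => Math.min(...args),
  "Number.parseFloat": (s) => Number.parseFloat(s),
};

function exec(name, args) {
  const fn = commands[name];
  if (!fn) throw new Error(`Unsupported command: ${name}`);
  return fn(...args);
}

exec("Math.max", [1, 7, 3]); // -> 7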

In reality, I don't expect folks to be continuously recreating operation sets, because I recognize how complex the syntax can get. This is why it was critical to create a shareable, plugin-like ecosystem for importing premade ones.

{
  "some-token": {
    "$type": "color",
    "$value": "#fffc00",
    "$operations": [
      ["Import.operations", "./my-ops/opacity-operation.json", "$value", 0.5]
    ]
  }
}


gossi commented Jul 20, 2023

What you are saying applies to any string being used in here, even Math.max or String.repeat (the latter of which doesn't even exist in JS, btw).

So, instead of having something named alpha, you have something named Math.max - but the problem still exists. The tools executing the tokens file need to find the implementation of that particular named function. Think of tokens files being processed by Rust or Go.

But my focus was actually on the execution order of operations. I'm not a mathematician, but this should sit on a stable foundation - in programming, the closely related discipline is functional programming, which is itself grounded in math, so there is a lot to learn from it (see the video I linked above from Scott Wlaschin), where you want to:

  1. pipe() through functions, making the output of one, the input of the next one and
  2. curry them to only have functions that take one argument

.. for which the main input is given as $value, and it can then be omitted from your array (see the sketch below).
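A small sketch of the two ideas (generic names, nothing spec-defined):

const pipe = (...fns) => (input) => fns.reduce((value, fn) => fn(value), input);
const curry = (fn, ...params) => (value) => fn(value, ...params);

// ["opacity", 25] and ["hue", -25] become single-argument functions;
// $value is the only input threaded through the pipe.
const opacity = (color, amount) => `${color} @ ${amount}% opacity`;
const hue = (color, degrees) => `${color}, hue ${degrees}deg`;

pipe(curry(opacity, 25), curry(hue, -25))("blue");
// -> "blue @ 25% opacity, hue -25deg"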

Similar in spirit are webpack, rollup, or vite plugins/config files. Though those config files are written in JS, they can import the functions directly. That sort of mapping is what needs to be done in the spec for tokens.


ddamato commented Jul 20, 2023

> What you are saying applies to any string being used in here, even Math.max or String.repeat (the latter of which doesn't even exist in JS, btw).

String.prototype.repeat()?

> So, instead of having something named alpha, you have something named Math.max - but the problem still exists. The tools executing the tokens file need to find the implementation of that particular named function. Think of tokens files being processed by Rust or Go.

Rust: a.iter().max().unwrap()
Go: slices.Max(a)

Yes, a conscious decision, because the result of Math.max can be replicated without contention, while alpha is bound to have opinions about its method. This is the reason for the low-level commands: so a person can create their own alpha without relying on the tool to have it.

> But my focus was actually on the execution order of operations. I'm not a mathematician, but this should sit on a stable foundation - in programming, the closely related discipline is functional programming, which is itself grounded in math, so there is a lot to learn from

I think this is a fairly stable foundation? Each operation is executed in the order provided. Results from each execution are available in memory to be used in later executions. From the README.md, it's analogous to the following:

const $0 = 42;
const $1 = numbers.seven;
const $2 = add($0, $1); // also exposed as $value

Which is fairly foundational to programming, I'd say.
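For illustration, a tiny interpreter with that shape might look like this (the function table and the $-reference rule below are stand-ins, not the actual implementation):

// Each result is stored under $<index>; later operations may
// reference earlier ones, and the last result becomes $value.
const fns = { identity: (x) => x, add: (a, b) => a + b };

function evaluate(operations) {
  const env = {};
  operations.forEach(([name, ...args], i) => {
    const resolved = args.map((a) =>
      typeof a === "string" && /^\$\d+$/.test(a) ? env[a] : a
    );
    env[`$${i}`] = fns[name](...resolved);
  });
  env.$value = env[`$${operations.length - 1}`];
  return env;
}

evaluate([
  ["identity", 42],    // $0 = 42
  ["identity", 7],     // $1 = 7 (stand-in for numbers.seven)
  ["add", "$0", "$1"], // $2 = 49, also exposed as $value
]);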

> 1. pipe() through functions, making the output of one, the input of the next one and
> 2. curry them to only have functions that take one argument

1. I'm avoiding the pipe() so that outputs are not complex and don't need to be traversed (parsing individual digits from a hexcode, for example).
2. This seems... even more complex than what I'm proposing? I think I would rather run Math.max once with all of the numbers than run it several times, adding a new argument each time.

> Though those config files are written in JS, they can import the functions directly. That sort of mapping is what needs to be done in the spec for tokens.

Or it doesn't need to? I don't see this as a requirement; in fact, I see it as a detriment. Like you mentioned, it's possible that a token processor could be written in another language, and that process would also need to know the implementation of alpha (and any other functions as they are introduced over time).


Ultimately, I'm afraid we'll need to agree to disagree about our expectations of responsibility for processing these transformations. In my eyes, I don't want to wait for a tool (e.g., Figma) to ship its definition of what alpha means, only for it not to meet expectations, because the DTCG cannot reasonably define all of these possibilities itself without getting caught up in many opinions. Nor do I want to recreate alpha across processing environments. Instead, I want Figma (or any tool) to process a set of operations in the same way across all tools, without waiting for spec author alignment.
