
RFC: Merge hathora.yml and server/impl.ts #165

Open
CGamesPlay opened this issue Feb 25, 2022 · 4 comments

@CGamesPlay

From a private email:

The current hathora.yml DSL is geared more towards the client than the server. It defines the data types that the client consumes and the server produces, and it also defines the RPC methods.

There are some advantages to keeping the DSL decoupled from the server implementation. The first is that you can model your internal server data however you want. This lets you do things like use libraries for the internal state (see chess for example), which would be harder if you had to define the internal state in a DSL. The other advantage is privacy: see uno or poker for example, where the server knows about everyone's hands and the deck, but each client only knows about their own hand.

I wanted to provide a proof-of-concept for a pure-TypeScript DSL that supports all of the features required of hathora.yml: it is usable for code generation, it lets clients stay agnostic about the implementation details of the server (e.g. a closed-source server), it works with libraries (because it's TypeScript), and it preserves privacy (via the getUserState function).

My proof-of-concept lives in this gist. I implemented the Rock-Paper-Scissors example to demonstrate. The net result is that I output a JSON blob that matches the loaded HathoraConfig, and there's also a method which returns a compatible Impl class. Some notes about the proof of concept (a rough sketch follows the list):

  • We leverage Zod to provide introspectable types. We don't actually validate any objects, although the option is available.
  • Due to the structure of declaring methods next to the Zod schema for their arguments, we can actually infer the types of all method arguments (don't have to type them).
  • Line counts are almost identical if you exclude import lines; if you include them, the proposal is shorter by roughly that many lines.
  • The proposal requires use of Zod schemas, which increases the learning curve for those who don't already know it. On the other hand, it does not require YML, which has its own set of pitfalls. Considering that the proposal provides IDE autocompletion for Zod schemas and for the Engine methods, I suspect this nets out as a win.
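
To make the shape of the proposal concrete, here is a minimal sketch in the spirit of the gist. The helper names and the exact Rock-Paper-Scissors fields below are illustrative placeholders, not the gist's actual API:

```ts
import { z } from "zod";

// Client-visible state for Rock-Paper-Scissors. Because the schema is an
// introspectable runtime value, a generator can translate it into the type
// definitions that hathora.yml declares today.
const UserState = z.object({
  myMove: z.enum(["ROCK", "PAPER", "SCISSORS"]).optional(),
  opponentMoved: z.boolean(),
});
type UserState = z.infer<typeof UserState>;

// Internal server state: knows both players' moves; never sent to clients.
type InternalState = { moves: Map<string, "ROCK" | "PAPER" | "SCISSORS"> };

// Each method declares its argument schema next to its implementation, so
// the argument type is derived from the schema instead of written twice.
const SubmitMoveArgs = z.object({ move: z.enum(["ROCK", "PAPER", "SCISSORS"]) });

const methods = {
  submitMove(state: InternalState, userId: string, args: z.infer<typeof SubmitMoveArgs>) {
    state.moves.set(userId, args.move);
  },
};

// Privacy: project the full internal state down to one caller's view.
function getUserState(state: InternalState, userId: string): UserState {
  return {
    myMove: state.moves.get(userId),
    opponentMoved: Array.from(state.moves.keys()).some((id) => id !== userId),
  };
}
```

Because the Zod schemas are introspectable at runtime, the same definitions can be serialized into the HathoraConfig-compatible JSON blob mentioned above.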

Would there be interest in adopting something like this as the preferred way of specifying the client interface in Hathora?

@hpx7
Member

hpx7 commented Feb 26, 2022

Thanks for the RFC @CGamesPlay!

After thinking about it some more, these are my current thoughts:

One design goal of Hathora is to achieve an effective decoupling of client and server. Let's say I'm a Swift developer specializing in adding iOS frontends for existing Hathora projects. I would want a standardized way of introspecting Hathora APIs. I don't care whether your backend is in Go or Haskell or TypeScript -- I just want to read your API contract, fire up the prototype UI to play around and interact with your backend logic, and get started on my Swift client. For me, this means that the API definition shouldn't live in the server/impl.go/hs/ts file (at least not only there).

I think the valid point you bring up is that defining the API today in the hathora.yml file is not ergonomic. For one, you don't get any IDE assistance while typing, since the IDE is not aware of the HathoraConfig spec. Additionally, the yml file format has pitfalls, and people may want to use their language of choice to define the API (and perhaps in a way that colocates the method implementations in the same file).

I think there's a way we can get the best of both worlds. I would propose treating the HathoraConfig as the low level API representation and thinking of Hathora as tooling which (a) produces this config and (b) consumes this config.

On the producing front: currently the only way to produce the HathoraConfig is to define a yml representation of it. I think we should expand this by allowing multiple ways to produce the HathoraConfig:

  1. Allow using a point-and-click UI which guides the user through constructing the HathoraConfig in a valid way
  2. Allow writing the HathoraConfig in your configuration DSL of choice (json, yml, toml, etc.)
  3. Allow generating the HathoraConfig from your server/impl file. I imagine some interaction with the CLI: maybe when you run hathora dev --genConfig, it knows to read your server/impl to get the HathoraConfig and overwrites the hathora.yml (or whatever) at the root. A rough sketch of this follows the list.
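
For option 3, the generation step could look something like the following under the hood. The function names, config fields, and file layout here are assumptions for illustration, not Hathora's actual internals:

```ts
import { writeFileSync } from "fs";
import { z } from "zod";

// Assumed minimal shape of the emitted config; the real HathoraConfig has
// a richer type grammar.
interface GeneratedConfig {
  types: Record<string, unknown>;
  methods: Record<string, unknown>;
}

// Placeholder translation: record only the schema's class name (e.g.
// "ZodObject"). A real generator would walk the schema recursively and
// emit the full HathoraConfig type definitions.
function zodToConfigType(schema: z.ZodTypeAny): unknown {
  return { kind: schema.constructor.name };
}

// Sketch of what a `hathora dev --genConfig` step could do: take the game
// definition exported by server/impl, serialize its introspectable schemas,
// and overwrite the config file at the project root.
function genConfig(definition: {
  userState: z.ZodTypeAny;
  methods: Record<string, { args: z.ZodTypeAny }>;
}): void {
  const config: GeneratedConfig = {
    types: { UserState: zodToConfigType(definition.userState) },
    methods: Object.fromEntries(
      Object.entries(definition.methods).map(([name, m]) => [name, zodToConfigType(m.args)])
    ),
  };
  writeFileSync("hathora.json", JSON.stringify(config, null, 2));
}
```

The key point is only that hathora.{json,yml,toml} remains the canonical artifact at the root; what changes is which tool produces it.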

On the consuming front, the Hathora codegen engine is a consumer of HathoraConfig and it will continue operating in the same way. There's also the question of how humans are supposed to consume HathoraConfig. Do they just read the hathora.{json,yml,toml} as text? Do they consume it in some kind of UI? Another proposal is to add language server support for the hathora.* files so that the text editor can be more helpful when reading/writing this file.

The advantage of this approach is that it achieves a level of standardization across Hathora repos, regardless of backend/frontend implementation.

Let me know your thoughts.

@CGamesPlay
Author

CGamesPlay commented Feb 26, 2022

I think that option 3 actually hits at what makes sense for every API developer: my server is the source of truth, but we need a good interchange format to make a usable API. When I'm building a REST API, this might be an OpenAPI specification (Swagger). For GraphQL, you use introspection to create a schema from a running server. For gRPC, maybe it's a set of .proto files. For Hathora it's HathoraConfig.

But I don't actually want to write my own OpenAPI specification, GraphQL schema file, or HathoraConfig[1]: I want to write my server and have my tooling do all of the translation work. That's precisely what you're describing with option 3, and I think it's the ideal workflow.

Once you have a "game schema" (aka HathoraConfig), a code generator for any target can work from it. It's the common ground between all of the different languages and tools that Hathora aims to support.

I think the best model to copy is GraphQL: I write a server which presents some API. The server can generate a schema describing the API it presents (the GraphQL schema). GraphQL schemas can be saved as text files on disk and parsed by every GraphQL tool; they have structured documentation embedded; and code generators can use a schema (on disk or read from a live server) to generate whatever kind of code they like. In this analogy, Hathora's Prototype UI is GraphiQL.
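
To illustrate the consuming side of that analogy, here's a sketch of a generator working purely from the on-disk artifact. The config file name, its fields, and the emitted stub shape are all assumptions:

```ts
import { readFileSync } from "fs";

// Assumed minimal config shape; the real HathoraConfig has more fields.
interface Config {
  methods: Record<string, unknown>;
}

// A client generator (Swift, Kotlin, TypeScript, ...) never touches the
// server implementation: it reads only the schema artifact from disk (or
// from a running server), the way GraphQL tooling reads a schema file.
function generateClientStubs(configPath: string): string {
  const config: Config = JSON.parse(readFileSync(configPath, "utf8"));
  const methods = Object.keys(config.methods)
    .map((name) => `  ${name}(args: unknown): Promise<void>;`)
    .join("\n");
  return `interface GameClient {\n${methods}\n}`;
}

console.log(generateClientStubs("hathora.json"));
```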

Footnotes

  1. Generally, I do actually want to write my own .proto files, but that's because the precise serialization representation is a major goal of the format... which is not the case with my game.

@hpx7
Member

hpx7 commented Feb 26, 2022

Right, but you're not arguing against (1) and (2) also being valid options, are you?

I think all modes of producing HathoraConfig can be supported, and it's up to the user which style they prefer to use.

@CGamesPlay
Author

No, of course not. A HathoraConfig is a HathoraConfig whether it comes from a file on disk or is generated live by code. I just think the workflow should support generating it from server code.
