Transaction model

To support undo and concurrent editing, we update our model following a set of "transactions".

Definitions

The client is the JS code running on the user's browser.

The server is our cluster of web servers. They communicate with the datahorse, another cluster. We call it a datahorse to distinguish it from the database, which is the actual persistent storage layer, currently PostgreSQL.

The model is all the data in a single document set. It is too big to be copied between the database and the server, so only the database can see the complete model. But the server, being a crafty proxy, can query for any parts of the complete model that it needs, whenever it wants, so as far as the client is concerned, the server model is complete. The client model is at the lower end of the food chain.

The model contains several different types of objects: Documents, Tags, and Visualization Objects. Documents and tags are built-in, while visualization objects are created and processed by a visualization plug-in, running on the worker, the datahorse, and the client.

Documents

We model a document object as something with:

  • a method to retrieve the text
  • a method to display in browser
  • metadata, which is an arbitrary set of key:value pairs

Maybe the metadata also allows recursive structures à la JSON; that would be nice. Then it would be easy to model applied tags as an array of tag IDs in the document metadata, which would make tags not a special case from the point of view of the transaction system.
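
For concreteness, here is a minimal TypeScript sketch of such a document object. The field and method names (`getText`, `displayUrl`, `metadata`) are illustrative assumptions, not an existing API, and the recursive metadata type reflects the "maybe" above.

```typescript
// Hypothetical shape of a document in the model; field names are illustrative.
type MetadataValue =
  | string
  | number
  | boolean
  | MetadataValue[]
  | { [key: string]: MetadataValue };

interface Document {
  id: number;
  getText(): Promise<string>;              // method to retrieve the text
  displayUrl: string;                      // used to display the document in the browser
  metadata: Record<string, MetadataValue>; // arbitrary set of key:value pairs
}

// If metadata allows JSON-like recursive values, applied tags could live there:
//   doc.metadata["tags"] = [3, 17, 42]    // array of tag IDs, no special case needed
```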

Tags

A tag is a top level object, currently containing a tag name and color. Tags can only be applied to documents. The tag list on a document is distinct from the tag object itself. Ideally, the tag list on a document is modelled using the general document metadata system.
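
The tag object itself might be sketched like this; the field names are assumptions.

```typescript
// Hypothetical top-level tag object. The per-document tag list is a separate
// thing; ideally it is just an array of tag IDs in document metadata, as above.
interface Tag {
  id: number;
  name: string;
  color: string;   // e.g. "#a3c4f0"
}
```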

Visualization objects

A visualization object is generated from a document set by a visualization plugin. I imagine it as something like a case class or JSON object, with a unique ID.

For the current tree view, nodes would be the only visualization object (documents are built in to the system, tags are metadata). For a scatterplot view, individual dots in the plot might be visualization objects, each with document ID, position, size, color, first visible zoom level, etc.
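
Two illustrative visualization objects, one for each example view; all field names here are hypothetical.

```typescript
// Tree view: nodes are the only visualization object.
interface TreeNode {
  id: number;                // unique visobj ID
  parentId: number | null;   // null for the root
  description: string;
  documentIds: number[];     // documents included in this node
}

// Scatterplot view: one dot per document.
interface ScatterDot {
  id: number;                // unique visobj ID
  documentId: number;
  x: number;
  y: number;
  size: number;
  color: string;
  minZoomLevel: number;      // first zoom level at which the dot is visible
}
```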

Visualization plugins

There are three parts to a visualization plugin. These provide the following APIs:

Worker:

  • CreateViz: document set => visualization objects in the DB.

Datahorse:

  • applyDelta: compressed delta => changed objects and expanded delta in DB
  • changedObjects: version number, object list, view => returns set of objects that have been changed in the DB since version, or created within the given view, with newest values for each object

Client:

  • Manages UI drawing and interaction
  • generates compressed deltas when the user makes a change
  • generates expanded deltas from compressed deltas so that changes can be applied (predictively) on the client side
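
Taken together, the three plugin surfaces might look roughly like this in TypeScript. This is a sketch under the assumption that deltas, scopes, and views are plain JSON-like values; none of these names are the real API.

```typescript
// Illustrative plugin interfaces; all names and signatures are assumptions.
type CompressedDelta = unknown;
type ExpandedDelta = unknown;
type View = unknown;                       // a visualization scope
interface VisObject { id: number; }

interface WorkerPlugin {
  // CreateViz: document set => visualization objects in the DB
  createViz(documentSetId: number): Promise<void>;
}

interface DatahorsePlugin {
  // applyDelta: compressed delta => changed objects and expanded delta in DB
  applyDelta(delta: CompressedDelta): Promise<{ changed: VisObject[]; expanded: ExpandedDelta }>;
  // changedObjects: objects changed since `version`, or created within `view`,
  // with the newest values for each object
  changedObjects(version: number, objectIds: number[], view: View): Promise<VisObject[]>;
}

interface ClientPlugin {
  // UI drawing and interaction live inside the plugin; these are the model hooks.
  compressDelta(uiChange: unknown): CompressedDelta;
  expandDelta(delta: CompressedDelta): ExpandedDelta;   // enables predictive updates
}
```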

Transactions

All changes are executed in atomic operations called "transactions." The server maintains the definitive current server version and orders the transactions received from all clients editing the same document set.

A transaction is expressed as a delta, a change with respect to the current state. Deltas are initially generated as compressed deltas, which allow the changed objects to be listed in shorthand as compressed scopes (see below). A compressed delta can be expanded with respect to a current version, on either the client or a server. An expanded delta is a complete list of all objects changed, and their changes. It contains enough information to be inverted, that is, undone. It does not reference any objects that are not changed.

Compressed scopes

A compressed scope is a way to refer to a list of objects (documents, tags, visobjs). A compressed scope is a list of "scope elements", each of which is one of:

  • a literal list of object IDs
  • an object query, which selects objects based on object type, a set of fields to match exactly, and the values those fields must match
  • visualization scopes: these have arbitrary payload and are interpreted by the visualization plugin

For the tree viz, a compressed scope is akin to the current selection object, which is a set of nodes, tags, and docs. There could also be a visualization-specific scope for the tree which selects a subtree rooted at a node, down to N levels. A scatterplot viz could use selection rectangle coordinates as a viz-scope query.

Conceptually, a compressed scope always expands to a set of objects.
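
A possible TypeScript shape for a compressed scope, with the three element kinds above; the discriminant names and the example payload are assumptions.

```typescript
// A compressed scope is a list of scope elements.
type ScopeElement =
  | { kind: "ids"; objectIds: number[] }          // literal list of object IDs
  | { kind: "query"; objectType: string;          // exact-match query on given fields
      match: Record<string, string | number> }
  | { kind: "viz"; payload: unknown };            // interpreted by the visualization plugin

type CompressedScope = ScopeElement[];

// Example: "the subtree rooted at node 42, two levels deep, plus tag 7".
const selection: CompressedScope = [
  { kind: "viz", payload: { subtreeRoot: 42, depth: 2 } },
  { kind: "ids", objectIds: [7] },
];
```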

View

A view is an object that defines "where the user is looking" in a visualization. The precise definition is that a view is sufficient to determine whether a newly created visualization object would be visible to a client with that view. Views are represented as visualization scopes.

Each visualization has its own current view.

For the tree, the view is the set of open nodes. For the scatter plot, the view is the bounding box and zoom level of the region of the scatterplot currently displayed.

Because a view is a visualization scope, the server side plugin can answer the question, is a given visualization object visible in this view?
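
Two illustrative view payloads, and the one question the server-side plugin has to answer about them; the payload shapes and the function name are hypothetical.

```typescript
// Views are visualization scopes; the payloads below are examples, not a schema.
const treeView = { openNodeIds: [1, 4, 9] };                    // set of open nodes
const scatterView = { bounds: [0, 0, 100, 80], zoomLevel: 3 };  // visible region + zoom

// "Is this newly created visualization object visible in this view?"
declare function isInView(view: unknown, newObject: { id: number }): boolean;
```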

Loading objects

The client can ask the server to send any set of objects over the wire, specifying the objects via a compressed scope.

Client transaction handling

The client always has a current local version. It is based on the last server version that the client knew of, the base version. The client maintains several ordered lists of transactions:

  • outbox: locally applied changes yet to be sent to the server
  • diffbox: locally applied changes that have been sent to the server but not yet acknowledged as applied on the server side. If the diffbox is not empty, the client version is divergent.
  • delta archive: an ordered list of all expanded deltas applied to the local version. It may overlap with the outbox, the diffbox, and the undo stack.
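
A sketch of this client-side bookkeeping as a TypeScript structure; the names are assumptions, and deltas are treated as opaque values here.

```typescript
type CompressedDelta = { guid: string; payload: unknown };
type ExpandedDelta = { guid: string; changes: unknown[] };

interface ClientTransactionState {
  baseVersion: number;             // last server version the client knew of
  outbox: CompressedDelta[];       // locally applied, not yet sent to the server
  diffbox: CompressedDelta[];      // sent, but not yet acknowledged as applied server-side
  deltaArchive: ExpandedDelta[];   // every expanded delta applied to the local version
}

// The client version is divergent exactly when the diffbox is non-empty.
```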

Client transaction generation

The user can make changes to the model either through built-in UI (e.g. the document list) or the visualization. When this happens, the client

  1. Creates a compressed delta, and assigns it a GUID (which must be globally unique).
  2. Expands the delta against the client version, which produces a list of objects and their changes.
  3. Applies the expanded delta to the client version as best it can (a predictive update)
  4. Adds the compressed delta to the outbox
  5. Adds the expanded delta to the delta archive.

Steps 1, 2, and 3 may involve calling a client-side visualization plugin, if the change was made through the visualization UI and/or references visualization objects.
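
The five steps might look like this in TypeScript, with `newGuid`, `expandAgainstLocalModel`, and `applyPredictively` as hypothetical helpers (the latter two may call the client-side visualization plugin).

```typescript
type CompressedDelta = { guid: string; payload: unknown };
type ExpandedDelta = { guid: string; changes: unknown[] };

// Abbreviated from the client state sketch above.
interface ClientTransactionState {
  outbox: CompressedDelta[];
  deltaArchive: ExpandedDelta[];
}

declare function newGuid(): string;
declare function expandAgainstLocalModel(delta: CompressedDelta): ExpandedDelta;
declare function applyPredictively(delta: ExpandedDelta): void;

function performLocalTransaction(state: ClientTransactionState, uiChange: unknown): void {
  const compressed = { guid: newGuid(), payload: uiChange };  // 1. compressed delta + GUID
  const expanded = expandAgainstLocalModel(compressed);       // 2. expand against client version
  applyPredictively(expanded);                                // 3. predictive update
  state.outbox.push(compressed);                              // 4. queue for the server
  state.deltaArchive.push(expanded);                          // 5. archive
}
```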

Predictive updates

The client produces a predictive update when it expands a delta locally. The client cannot always completely update the local state after a transaction, because the transaction may alter the state in ways the client cannot compute.

Suppose Alice adds a tag T to all documents in a node N. This requires the following updates to the client state:

  1. Update the count of documents with tag T on all parents and children of N. This can be computed locally: we knew the tag count for T on N before tagging, and after tagging it is simply the number of docs in N, so we can add the difference to all loaded parents of N.
  2. Update the count of documents tagged with T. Again, we can use the difference between the old and new counts.
  3. Tag indicators on individual documents. Because each document on the client currently has its complete path (the set of nodes that include the doc) associated with it, we can update these locally.
  4. Document list for T. The client caches the first ten documents with tag T, ordered by description. This list can change if more documents are tagged, but the client does not have the full list of documents so it cannot make this change.

So, Alice's client can update 1-3 immediately, but must contact the server to update 4. The update of items 1-3 is called "predictive update," and it must be designed to match exactly what the server will produce.
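
A minimal sketch of the count arithmetic in items 1 and 2, assuming a cached tree where each child's documents are a subset of its parent's; all names here are hypothetical.

```typescript
interface CachedNode {
  docCount: number;                     // number of documents in this node
  tagCounts: Map<number, number>;       // tagId -> count of docs carrying that tag
  parent: CachedNode | null;
  children: CachedNode[];
}

// Predictively apply "add tag T to all documents in node N".
function predictTagAll(n: CachedNode, tagId: number): void {
  const before = n.tagCounts.get(tagId) ?? 0;
  const diff = n.docCount - before;     // after tagging, N's count for T equals its docCount

  // Children: every child's docs are inside N, so each child's count becomes its docCount.
  // (Deeper descendants would be handled the same way.)
  for (const child of n.children) child.tagCounts.set(tagId, child.docCount);

  // N and its loaded ancestors: add the difference between the old and new counts.
  for (let p: CachedNode | null = n; p !== null; p = p.parent) {
    p.tagCounts.set(tagId, (p.tagCounts.get(tagId) ?? 0) + diff);
  }
}
```

Item 2 uses the same difference for the document-set-wide count of T; item 4 is the part that still needs the server.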

When the client expands a delta, it produces a predictive update. In some cases the client expanded delta will be "complete" in the sense that it updates all necessary state, but in others, such as this case, we will need to get a later definitive update from the server.

A correct predictive update method will produce exactly the same values that the server eventually broadcasts. This is a great integration test.

Sending transactions to the server

Periodically (but with high priority) the client sends all compressed deltas in the outbox to the server, in order. (They may be batched into a single POST, but the order must always be preserved.)

Server transaction handling

When the server receives a delta from a client, it

  1. Checks that the user has access to the given document set
  2. Validates the delta for syntax and semantics (not checking anything on the database).
  3. Forwards the still compressed delta to the database worker
  4. Returns OK

After the server acknowledges receipt of each delta, that delta is moved from the outbox to the diffbox.

The datahorse is (potentially) a different process on a different machine. The point is to decouple server response from database execution, and provide a clear synchronization point that orders the transactions from multiple clients. When the datahorse receives a transaction from a server, it:

  1. Queues the transaction, if it's busy. This produces the definitive ordering of the transactions. There is one queue per document set.
  2. Translates the compressed delta into instructions for the database (e.g. SQL) to generate and apply an expanded delta
  3. Submits these instructions to the database and ensures they execute.
  4. Increments the server version by one

Steps 2 and 3 may involve calling a visualization plugin, if the delta references visualization objects. The visualization plugin translates the compressed delta into database manipulations on visualization objects, sufficient to apply and store an expanded delta.
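
A sketch of the datahorse's per-document-set loop, with the database and plugin calls stubbed out; all names are hypothetical.

```typescript
type CompressedDelta = { guid: string; payload: unknown };

declare function translateToDbInstructions(delta: CompressedDelta): string[];  // may call the viz plugin
declare function executeInDatabase(statements: string[]): Promise<void>;

class DocumentSetQueue {
  private pending: CompressedDelta[] = [];
  private busy = false;
  version = 0;                          // definitive server version for this document set

  async submit(delta: CompressedDelta): Promise<void> {
    this.pending.push(delta);           // 1. queue: this fixes the definitive ordering
    if (this.busy) return;              // someone is already draining the queue
    this.busy = true;
    while (this.pending.length > 0) {
      const next = this.pending.shift()!;
      const sql = translateToDbInstructions(next);   // 2. compressed delta -> DB instructions
      await executeInDatabase(sql);                  // 3. apply and store the expanded delta
      this.version += 1;                             // 4. bump the server version
    }
    this.busy = false;
  }
}
```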

Updates

Periodically, the client polls the server to check for updates from other clients, and to complete updates of the local state which cannot be computed locally. This is the most complicated step in transaction handling, and is executed as follows:

  1. The client sends the server a complete list of all locally cached objects, plus the current view for all visualizations, plus the number of the client's base version (this is the last server version seen before divergence.)
  2. The server sends a list of expanded deltas representing the changes made by all clients (including the calling client) since that version. Each expanded delta includes the (changed) contents of all objects which were listed in step 1, plus any newly created (or moved) visualization objects that are in the client view.
  3. If the first (oldest) delta sent by the server is not the same as the first (oldest) delta in the diffbox, the client rewinds to its last server version by inverting all expanded deltas in its diffbox and applying them in reverse order. This returns the divergent version to the server version.
  4. The client then applies each delta from the server to the local state, in version number order. The base version number is set to the server version number of each applied delta.
  5. During this process, it compares the GUID of each delta with those in its diffbox, and removes any matching deltas from the diffbox.
  6. Each applied expanded delta is added to the delta archive.

The updates retrieved from the server during this process are known as "definitive" updates, as opposed to client-only "predictive" updates.
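
Steps 3 through 6 rendered as TypeScript, assuming (as a simplification) that the diffbox keeps each delta's expanded form so it can be inverted; the helper names are hypothetical.

```typescript
interface ExpandedDelta { guid: string; serverVersion: number; changes: unknown[]; }

interface UpdateState {
  baseVersion: number;
  diffbox: ExpandedDelta[];        // sent but unacknowledged, oldest first
  deltaArchive: ExpandedDelta[];
}

declare function applyInverse(delta: ExpandedDelta): void;      // undo a predictive update
declare function applyToLocalState(delta: ExpandedDelta): void;

function applyServerUpdate(state: UpdateState, serverDeltas: ExpandedDelta[]): void {
  // 3. Rewind if the server's ordering diverges from what the client applied predictively.
  if (state.diffbox.length > 0 && serverDeltas.length > 0 &&
      serverDeltas[0].guid !== state.diffbox[0].guid) {
    for (const d of [...state.diffbox].reverse()) applyInverse(d);
  }
  for (const d of serverDeltas) {
    applyToLocalState(d);                                   // 4. apply in version order
    state.baseVersion = d.serverVersion;
    const i = state.diffbox.findIndex(x => x.guid === d.guid);
    if (i >= 0) state.diffbox.splice(i, 1);                 // 5. acknowledge our own deltas
    state.deltaArchive.push(d);                             // 6. archive
  }
}
```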

How the server determines which objects to send in an update

The server forwards the request to the datahorse. This includes the set of objects {C} that the client sent, plus a set of views {V} which determine where each visualization is looking, plus the client's base version (the last server version the client had).

The datahorse walks through each expanded delta in the database, for all transactions applied after the client's base version, in order. For each transaction it intersects {C} with the set of objects in the expanded delta, and accumulates the result into a final set of updated objects {U}.

At each transaction which involves the creation of visualization objects, the datahorse calls the visualization plugin for each view in {V} to ask it whether any of the newly created objects are in view. If so, the objects in view are added to {U}.

Finally, the datahorse returns the current (latest) value of each object in {U} to the server, for forwarding to the client.
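
A sketch of that walk over the stored expanded deltas; the field and function names are assumptions.

```typescript
interface StoredDelta {
  version: number;
  changedObjectIds: number[];
  createdVizObjectIds: number[];
}

function objectsToSend(
  deltasSinceBase: StoredDelta[],       // expanded deltas applied after the client's base version, in order
  clientObjects: Set<number>,           // {C}: objects the client has cached
  views: unknown[],                     // {V}: one view per visualization
  isInView: (view: unknown, objectId: number) => boolean,   // answered by the viz plugin
): Set<number> {
  const updated = new Set<number>();    // {U}
  for (const delta of deltasSinceBase) {
    for (const id of delta.changedObjectIds) {
      if (clientObjects.has(id)) updated.add(id);            // intersect {C} with this delta
    }
    for (const id of delta.createdVizObjectIds) {
      if (views.some(v => isInView(v, id))) updated.add(id); // newly created objects now in view
    }
  }
  return updated;   // the datahorse then returns the latest value of each object in {U}
}
```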

Undo

For undo, the client needs a bit more machinery:

  • An undo list, which stores only the user's own expanded deltas. (This mimics the outbox more than it does the delta archive.)
  • An undo list pointer, which helps us manage "redo" as well.

When the client performs a transaction, it:

  1. Truncates the undo list, removing everything after the undo list pointer
  2. Appends the expanded delta and its GUID to the undo list
  3. Increments the undo list pointer

When the user hits "undo", the client:

  1. Creates a compressed delta for the "undo" action, which consists of the GUID of the delta the undo list pointer points to
  2. Decrements the undo pointer
  3. Processes this delta using the normal steps in "Client transaction generation" above.

The actual user-visible undo on the client side is handled by step 3 of client transaction processing, the predictive update. However, we still trigger an update cycle on the server, because (for example) the view may have changed, meaning that objects changed by the undo are newly in scope and we need their values from the server.

Note that for the server to interpret this compressed delta which represents undo, the server must already have received the original delta that is being undone. This is guaranteed by ensuring that the outbox preserves order of submitted transactions.

When the user hits "redo", the client:

  1. Returns (there is nothing to redo) if the undo list pointer points to the end of the list
  2. Increments the undo pointer
  3. Creates a compressed delta for the "redo" action, which consists of the GUID of the delta the undo list pointer points to
  4. Processes this delta using the normal steps in "Client transaction generation" above.

Aside from this, "undo" and "redo" transactions are identical to any other. The compressed delta consists solely of a GUID, and both the client and the server can expand it, based on their delta archives.
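
One way to model the undo list and pointer in TypeScript; the pointer here counts how many deltas are currently applied, which matches the behaviour described above even if the bookkeeping differs in detail. All names are hypothetical.

```typescript
interface UndoState {
  guids: string[];    // GUIDs of the user's own expanded deltas, oldest first
  pointer: number;    // number of deltas currently applied (0..guids.length)
}

// Called after every normal client transaction.
function recordTransaction(u: UndoState, guid: string): void {
  u.guids.length = u.pointer;    // 1. truncate everything after the pointer
  u.guids.push(guid);            // 2. append the new delta's GUID
  u.pointer += 1;                // 3. advance
}

// Returns the compressed "undo" delta (just a GUID), or null if there is nothing to undo.
function undo(u: UndoState): { undoGuid: string } | null {
  if (u.pointer === 0) return null;
  u.pointer -= 1;
  return { undoGuid: u.guids[u.pointer] };
}

// Returns the compressed "redo" delta, or null if the pointer is at the end of the list.
function redo(u: UndoState): { redoGuid: string } | null {
  if (u.pointer === u.guids.length) return null;
  const guid = u.guids[u.pointer];
  u.pointer += 1;
  return { redoGuid: guid };
}
```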

Locking

The client and server need to implement several locks to ensure consistency in this asynchronous environment.

Server locks:

  1. The datahorse is responsible for ordering the transactions on each document set, and executing them one at a time as essentially atomic operations. Reads from the database during transaction execution must return the previous (last complete) version.

  2. When a server begins an update cycle, as requested by the client, it must be relative to the current version. No part of any subsequent versions can be read during the computation of changed or newly in-view objects, or the retrieval of the values of these objects.

This may have implications for how the database is structured, since it must be possible to retrieve multiple object values at different versions simultaneously, for different clients which have started update cycles at different server versions. Alternatively, client update cycles can be serialized with transactions, so that only one transaction OR update cycle can be processing at any one time, and every update cycle starts at the latest version. This is not as horrible as it sounds, as each document set can be locked independently.
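
A sketch of the "serialize transactions and update cycles" alternative: one promise chain per document set acts as an independent lock, so a given document set processes only one transaction or one update cycle at a time. The names are hypothetical.

```typescript
// One lock (promise chain) per document set; sets are locked independently.
const locks = new Map<number, Promise<void>>();

function withDocumentSetLock<T>(documentSetId: number, work: () => Promise<T>): Promise<T> {
  const previous = locks.get(documentSetId) ?? Promise.resolve();
  const result = previous.then(work);
  // Keep the chain alive even if `work` fails, so the next caller still runs.
  locks.set(documentSetId, result.then(() => undefined, () => undefined));
  return result;
}

// Usage: both deltas and client update cycles go through the same lock, so an
// update cycle always starts at the latest version for that document set.
//   withDocumentSetLock(42, () => applyTransaction(delta));
//   withDocumentSetLock(42, () => computeUpdateForClient(request));
```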

Client locks:

During the update cycle, after the client begins sending its list of cached objects to the server, the client:

  1. cannot request the value of additional objects from the server, because these values may come from a different version than the one the client is about to be updated to
  2. cannot make changes to any client-side objects (e.g. predictive updates), because these changes may conflict with the rollback/update cycle
  3. can only have one active update cycle at any time.

It would be possible to remove client lock 1 if the client is aware of the latest server version, the version that the update will bring the client to, and asks the server for object values at that version. This implies that the client API also takes a version number, and again, that the server is capable of tracking multiple client versions.

Client lock 2 may be much more difficult to remove. It would involve duplicating the current state into two states A and B. Then A would be rolled back/updated while the user sees B and all new predictive updates are applied to B, and also queued as expanded deltas in a box C. When the update is finished, the contents of C can be applied to the updated A, which becomes the new master state (B is discarded.)

Both of these plans still require client lock 3.
