What streaming means in our case

By "streaming" we mean:
- the most relevant objects to the user's task are loaded first,
- the loading process can be interrupted at any time,
- objects remain in memory and are only deleted when the model is destroyed or loading is aborted.
A use case would be:
1. The user opens a BCF issue.
2. The viewer sets the camera position to the viewpoint in the BCF issue.
3. The viewer immediately loads the objects that are visible from that viewpoint.
4. Once those objects are visible, the viewer continues to load the other objects that are not visible from that viewpoint.
5. At any time, the user can abort loading, perhaps because they have finished looking at the BCF issue and are moving on to the next one.
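The use case above could be sketched as a prioritized, abortable loading loop. This is only an illustration of the control flow, not existing xeokit API: `loadObject`, the two ID lists, and the way visibility is determined are all assumptions here.

```javascript
// Sketch of prioritized, abortable streaming for the BCF use case:
// objects visible from the viewpoint load first, then the rest.
// `loadObject` and the ID lists are hypothetical, not xeokit API.
async function streamModel(visibleIds, remainingIds, loadObject, signal) {
  const loaded = [];
  // Visible objects are queued ahead of everything else
  for (const id of [...visibleIds, ...remainingIds]) {
    if (signal.aborted) {
      break; // User moved on to the next issue - stop loading here
    }
    loaded.push(await loadObject(id));
  }
  // Loaded objects stay in memory until the model is destroyed
  return loaded;
}
```

An `AbortController`'s signal (or any object with an `aborted` flag) could drive the interruption, so the UI can cancel loading without tearing down what is already in memory.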
Considerations
Metadata
How do we handle metadata?
In xeokit, metadata means the information about the IFC elements. For each model, we create a MetaScene and load into that a collection of MetaObjects, each of which holds the IfcElement ID, name and type, and a collection of IfcProperties.
xeokit's tree view (TreeViewPlugin) then builds a hierarchy of nodes that represents the containment hierarchy of the MetaObjects. Currently, this happens in one shot: the TreeViewPlugin expects all the MetaObjects to be loaded up-front.
If we are to use TreeViewPlugin with streamed loading, then we would need to adapt it accordingly.
We could extend MetaScene to be able to incrementally load more MetaObjects into it.
Instead of having it fire a "built" event when DataModel.build() is called, we could extend it to fire an "objectsAdded" event each time that method is called, where the event contains the IDs of the DataObjects that were added since the last invocation of that method.
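The proposed incremental MetaScene could work roughly like this. The class and method names below are illustrative stand-ins for the real MetaScene/DataModel API; the point is only the event contract, where each build() reports just the IDs added since the previous build().

```javascript
// Sketch of an incrementally-buildable metadata scene. Each build()
// fires "objectsAdded" with only the IDs added since the last build(),
// so a tree view can insert just those nodes. Names are hypothetical.
class IncrementalMetaScene {
  constructor() {
    this.metaObjects = {};                 // All MetaObjects loaded so far, by ID
    this._pendingIds = [];                 // IDs added since the last build()
    this._listeners = { objectsAdded: [] };
  }
  on(event, callback) {
    this._listeners[event].push(callback);
  }
  addMetaObject(id, metaObject) {
    this.metaObjects[id] = metaObject;
    this._pendingIds.push(id);
  }
  build() {
    const addedIds = this._pendingIds;     // Only the delta, not everything
    this._pendingIds = [];
    for (const callback of this._listeners.objectsAdded) {
      callback({ addedIds });              // TreeViewPlugin would subscribe here
    }
  }
}
```

A TreeViewPlugin adapted this way would subscribe to "objectsAdded" and splice the new nodes into its existing hierarchy, rather than rebuilding the whole tree.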
Ideas
convert2xkt currently partitions model objects into tiles, called relative-to-center (RTC) tiles. Each object's coordinates are relative to the center of its RTC tile. This mechanism is used in the Viewer to render double-precision coordinates.
convert2xkt could be extended to also output a JSON k-d tree in which each node registers the objects within it. Each node would have the World-space axis-aligned bounding box of its objects. Each object would be registered by its ID, plus a reference to the XKT file that contains it.
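The k-d tree JSON might look something like this. The field names and file-naming scheme are purely illustrative, not an existing convert2xkt format:

```json
{
  "aabb": [-50, 0, -50, 50, 30, 50],
  "objects": [],
  "left": {
    "aabb": [-50, 0, -50, 0, 30, 50],
    "objects": [
      { "id": "wall-042", "xkt": "model.tile3.xkt" }
    ]
  },
  "right": {
    "aabb": [0, 0, -50, 50, 30, 50],
    "objects": [
      { "id": "slab-007", "xkt": "model.tile1.xkt" }
    ]
  }
}
```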
The Viewer would begin by loading the k-d tree JSON.
After positioning its camera (e.g. per the BCF viewpoint), the Viewer would then find the nodes in the k-d tree that intersect its current view frustum (defined by the camera), and begin loading the objects in those nodes.
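The frustum query over the k-d tree could be a simple recursive walk that prunes whole subtrees whose bounding boxes miss the frustum. The frustum/AABB intersection test is passed in as a function here, standing in for the Viewer's real math library:

```javascript
// Sketch of a recursive k-d tree walk that collects the objects whose
// nodes intersect the view frustum. `frustumIntersectsAABB` stands in
// for a real frustum/AABB test and is an assumption, not xeokit API.
function collectVisibleObjects(node, frustum, frustumIntersectsAABB, out = []) {
  if (!frustumIntersectsAABB(frustum, node.aabb)) {
    return out;                       // Whole subtree is off-screen - prune it
  }
  out.push(...node.objects);          // Queue these objects for loading
  if (node.left) {
    collectVisibleObjects(node.left, frustum, frustumIntersectsAABB, out);
  }
  if (node.right) {
    collectVisibleObjects(node.right, frustum, frustumIntersectsAABB, out);
  }
  return out;
}
```

The resulting object list, with each entry's XKT file reference, tells the Viewer which files to fetch first; the remaining nodes can be streamed afterwards in order of distance from the camera.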
As the Viewer loads the objects, it packs their geometry and material attributes into "data textures". These data textures are used within the Viewer's shaders to represent and render the objects.
Each individual data texture represents a draw call within the rendering pipeline. For best performance, we want to minimize the number of draw calls, which means that we want to minimize the number of individual data textures.
There is also a certain amount of overhead in creating a data texture.
Each data texture is managed by a renderer "layer", which accumulates data for objects and then builds its data texture from them.
We could continually add objects to each renderer layer, and then periodically instruct the renderer layer to rebuild its data texture.
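That batching strategy could look like the sketch below. The class and the `rebuildTexture` callback are hypothetical, but they illustrate the trade-off: rebuilding per object would pay the texture-creation overhead on every arrival, while rebuilding per batch amortizes it:

```javascript
// Sketch of batching streamed objects into a renderer layer and
// rebuilding its data texture periodically rather than per object.
// Names are illustrative, not xeokit's renderer internals.
class StreamingLayer {
  constructor(rebuildTexture, batchSize = 64) {
    this._rebuildTexture = rebuildTexture; // Expensive: one texture = one draw call
    this._batchSize = batchSize;
    this._objects = [];
    this._dirty = 0;                       // Objects added since the last rebuild
    this.rebuilds = 0;
  }
  addObject(object) {
    this._objects.push(object);
    if (++this._dirty >= this._batchSize) {
      this.flush();                        // Batch is full - rebuild now
    }
  }
  flush() {                                // Also call when streaming pauses/ends
    if (this._dirty === 0) {
      return;
    }
    this._rebuildTexture(this._objects);   // Rebuild from all accumulated objects
    this._dirty = 0;
    this.rebuilds++;
  }
}
```

The batch size (and/or a timer) would let us tune the balance between seeing new objects promptly and keeping draw calls and texture rebuilds to a minimum.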