Replies: 3 comments
-
I think this would be difficult to implement, because Entities are really just orchestrators behind the scenes, and allowing two different task hubs would add a fair amount of complexity. One option you have is to bind to a […]. @cgillum and @sebastianburckhardt, what are your thoughts around this?
-
As pointed out by this issue (and others), I think it may be worth investigating better options for storing entity state. In particular, we could allow it to be kept separate from the orchestration input (which is used internally to schedule entity operations). Perhaps we could add support for plugging in an external storage provider for entity states, together with a configurable writeback policy (implicit or explicit). This could also go a long way toward improving scalability for large entities, and give us better options for configuring entity state caching.
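To make the idea concrete, here is a rough sketch in Python of what a pluggable entity state store with a configurable writeback policy could look like. All names here (`EntityStateStore`, `EntityContext`, the `writeback` values) are hypothetical illustrations, not an existing Durable Functions API:

```python
import json
from abc import ABC, abstractmethod

# Hypothetical interface: a pluggable store for entity state, kept separate
# from the orchestration messages that schedule entity operations.
class EntityStateStore(ABC):
    @abstractmethod
    def load(self, entity_id: str):
        ...

    @abstractmethod
    def save(self, entity_id: str, state) -> None:
        ...

# Trivial in-memory implementation; a real provider might target blobs,
# tables, or an external database.
class InMemoryStateStore(EntityStateStore):
    def __init__(self):
        self._data = {}

    def load(self, entity_id):
        raw = self._data.get(entity_id)
        return json.loads(raw) if raw is not None else None

    def save(self, entity_id, state):
        self._data[entity_id] = json.dumps(state)

class EntityContext:
    """Wraps a store with an implicit or explicit writeback policy."""
    def __init__(self, entity_id, store, writeback="implicit"):
        self.entity_id = entity_id
        self.store = store
        self.writeback = writeback
        self.state = store.load(entity_id)
        self._dirty = False

    def set_state(self, state):
        self.state = state
        self._dirty = True
        if self.writeback == "implicit":
            self.flush()  # write through on every update

    def flush(self):
        if self._dirty:
            self.store.save(self.entity_id, self.state)
            self._dirty = False

store = InMemoryStateStore()
ctx = EntityContext("@counter@c1", store, writeback="explicit")
ctx.set_state({"value": 41})
print(store.load("@counter@c1"))  # None: not yet persisted under explicit writeback
ctx.flush()
print(store.load("@counter@c1"))  # {'value': 41}
```

Under an explicit policy the runtime (or user code) controls when state is persisted, which is where the caching and large-entity scalability benefits would come from.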
-
I haven't thought about this much from a Durable Entities perspective yet, but you may want to take a look at some of the guidance in our Zero Downtime Deployment documentation. In particular, the last section of that document describes a couple of properties that can be used to preserve state across multiple versions of your app. When running in Functions, you can set these in your host.json file as follows:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHub",
      "storageProvider": {
        "trackingStoreConnectionStringName": "...",
        "trackingStoreNamePrefix": "MySharedStore"
      }
    }
  }
}
```

You can use the same Storage Account you're already using, or you can put the tracking store in a separate storage account (which I believe the documentation leads you to do). The trick is that all versions of your function app should use the same tracking store settings.

This was primarily intended as a way to continue querying old orchestration instances that were created in older versions; I don't think it will work for continuing to execute them. However, it might make it relatively simple to "migrate" entities from one version to another, possibly by simply updating some properties in Azure Storage. This is something we should look into more closely.

FYI @anthonychu
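To illustrate why such a migration might be simple in principle, the sketch below simulates copying entity rows between the Instances tables of two task hubs. This is purely conceptual: task hub table layouts are an internal implementation detail, real rows contain more columns than shown, and the table names and fields here are assumptions for illustration only. It relies on the fact that durable entity instance IDs have the form `@entityname@entitykey`, so they can be distinguished from plain orchestration instances:

```python
# Conceptual simulation: each dict stands in for a task hub's Instances
# table, keyed by instance ID. Column names are illustrative, not real.
old_hub_instances = {
    "@counter@c1": {"Input": '{"value": 41}', "RuntimeStatus": "Running"},
    "orch-1234":   {"Input": "{}", "RuntimeStatus": "Completed"},
}
new_hub_instances = {}

for instance_id, row in old_hub_instances.items():
    # Durable entity instance IDs look like @name@key; orchestrations don't.
    if instance_id.startswith("@"):
        new_hub_instances[instance_id] = dict(row)

print(sorted(new_hub_instances))  # ['@counter@c1']
```

A real migration tool would query the old hub's storage tables and write to the new hub's, which is exactly the hard dependency on internals that the original poster was worried about — hence the suggestion to look into first-class support.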
-
Is your feature request related to a problem? Please describe.
My application currently creates a new task hub on every breaking build change, as per documented best practices.
While this works well to prevent failures or the execution of inconsistent orchestration logic, it also discards all durable entity state. Because we use durable entities to maintain application state, losing these entities between builds creates data gaps in our application.
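For context, the per-build task hub name is typically configured in host.json; the hub name can also resolve from an app setting (the `%...%` syntax), which a deployment pipeline can bump on each breaking change. The setting name `TaskHubName` below is just an example:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "%TaskHubName%"
    }
  }
}
```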
Describe the solution you'd like
There are several ways I can think of to address this, such as providing methods to save/load an entity to a named storage blob/table, or allowing entities to exist in a separate task hub from orchestrations.
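The first option, explicit save/load of entity state to a named blob or table, could look roughly like the following sketch. The functions and the idea of serializing every entity into one JSON document are hypothetical; nothing here is an existing Durable Functions feature:

```python
import json

# Hypothetical snapshot/restore flow: before cutting over to a new task hub,
# export every entity's state into one JSON document (which could be stored
# in a named blob), then re-seed the entities in the new hub from it.
def snapshot_entities(entities: dict) -> str:
    return json.dumps(entities, sort_keys=True)

def restore_entities(snapshot: str) -> dict:
    return json.loads(snapshot)

old_hub = {"@counter@c1": {"value": 41}, "@cart@u7": {"items": ["sku-1"]}}
blob_contents = snapshot_entities(old_hub)  # e.g. written to a blob pre-deploy
new_hub = restore_entities(blob_contents)   # e.g. replayed into the new hub
assert new_hub == old_hub
```

The hard part a real implementation would need to solve is quiescing the entities so the snapshot is consistent, which is why first-class support would be preferable to doing this from user code.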
Describe alternatives you've considered
Given the current implementation of Durable Functions, I've considered creating a function that copies entity instances out of the task hub by querying the Azure Storage table directly, but this creates a hard dependency on the internal implementation of task hubs, which would reduce our application's reliability. Because our deployments are infrequent, we decided that data loss is preferable to potentially breaking builds/deployments in an unrecoverable way.