Hi @jmcorreia, I was working on this and added a DQ validator check, but when I execute the code using the acon below I get the following error.
Could you please help with this? Thanks & Regards,
Answered by PMRocha, Jan 9, 2024
Hello @jaina15, currently dq_specs supports file_system and s3 as backends. By default the store_backend for data quality is set to s3; to change this, you need to pass it as file_system.
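A minimal sketch of what that could look like in an acon. The exact field names (spec ids, paths) below are illustrative assumptions, not copied from your acon, so please adapt them to your setup:

```python
# Hypothetical acon fragment: switch the DQ store backend from the
# default "s3" to "file_system". The spec_id, input_id, and path are
# placeholder values for illustration only.
acon = {
    "dq_specs": [
        {
            "spec_id": "dq_validator_check",       # assumed spec id
            "input_id": "source_data",             # assumed input id
            "dq_type": "validator",
            "store_backend": "file_system",        # instead of the default "s3"
            "local_fs_root_dir": "/app/tests/dq",  # assumed local dir for DQ artifacts
        }
    ],
}
```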
Hello @jaina15,
was the table created before running this process?
You can enable automatic schema updates by setting spark.databricks.delta.schema.autoMerge.enabled to true in your environment. To do this, you can add an exec_env section at the end of your acon.
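For example, the exec_env section could look like the sketch below. The Spark conf key is the real Delta schema auto-merge setting; the rest of the acon structure is abbreviated for illustration:

```python
# Hypothetical acon fragment: enable automatic schema evolution for
# Delta writes via exec_env. Only the exec_env part is shown; the
# other acon sections are elided.
acon = {
    # ... input_specs, transform_specs, dq_specs, output_specs ...
    "exec_env": {
        "spark.databricks.delta.schema.autoMerge.enabled": True,
    },
}
```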
Regarding the DQ errors, it really depends on your use case: you can filter out the rows with no name, or fail the process if that data is critical.
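Schematically, the two options look like this (plain Python for illustration only; in a real pipeline this would be a Spark DataFrame filter or a DQ spec configured to fail the load):

```python
# Schematic only: two ways to handle rows whose "name" is missing.
rows = [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": None},  # offending row with no name
]

# Option 1: drop the offending rows and continue the load.
clean_rows = [r for r in rows if r["name"] is not None]

# Option 2: fail the process if rows with no name are critical.
def validate(rows):
    if any(r["name"] is None for r in rows):
        raise ValueError("Rows with no name found; aborting load.")
```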