Hi everyone,
We have created several monitoring projects in Ataccama, each dedicated to a specific catalog item. Our approach is to create dedicated tables per clientId and measure them within a monitoring project, so we effectively end up with one monitoring project per catalog item.
Could you please share your thoughts on this approach? Is this a best practice or are there more efficient ways to handle multiple client-specific data quality checks?
Additionally, we want to create an individual postprocessing plan for every rule in a monitoring project. For example, if a catalog item has 20 rules, we want 20 separate postprocessing outputs (Excel files), each listing the bad records for one rule. The goal is to share these files with data stewards, who will fix the data and upload it back.
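To illustrate the kind of per-rule output we have in mind, here is a rough Python sketch (outside Ataccama, using pandas on a hypothetical combined export of invalid records with a rule_name column; the column and file names are assumptions, not Ataccama's actual export schema) that splits bad records into one Excel file per rule:

```python
# Rough sketch only: assumes a single exported table of invalid records
# with a column identifying which rule each record failed.
# Requires pandas and openpyxl for .xlsx output.
import pandas as pd
from pathlib import Path

def split_bad_records_per_rule(export_csv: str, output_dir: str) -> None:
    """Write one Excel file per DQ rule from a combined invalid-records export."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)

    bad_records = pd.read_csv(export_csv)

    # One Excel file per rule, e.g. rule_completeness_email.xlsx
    for rule_name, records in bad_records.groupby("rule_name"):
        safe_name = "".join(c if c.isalnum() else "_" for c in str(rule_name))
        records.to_excel(out / f"{safe_name}.xlsx", index=False)

# Example usage (hypothetical file names):
# split_bad_records_per_rule("invalid_records_export.csv", "steward_sheets")
```

This is roughly the granularity we want to deliver to stewards, ideally generated by the monitoring project itself rather than by an external script.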
However, we are encountering a limitation: Ataccama currently does not allow more than 10 postprocessing plans per monitoring project.
What are your recommendations or best practices to handle this? Should we split the monitoring projects differently, or is there an alternative way to generate and share granular postprocessing results per rule?
Any advice on managing large-scale monitoring and postprocessing workflows in Ataccama would be greatly appreciated!
Thank you in advance!