Hi folks,
We have a requirement to run data quality (DQ) evaluation on a table with over 10 billion records. We have to perform various checks on this table, including aggregations for uniqueness, completeness, data format, and so on. Sampling a subset of the data doesn't bring the volume down to a workable range either, since the subset still contains billions of rows.
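To make the question concrete, here is a minimal sketch of the kinds of aggregation-based checks I mean, written against a tiny in-memory SQLite table. The table and column names (`events`, `id`, `email`) are just illustrative stand-ins for our real schema; the actual table is billions of rows.

```python
import sqlite3

# Tiny in-memory table standing in for the real multi-billion-row table.
# Column names (id, email) are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, "a@x.com"), (2, None), (2, "b@x.com"), (3, "not-an-email")],
)

total = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]

# Uniqueness: duplicate ids indicate a key violation.
distinct_ids = conn.execute(
    "SELECT COUNT(DISTINCT id) FROM events"
).fetchone()[0]
uniqueness = distinct_ids / total

# Completeness: fraction of non-null emails (COUNT(col) skips NULLs).
non_null = conn.execute("SELECT COUNT(email) FROM events").fetchone()[0]
completeness = non_null / total

# Format validity: fraction of rows whose email matches a crude pattern.
valid = conn.execute(
    "SELECT COUNT(*) FROM events WHERE email LIKE '%_@_%._%'"
).fetchone()[0]
fmt_validity = valid / total
```

Each of these is a full-table aggregation (COUNT, COUNT DISTINCT, pattern matching), which is exactly what becomes expensive at our scale.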
What would be the ideal way to do DQ on such tables? Your inputs are highly appreciated. Thanks!


