Let’s say a data consumer in my workplace is getting familiar with a data element (as a term) and wants to know who the data owner is. Since we set the person reference for the roles in stewardship, the consumer needs to: 1. click the display name of the data owner to see the username of the referred user, and then 2. click that username to see the details of that user, namely their user ID and email address.

(1) Is it possible in any way for data consumers to get from the display name of the data owner to their user details with a single click?
(2) Is the above workaround applicable to other reference objects (e.g. the policy reference)?

Thanks a lot.
Is there a way to create a “current values only” table in RDM, with no historical records kept? I don’t see an option for this in the documentation or in the ONE Desktop dialogs.
Currently only four sources are supported for data stories:

- Amazon Aurora PostgreSQL
- ONE Data
- PostgreSQL
- Snowflake

We also use Synapse as a source for a lot of catalog items, and it would be great if you could add Synapse to the list above.
Is it possible to create a data asset from an ETL job (SSIS or similar) so that it is included in the data lineage and allows for full impact analysis?
I have a data set created, and the query runs fine in a SQL editor against my local database. I can deploy the configuration with no errors, but when I go to look at the data in Ataccama ONE, I get “Fatal error occurred”, and clicking Details just gives me “Internal error”. The only thing I can think of is that the query is rather long. Is there a maximum length for data set SQL? What is the best way to troubleshoot this? The error messages are not at all helpful.
Hi, I notice that for some of my terms the visual Insights tab is visible and for others it is not. What is the reason for this?

Second question: I would like to automatically assign terms. If a column is named wnplts, then I want to assign the term Residence to it. Is there an easy way to do this? Detection rules seem to look at the values and not at the column name.

Kind regards, Jur Dördregter
Hi everyone! I’m Srija, Solutions Consultant here at Ataccama. Today I’m sharing how to profile a catalog item using GraphQL by providing the catalog item name and the type of profiling.

Profiling is usually performed on catalog items. There are two types of data profiling in Ataccama ONE:

1. Sample profiling
2. Full profiling

We can run profiling from the Ataccama ONE web GUI, but there are limitations. Consider a scenario where profiling needs to be executed for multiple catalog items in one go. In such cases, GraphQL can be leveraged within a plan to facilitate the profiling process. By specifying the catalog items and the type of profiling (sample/full), the profiling operation can be performed seamlessly.

The GraphQL query below runs the specified profiling for the specified catalog item. It can be executed in the playground or Postman, or run from a plan in ONE Desktop. Here’s the query structure:

query ImportCatalog ($connection_id: GID!) {connection(gid: $connection_id) {i
Hey all! I am aware that only stewardship owners have the ability to use the “Document” button at the data source level, but is there a way to disable this button altogether? If a user accidentally uses this feature, it takes a lot of processing to import the metadata, and our best practice is to tell everyone not to use it. Is there a way to do this within the Admin Console? Thanks!
Hi, I have certain fields in my data that should be masked for all users, such as social security numbers and IBANs. Currently, users can see this data in the Sample Data tab and in the profile. Can I apply masking to these attributes without having to create a VCI or SCI? Ideally, masking should be applied based on the term assignment: as soon as I map the data to IBAN, it should be masked for all users.

Regards, Jur Dördregter
Hello everyone, I have a requirement to export the ONE web app’s audit logs, but without using MinIO storage. Is there another way to export log files through the ONE Desktop IDE using API calls, for example a GraphQL query converted into a curl command with a JSON payload? Thanks!
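For reference, a GraphQL call of the kind described above can be scripted directly rather than through curl. Below is a minimal sketch in Python using only the standard library. The endpoint URL, token, and the query itself are placeholders (the exact audit-log fields in the ONE GraphQL schema would need to be taken from your deployment’s schema), so substitute the real values:

```python
import json
import urllib.request

# Placeholders -- replace with your deployment's endpoint, a valid token,
# and the real audit-log query from your ONE GraphQL schema.
GRAPHQL_URL = "https://one.example.com/graphql"
TOKEN = "your-bearer-token"
QUERY = "query { auditLogs { timestamp user action } }"  # hypothetical fields

def build_graphql_request(url, token, query, variables=None):
    """Build a POST request carrying a standard GraphQL JSON payload."""
    body = json.dumps({"query": query, "variables": variables or {}})
    return urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
        },
        method="POST",
    )

def export_logs(path):
    """Send the request and write the JSON response to a local file."""
    req = build_graphql_request(GRAPHQL_URL, TOKEN, QUERY)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```

This builds the same request a curl command would send (`curl -X POST -H "Content-Type: application/json" -d '{"query": ...}' <url>`), so the payload can be reused in whichever tool ends up issuing the call.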
Hello Ataccama team, I am working with Ataccama 14.5. I would like to know if there is documentation to learn more about the DPE service. Kind regards.
Hi Community,

The purpose of the following scenario is to update multiple term names and definitions. I have created a plan that writes the term names, definitions, and GIDs to a CSV file. In that file I can update the names and definitions. With another plan I read that file and write it back into Ataccama.

That works well, except in the following case. The text of a definition can be entered on multiple lines by pressing Enter, so ONE web will show:

Text on line 1.
Text on line 2.
Text on line 3.

In the export (the CSV file), the text looks like this:

"[{""type"":""paragraph"",""children"":[{""text"":""Text on line 1.""}]},{""type"":""paragraph"",""children"":[{""text"":""Text on line 2.""}]},{""type"":""paragraph"",""children"":[{""text"":""Text on line 3.""}]}]"

If you import the text like this, the content does not appear on three lines, but literally as above, on one line.

How can I import the text so that it appears on three lines again?

Thanks for any suggestion. Kind regards, Albert
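The exported value is rich-text JSON: one "paragraph" node per line, with the inner quotes doubled by standard CSV escaping. A minimal sketch of the round trip in Python (the helper names are my own; the JSON shape is copied from the export shown above, and whether the import plan accepts this encoding would need to be confirmed against your version):

```python
import json

def to_rich_text(text):
    """Encode plain multi-line text as the rich-text JSON structure
    seen in the CSV export: one 'paragraph' node per line."""
    return json.dumps([
        {"type": "paragraph", "children": [{"text": line}]}
        for line in text.splitlines()
    ])

def from_rich_text(value):
    """Decode the rich-text JSON back into plain lines."""
    return "\n".join(
        child["text"]
        for node in json.loads(value)
        for child in node.get("children", [])
    )

# Round-trip the three-line definition from the example.
definition = "Text on line 1.\nText on line 2.\nText on line 3."
encoded = to_rich_text(definition)
assert from_rich_text(encoded) == definition
```

The doubled quotes (`""type""` and so on) are only CSV escaping of the quoted field, not part of the stored value; a CSV reader such as Python’s csv module removes them automatically when parsing the file.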
Hey everyone! We have a use case involving DataStage, and I’m curious if anyone has worked with this in the past. We have a team whose DataStage jobs are triggered by a table pushing information downstream. We are wondering whether, either in the web app or in ONE Desktop, we can use the trigger files created by DataStage, plug into the DataStage flow somehow, or find another way to trigger a monitoring project simultaneously to run data quality checks against the load. If certain data is not loaded correctly, it breaks the join between the two tables and in turn breaks the job, and they want to see the population of records that is breaking the join. Hope this makes sense!

Thanks, Thomas
Hello everyone, We recently upgraded to version 13.9.3 and gained the ability to export DQ rules from one Ataccama environment and import them into another. This works great, but I need to do the same for our monitoring projects: export them from DEV and import them into a higher environment. Unfortunately, there doesn’t seem to be a plan available in my version to export monitoring projects. Has anybody figured out a way to migrate monitoring projects without recreating them in each environment? Alternatively, I am considering writing my own plan. Is there any documentation on how to write the JSON for export plans? Thanks!