
Version 15.4 LTS is here! Updates to Data Observability, ONE AI, MDM & RDM

Hi everyone! 👋 Today, we're very excited to announce the release of v15.4 Long Term Support (LTS) 🎉 As always, this release comes with a number of new and highly anticipated features, updates, and fixes. Let's get into what's new 👇

🟣 ONE

Data Observability dashboards

If you are new to Data Observability, DO dashboards provide a single place where you can find information on all data quality or schema change issues. You can run data observability manually, close issues, or add newly discovered catalog items to your observed system.

With 15.4 we are bringing a unified approach to the tabs in the issues table, extending the same functionality to all of them:
- On the All Issues tab, you can now search for catalog items, the same way as on other tabs.
- On the Freshness tab, you can now sort issues by the Detected column, the same way as on other tabs.

Complete DQ evaluation rule generation using generative AI

In version 15.3, we introduced the ability to create realistic and diverse datasets to test your DQ rules and to have ONE AI help you debug rules by generating example inputs. Now, you can use generative AI to build full data quality rules, including both inputs and rule logic, and quickly test and regenerate rules through new prompts.

How?
- Switch to the Implementation tab of one of your DQ rules and select the Use AI to generate Rule logic and inputs option.
- Write a plain-text prompt in natural language in the Define the rule field, and that's all!
- Wait for ONE AI to generate the definition, then edit and update it if necessary.

⚠️ Be aware that accepting the ONE AI input overwrites any existing rule implementation.

With this update, any data engineer or steward can start building accurate rules faster than ever. 🚀

📌 If you haven't enabled the AI features yet and you are on Cloud, you can see how here: Generative AI in ONE.

Add preview and thumbnail screenshots for Report Catalog Items

If you are a Power BI user, this update is for you! Previously, no report or dashboard thumbnails were available on the catalog item listing page, which made it harder to get report previews. Now you can upload thumbnails and previews for all Report Catalog Items. These override the preview taken from the data source and are used on your catalog item listing page, without adding another subscription ✅

Data Lineage

Previously, only limited information about objects imported to the lineage repository in Ataccama ONE was accessible to users, not all users were able to initiate lineage analysis from a catalog item that is not in the data catalog, and access to the list of dataflows and dataflow properties was limited. We're happy to share that with v15.4 you can now browse, search, filter, sort, and access all objects in the lineage repository.

How?

If you'd like to start a lineage or impact analysis, open the Lineage assets screen and search or filter for the catalog item you want to analyze. The analysis starts simply by opening the item details and then the lineage diagram. For more information on lineage, please check out our documentation or let us know your questions in the comments!

🟣 MDM & RDM

Tasks

We are continuing to enhance our Tasks feature following the updates in v15.3, this time with improvements to roles, permissions, and visibility. You can now modify task names, descriptions, and severity levels if your role has edit permissions. Custom permissions also let you restrict task visibility to only those entities that a role has permission to view. To learn more about roles and permissions, check out Configuring Permissions.

In addition, we brought some navigation improvements: the Assignee and Assignee group lists are now sorted alphabetically for easier navigation, and task listings in the MDM Web App display only one line of the description, with the full text accessible on hover and searchable using Ctrl+F.

Export records with published changes

You can now export records with published changes through both the RDM Web App and the RDM REST API and use them for auditing purposes outside of RDM. The export is available in JSON format and contains the record primary keys, the record identifier in the table, and the changed columns. This allows you to uniquely identify each record and see which values were modified, when, and by which user.

To export records from the web application, open the Change Log tab and narrow down your record selection using filters. Select View and then Export. The exported JSON file is then downloaded to your machine. For details, see RDM Change Log.

The Change Log tab now shows whether any filters are currently applied or whether you are viewing the full changelog. This way, you can quickly make sure you are exporting the records you are interested in, as the applied filters also determine which records are included in the export file. You can find this information above the record listing.

To export the changelog using the API, send a GET or POST request to the new /changelog endpoint. You can filter the records through query string parameters or the request body, respectively. To learn more, check out the RDM REST API page.
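To give a rough idea of how the API-based changelog export could be scripted, here is a minimal sketch using Python's requests library. The base URL, authentication, and the specific query parameter names are assumptions for illustration; only the /changelog endpoint and the GET/POST filtering options come from the release notes, so check the RDM REST API documentation for the exact contract.

```python
import requests

# Assumed base URL and token; replace with your RDM server and credentials.
RDM_BASE_URL = "https://rdm.example.com/api"
TOKEN = "<your-access-token>"

# Hypothetical query parameters: filter the changelog to one table and a date range.
# The real parameter names are documented in the RDM REST API.
params = {
    "table": "country",
    "updatedFrom": "2024-01-01T00:00:00Z",
}

response = requests.get(
    f"{RDM_BASE_URL}/changelog",
    params=params,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# The export is JSON: each entry should carry the record's primary keys,
# its identifier in the table, and the changed columns with old/new values.
for change in response.json():
    print(change)
```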
🟣 Additional improvements & updates

Remote execution on selected DPEs

You can now choose which engine executes your plan remotely by specifying a DPE label in ONE Desktop. This is particularly useful when you work with multiple DPE instances, such as hybrid DPEs and standard DPEs, because it lets you direct your plan execution to a specific engine: define which engines can connect to private on-premise data sources, or send demanding or complex jobs to a DPE with more resources available.

Other additions in this release:
- Native Amazon SQS streaming connector
- New matching step parallelism configuration

To find out about all v15.4 improvements and fixes, please check our documentation release notes. Hope you've enjoyed our version 15.4 updates! Please let us know your thoughts, questions, or suggestions in the comments below 👇

Related products: MDM, RDM

[Part II] Version 15.3 is here! Updates to ONE AI, ONE, and ONE Data 🤖

Hi community! This is Part II of our v15.3 release updates ⚡ If you'd like to read Part I, check it out here. In this post, you can find out what's new in ONE and ONE Data in v15.3. So let's get to it!

ONE

🤖 AI generation of test data

With v15.3 we're introducing a number of updates to improve your testing processes, and as always something on top of that! You can now create realistic and diverse datasets to test your DQ rules and have ONE AI help you debug rules by generating example inputs, saving you time and speeding up your testing process.

How?

On the Test Rule screen, select Generate inputs in the Test section. Please note that, at the moment, this is only possible for rules with a single rule condition.

ONE AI can generate descriptions for your attributes, rules, and business terms with broader context 🎊 ONE AI now offers extended attribute descriptions that include parent catalog item information, increasing the information available for an attribute, rule, or term.

📌 If you haven't enabled the AI features yet, you can see how here: Generative AI in ONE.

🥷 Automated data hiding

Data-hiding functionality is one of the most discussed topics on the community. Ataccama administrators need to limit visibility and access while increasing usage of the platform, and now we are adding automation to that! Data hiding is based on the attribute's classification, and we are now automating it with a new user access control mechanism in ONE: data protection classification. Data protection classification enables you to manage access to sensitive and restricted data in your catalog.

How?
- Hide data in attributes containing sensitive information by applying data classification tags to terms.
- Manage access to protected data for specific groups and users by editing their access level for data classification tags.
- Define the visibility of profiling and DQ insights for protected attributes.

To view existing data protection classification tags, go to Data Protection > Data Protection Classifications. To find out more, check out Data Protection Classification.

Database export in data transformations

Using the Database output step, you can write transformation plan outputs directly to a database table, either overwriting the catalog item data, appending new data, or creating a new catalog item. This includes:
- Exporting transformed data to a database table.
- Appending or replacing data.
- Creating new catalog items within the flow.

For more information on how to create data transformations, see Data Transformation Plans.

ONE DATA

Compare and merge records with ONE Data

Previously, it wasn't possible to easily compare records side by side and see the differences, or to merge them swiftly, for example when you identify duplicates. We are excited to share that you can now compare and merge records manually in ONE Data, remove any duplicates, and maintain the quality of your data.

Why use the comparison?
- Remediate data quality issues and resolve failed uniqueness checks.
- Compare selected records side by side.
- Merge records using the selected base record, a value from any other record, or manual input.

How?

To begin the comparison, select at least two records in a ONE Data table. You can choose the merged value in any of the following ways:
- Select the value from a record column.
- Select the value in the Merge preview column or enter a new value manually.

To merge the records with your changes, select Merge records. Check out our documentation, Compare and Merge Records with ONE Data, for more information and step-by-step implementation guides.

Hope you've enjoyed our version 15.3 updates! Please let us know your thoughts, questions, or suggestions in the comments below 👇

Related products: ONE Data

[Part I] Version 15.3 is here! Updates to Monitoring Projects and MDM 🚀

Hi community 👋 We're very excited to announce that version 15.3 is out! We are proud to share that we have now launched the Data Hiding and Data Slicing features, which enhance access management and performance and which many of you have asked for. Plus, there are over 250 big and small improvements overall 😮

Read on to find out what's new in Part I of the v15.3 updates, and let us know if you have any questions or feedback in the comments below 👇👇

MDM

🔗 Tasks: clickable links and post actions on workflow transitions

You can now add a clickable link field to the task description to provide any important and relevant information about the task. The link can point to your source data or anything else you would like to share with users. The functionality is enabled in the application.properties file (see MDM Server Application Properties: Task Configuration), and the links can be created and updated via the REST API.

Tasks also have another exciting new feature! We just introduced the option to define one or more actions that are automatically executed after a specified transition in the task workflow. You can set a post action to send out a notification after a workflow or task status changes, either via email or by sending a message to a JMS topic or queue to notify a downstream system. Additional post actions you can set up are:
- creating a new task
- updating a record

Check out our Configuring Workflows and Permissions article to find out how to set up post actions step by step. Learn more about the rest of the MDM fixes, improvements, and updates in our Release Notes.

Monitoring Projects

🍕 Data Slicing

Previously, monitoring projects ran on the whole table, which could be a resource- and time-consuming task, especially for tables containing large amounts of historic records that are constantly updated with new data. We've now introduced data slicing, which you can use to evaluate a subset of a catalog item based on selected criteria rather than process the entire item with every job, saving resources.

Using data slices you can:
- Optimize monitoring project performance on large datasets where only a small portion of the data is new.
- Slice a large volume of data by one or more selected attributes and run monitoring only on this relevant subset of data.
- Slice by date or string values.
- Easily define what the subset is based on, for example an attribute containing date information or a country value. The slice can be dynamic (that is, always based on the previous day or month) or static (based on a specific range).

When to use data slices?
- When you need to slice or filter a large volume of data by a selected attribute and run DQ checks only on the relevant subset of data.
- When you need to run DQ checks, profiling, or DQ evaluation for a data subset filtered on predefined criteria.
- When you want to see the results of DQ check runs for a previously filtered data subset, so you can assess how the DQ results progress over time.
- When you want to see the results of a DQ check run for a full table, so you can use it in your project if required.

To learn more about data slicing, please see the Create a Data Slice article.

✅ Additional improvements

You can now assign auto-generated rule suggestions on the monitoring project rule suggestion page. And you can hide non-matching rules in monitoring projects while focusing on relevant attributes with DQ checks applied.

Stay tuned for Part II, where we talk about data hiding, updates to ONE AI, and more! If you have any questions, please let us know in the comments below 👇👇👇

Related products: Data Quality, MDM

Ataccama documentation has a new home

We're thrilled to announce that Ataccama product documentation can now be found at https://docs.ataccama.com for version 14.5.x and later.

In addition to the visual refresh and updated branding, the new documentation portal features better search, improved navigation, and feedback options for a smoother and more comfortable user experience. Moreover, our documentation is now open to the public*, so you can find answers to your questions even faster.

*Exceptions include upgrade guides and security-related information, such as Security Advisories, which remain available only to Ataccama customers after authentication.

What if I'm using version 14.4.0 or earlier?

Versions prior to 14.5.0 are available only to Ataccama customers. You can find them at https://support.ataccama.com/downloads. Log in using your existing credentials.

What if I can't find my product at https://docs.ataccama.com?

This means that the migration for this product is still in the works, and it will take us a few more weeks to complete it. In the meantime, you can access the documentation in PDF format at https://support.ataccama.com/downloads. Log in using your existing credentials.

Where do I find installation and upgrade packages?

Log in to https://support.ataccama.com/downloads using your existing credentials, then find the version you're using.

Give us feedback!

Given the scope of the project, there is a chance you run into some hiccups and spot imperfections. Please let us know about these issues and help us make the portal even better. We hope you enjoy the new documentation experience and look forward to your feedback!

❗ Some of the links shared on the community might not be accessible due to the documentation portal transition. Please get in touch with us to request an up-to-date documentation link or access to previous resources.


Data Integration in v15.1 🚀

Hi community 👋 Curious about what's new in the Data Integration features in v15.1? We have updates that bring you streamlined deployment, enhanced customization, and extended capabilities to elevate your data processing experience. Read on to find out more 😎

Self-service deployment in managed Kubernetes

Easily deploy data-processing capabilities to your Kubernetes infrastructure with no manual setup required. This entirely self-service update makes it much easier to integrate with the platform's cloud services. The feature primarily focuses on ease of deployment if you are operating your own Kubernetes cluster, and it also minimizes the amount of data leaving your premises, adding another layer of privacy and security 🔐

Custom file import settings 📂

Tailor CSV and Excel file import settings for individual files, folders, or entire connections for enhanced flexibility. You can now populate your Data Catalog with correct metadata from a file or object storage connection and pin down the file import options so that every file is imported and processed correctly each time.

How it works:
- Grant access only to the file import settings.
- Delegate updates and edits of catalog items and the file import settings for a file, any (sub)folder, or a whole connection.
- It's also possible to change the settings for a file while only previewing the data read with the modified settings.

Orchestration Server updates

The Portal Orchestration Server now has more, yet simplified, capabilities, adding connectivity to Azure Data Lake Storage Gen2, Google Cloud Storage, Google Drive, Kafka, and Salesforce. That's not all! There is also extended monitoring with business metrics for workflows, tasks, and plans, including running/max and successful/failed indicators. Learn more about the improved configurations, file handling, and debugging.

Questions? Thoughts? Share them in the comments below 👇

Related products: DI

Updates to Data Governance v15.1 🏢

Hello community! 👋 We hope that you are excited about all the updates, and here is another one! Get ready for the latest in Data Governance in version 15.1, where we are introducing new features and improving some existing functionality 🚀

What's new?

Tableau Server connection in Report Catalog

In addition to Tableau Cloud connectivity, you can now connect ONE Catalog to Tableau Server (on-premise). The connection is unidirectional and allows you to store report previews of the assets created in Tableau.

How it works:
- Create the connection from ONE to your BI tool (Tableau). You can follow the instructions described in Tableau Connection, and import reports.
- Reports can be imported on several levels: a whole project, a workbook of a particular project, a data source of the workbook, a particular sheet, a dashboard, or a story. You can either select the whole project or click to expand the list of workbooks.

Review and sign off term suggestions in one place

Save time with the new Term Suggestions screen under the Knowledge Catalog for processing AI term suggestions. You can now see, approve, and reject them from a single place. Suggestions processed here are automatically published.

How it works:
- Filter the suggestion results according to your business needs and review the term suggestions available.
- Approve or reject them in the Action column with one click.

Review workflow usability improvements

You can now use more granular task assignments in the Review Request Workflow and have reviews done by specific roles or users. If there are any changes in the team structure or access levels, the right people can still see the task and reassign it. Streamline the approval of the changes you make and minimize room for error with the enhanced workflow.

How it works:
- Assign roles in Review Request Workflows to ensure security and access are given to the right users.
- If there are changes in the team structure or access levels, you can reassign the task as well.

Enhanced metadata import

You can now reliably scan even very large data sources and access the results faster than before.

How it works:

If you have a large-scale data source in your data landscape, you can easily import its structure (metadata) in one go. The limits on the number of assets in one data source were improved by an order of magnitude; for example, for common databases you can now import up to 850,000 catalog items (tables) at once. Metadata import is also significantly faster, up to 3x faster than before ⚡

Let us know if you have any questions or feedback on the latest Data Governance updates in the comments below 👇

Related products: Data Governance

Updates to ONE Data v15.1 ✨🎨

Hi everyone, we are so happy to share more exciting updates in v15.1 🚀 Read on to find out what's new in ONE Data in the latest version 👇 Let's dive in 🤿

Remediate DQ issues from monitoring projects

You can now import data and DQ results from a monitoring project (MP) into ONE Data in a self-service way, without any IT support to define post-processing plans using ONE Desktop. Your data and DQ results are loaded into ONE Data on each monitoring project run, with an option to import either all records or only invalid records.

UX improvements 🎨

We also have multiple UX improvements, from the ability to fill in multiple cells at once to enhanced keyboard shortcuts, improved DQ filter behavior, and more!

Stewardship in ONE Data tables 👥

You can now manage access to ONE Data tables through stewardship and change the access levels when you create a new dataset. If you are creating a table from scratch instead of loading an existing catalog item or importing a file, change the stewardship from the table Overview tab.

Top tip ✨ Assign ownership of your tables when creating a new ONE Data table to prevent any unauthorized access. If it is not set on the table level, the stewardship configuration is inherited from the data source.

Overwrite existing ONE Data tables ✍️

When loading data to ONE Data from another catalog item in ONE, you can now overwrite an existing table instead of creating a new one. To do this, select Overwrite existing ONE Data table and choose the table you want to replace. Keep in mind this deletes all existing data from the selected table.

Data operations within the platform have been updated to distinguish them from operations that export data outside of it: we have a new control design and updated terminology, and Load to ONE Data now functions as a standalone operation for easy access.

Let us know what you think of the newest improvements in the comments below 👇

Related products: ONE Data

Updates to Data Stories v15.1🎇

Hi community 👋 In this post, find out about the latest updates to Data Stories in v15.1, such as seamless navigation between Data Stories and the Knowledge Catalog for a unified experience, and the enhanced Visualization Builder with condensed layouts and new functionality. You can now use Data Stories more intuitively and efficiently for data storytelling 🚀 Let's dive in!

What's new with Data Stories?

Seamless navigation to the Knowledge Catalog 📚

Link to the Knowledge Catalog in the Data Stories navigation: moving between Data Stories and the Knowledge Catalog has never been easier, with direct access to your Knowledge Catalog.

Visualization Builder 🎨

- Updated navigation and attribute list: a more condensed layout designed to fit displays with smaller resolutions, ensuring a more efficient and visually pleasing experience.
- Enhanced data view: the source catalog item or Data Quality (DQ) result name is prominently displayed above the raw data in the table, providing a clear reference to data sources.
- Ability to rearrange tabs in the Visualization Builder: when you have multiple visualizations in a collection, it's not always easy to find the ones you need. Now you can organize your tabs in collections for easy management and group them by topic or visualization type.
- Ability to define segments in the KPI widget: the KPI widget now allows you to define specific performance ranges visually, providing a clearer representation of key data points.
- Top/Bottom Values feature refresh: the feature formerly named Top N has been renamed Top/Bottom Values, accompanied by a refreshed UI and an information box explaining its impact on visualization results. Support has also been extended to the Pivot Table.

Additional improvements

- Effortless visualization copying: managing visualizations is now easier with new features that allow you to copy or move a visualization from one collection to another. This not only saves you time but also helps you better organize your visualizations by grouping them according to your needs.
- Localization support: you can now use Data Stories in English, German, French, Italian, and Czech 🗣️ Your ONE language preference is reflected in Data Stories, and you can always change it under User Settings.

We hope that you are as excited about these enhancements as we are! They are built to make your Data Stories experience even more intuitive, efficient, and tailored to your specific needs. Stay tuned for more exciting announcements and share your thoughts on the latest Data Stories features in the comments 👇

Related products: Data Quality, DS

ONE AI is here 🚀

Hi everyone 👋 Get ready for some exciting news 🥁🥁🥁 This release is probably one of the most awaited updates for Ataccama ONE: we are so excited to share that ONE AI is finally here!

ONE AI can generate DQ rules, create and interpret SQL queries, automate routine tasks, and more, allowing your teams to devote their time to more strategic and value-driven initiatives while making trusted data easily accessible to all users.

AI-powered Data Quality ✅

- Generate complex DQ rules from plain text or apply AI-driven rule suggestions.
- Empower any user to improve DQ effortlessly by converting plain text into recommendations and actionable DQ rules, no coding needed.

ONE AI can generate rule suggestions based on your metadata and profiling results, and you can decide which new rules to approve according to their relevance and your needs. It's also possible to use ONE AI to automatically detect the most relevant DQ rules for your specific datasets and create ONE Expressions in rule implementations.

Assisted rule management, anomaly detection, time series analysis, and freshness monitoring are just a few other highlights of the AI-driven data quality features, making rule creation and assignment much easier.

AI-powered Data Governance 📖

Generate table descriptions

Automatically generate descriptions for your table catalog items to streamline data catalog management and enhance data asset documentation. You can now document data much faster by leveraging ONE AI for automated asset categorization, classification, and description creation. Use AI to fill in a table description in the database or catalog and help your team members navigate the various tables with ease. Don't forget to save before leaving!

SQL generation 🪄

- Create SQL queries using natural language.
- Interpret existing SQL queries for better understanding.
- Perform SQL-to-text explanations.

Say goodbye to the complexities of SQL! When you need to create specific data assets, for example to track data quality and anomalies on a partition of a table, you would normally use SQL. Now you can express your data needs in plain text and let ONE AI generate the correct SQL syntax for the specific database. ONE AI can also interpret an existing SQL query, perhaps created by one of your colleagues, and turn it into plain text so anyone can understand it.

Chat with our documentation 💬

- Get simplified documentation access: chat with 100+ docs available on any app page.
- Ask your questions and let the AI help you navigate our documentation with follow-ups and resources.

You can now ask questions and query data using plain language at ⚡ speed. Don't dig through long product documentation: simply ask what you want to know and ONE AI will answer it for you.

For a detailed overview of the ONE AI v15.1 features, check out Generative AI in ONE. Stay tuned for more updates and share your questions and feedback on ONE AI below! 👇 To find out more, please contact us here or reach out to your Account Executive. 👋

Related products: Data Governance, Data Quality

14.5 Updates: MDM & RDM 🤟

Hi all! We hope that you have enjoyed learning more about the recent updates and improvements in version 14.5. If you haven't had the chance to check them out yet, you can find them below 👇 In this post, I'll be covering the latest changes in MDM & RDM in v14.5. Read on to find out what's new 👀

Master Data Management

Task Management REST API 📃

We have introduced a REST API for task management in MDM. Using the REST interfaces, you can perform operations on individual or multiple tasks. The REST API currently supports the following operations:
- Retrieve tasks for a specific record or based on specific parameters.
- Retrieve details, comments, and history for a specific task.
- Assign a task.
- Add a comment to a task.
- Delete a task.
- Move a task to a different workflow state.
- Discard a task.
- Create multiple tasks.
- Update multiple tasks.
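As a rough illustration of how such an API could be scripted, here is a minimal Python sketch that retrieves tasks and adds a comment to one of them. The base URL, endpoint paths, payload fields, and authentication are assumptions made for illustration only; the supported operations listed above come from the release notes, so refer to the MDM Task Management REST API documentation for the actual endpoints and request formats.

```python
import requests

# Assumed server address and credentials; replace with your MDM deployment.
MDM_BASE_URL = "https://mdm.example.com/api/tasks"
AUTH = ("mdm_user", "mdm_password")

# Hypothetical call: retrieve tasks filtered by specific parameters.
resp = requests.get(
    MDM_BASE_URL,
    params={"assignee": "data.steward", "state": "OPEN"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
tasks = resp.json()

if tasks:
    task_id = tasks[0]["id"]  # assumed field name

    # Hypothetical call: add a comment to the first task.
    requests.post(
        f"{MDM_BASE_URL}/{task_id}/comments",
        json={"text": "Reviewed the merged record, looks good."},
        auth=AUTH,
        timeout=30,
    ).raise_for_status()
```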
Server Operations and Resetting the Environment from Admin Center 🚜

New functionality enables you to stop and start the MDM Server from the Admin Center, reset the environment, and enable automatic synchronization of the lookups from MinIO to locally available folders after every restart. The new options are available on the Server Dashboard tab under Administration in the Admin Center.

Reference Data Management

UX Enhancements 🎨

Publish when creating or editing a record

You can now publish changes immediately after creating or editing a record, instead of first moving the record to publish and then publishing. This is particularly useful for smaller teams with no dedicated roles for approving changes. To publish changes from the create or edit dialog, select Publish or choose Publish from the actions menu.

Timepicker added for datetime attributes ⏲️

Use the timepicker to select a time more easily when creating or editing a record. The following elements are supported:
- Hours (either 12-hour or 24-hour clock)
- Minutes
- Seconds
- AM/PM switch

Revert single record change ⏺️

For more flexibility when editing, you can undo a change made to a specific attribute value and restore it to the last published state. To do this, in the Edit dialog, select the orange dot to compare the current and the published value, then choose Revert to published. In addition, reverting all changes made to a record is now called Restore instead of Undo.

Checkbox for Boolean values

Attributes of Boolean type can now also be set using a checkbox.

Other improvements ↗️
- Edited records are consistently labeled with an orange dot when viewing record details.
- Information about record validity is clearly visible: valid records are marked with a green tick and invalid records with a red exclamation mark.
- We have renamed the following elements:
  - The Confirmed viewing mode is now called Published. In the RDM REST API, while Published is the preferred term, you can continue using Confirmed interchangeably.
  - The Waiting for confirmation state in workflows is now called Waiting for publishing.
  - When comparing the current and the published version of an edited record, Old value and Original value are now called Edited value and Published value.

Improved record validation ✔️

You can now schedule a full validation of all records that runs when you restart the application. Previously, full validation was performed when RDM was first started as well as on every restart. To schedule it, go to the RDM Admin Console and select Schedule revalidation, then restart the application.

Records are now validated at application restart only if a previous validation failed on a particular table or the table structure was modified, which helps speed up the startup time. If you change your mind before restarting RDM, you can cancel the revalidation after you schedule it by selecting Disable revalidation on the same screen.

Fixed permissions editable in RDM Web App

If your RDM permissions are provided using the configuration model, you can now edit them directly in the web application. Once you make any changes to permissions, this custom configuration is applied instead of the roles defined in the configuration model.

And that's all for the 14.5 updates, let us know what you think of them in the comments below 👇

Related products: MDM, RDM

14.5 Updates: ONE Data 🟪

Hi all! In this post, I'll cover the latest features and updates for ONE Data in 14.5 ⚡ If you have missed the previous posts on the 14.5 updates and new features, you can check them out below 👇 Let's get into it!

Data Deduplication ♊

You can now deduplicate your datasets to easily create managed reference data. Once your reference data is ready, use it in DQ rules to continuously improve the quality of your data.

How?
- Start by opening the catalog item from which you want to extract reference data, then expand the three dots menu for a particular attribute and select Create reference table.
- Choose your deduplication key (one attribute or a combination of several) and select any additional attributes that help you better describe your data. The table will also contain a frequency attribute, which stores the number of occurrences of each record.
- Review and update your data as needed and apply it in DQ rules.

DQ metadata automatically updated 🔁

DQ metadata used in DQ filters is automatically updated each time you run a DQ evaluation and each time you edit a record (except for bulk edits). This ensures you're working with the latest DQ results available, which in turn facilitates the process of data remediation. It also allows you to use DQ filters on any ONE Data table, not only those created by importing invalid records from an existing catalog item.

If you edit one of the filtered records, it is validated on the fly and removed from your filtering results if it no longer fits the criteria. Additionally, a warning is displayed, informing you that the overall quality of your table might be outdated.

UX improvements 🎨

Changes to the Multi-edit values option

The Multi-edit values option is now called Bulk edit. To help prevent unwanted edits, you can see how many records are affected by your action as well as whether you are editing an entire attribute or a selection of records (filtered or not). If you are editing a selection of records, you can also easily select which attribute to edit without having to select the records again.

Support for copy-pasting

Copy and paste cells for faster inline data editing. Pasted cells are evaluated on the fly, allowing you to quickly spot issues in the data. When copying data between attributes of different types, cells are converted to the corresponding data type if possible; otherwise your action is ignored. In addition, pasting cells overwrites any existing data and automatically adds additional rows if needed.

Improved support for Microsoft Excel files

When importing a Microsoft Excel file with more than one sheet, you can now choose which sheet to import using the Select sheet option in the file import configuration; otherwise, the first sheet in the file is selected.

Improved table creation

Creating an empty table now produces a template that you can further edit. Rename the table and modify the table structure by adding, removing, or renaming attributes, then start entering your data. We have also reorganized the table creation menu: selecting Create table in ONE Data now instantly generates a new table.

Easier attribute and table renaming

We have made it easier to rename ONE Data tables and attributes. To edit a name, double-click it in the header, provide a new value, and press Enter to save your changes.

Stay tuned for more!

Related products: Data Governance, Data Quality

14.5 Updates: ONE 🟣

Hi everyone! This week, we'll continue covering the latest features and updates in 14.5 ⚡ Today on the menu we have the ONE updates and features in 14.5. Let's dive in 🤿

DQ Firewalls 🧯

DQ Firewalls allow you to apply data quality rules to your data using API calls, specifically the evaluation APIs provided by the new DQF Service. Both GraphQL and REST options are available.

This lets you maintain one central rule library and use Ataccama data quality evaluation rules on your data in real time, directly in the data stream. For example, if you have an ETL pipeline in Python that processes data and you want to make sure it filters out invalid records, you can define the DQ rule in ONE and have the pipeline call the DQF endpoint for each record (or batch of records); the records are then split up by their validity.

DQF Service

With 14.5 we have a new DQF Service that handles rule debugging in rules and in monitoring projects, allows live data quality validation in ONE Data, and powers the new DQ firewall feature. The service provides two types of API:
- Management APIs: These APIs configure the service itself. They are primarily called by the Metadata Management Module (MMM) to create a new DQ firewall, get DQ firewall statistics (such as average execution time, number of calls, and pass/fail ratio), configure global default authentication, and perform ad hoc rule evaluation.
- Evaluation APIs: APIs that accept actual data and return the results of DQ evaluation, that is, the calls used for the DQ firewall feature. They are always bound to a specific firewall configuration, which also defines authentication.
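To make the pipeline scenario above more concrete, here is a minimal Python sketch of what calling a DQ firewall's REST evaluation endpoint from an ETL step might look like. The endpoint URL, the firewall name, the request and response shapes, and the authorization header are assumptions made for illustration; only the existence of REST evaluation APIs bound to a firewall configuration comes from the release notes, so check the DQF Service documentation for the actual contract.

```python
import requests

# Assumed firewall evaluation endpoint and credentials; replace with your own.
DQF_ENDPOINT = "https://one.example.com/dqf/firewalls/email-check/evaluate"
API_KEY = "<firewall-api-key>"

def split_by_validity(records):
    """Send a batch of records to the DQ firewall and split them by validity.

    Assumes the firewall returns one result per input record with a boolean
    'valid' flag; the real response format is defined by the DQF Service.
    """
    response = requests.post(
        DQF_ENDPOINT,
        json={"records": records},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    results = response.json()["results"]

    valid, invalid = [], []
    for record, result in zip(records, results):
        (valid if result.get("valid") else invalid).append(record)
    return valid, invalid

# Example ETL step: keep only records that pass the firewall.
batch = [{"email": "jane@example.com"}, {"email": "not-an-email"}]
good, bad = split_by_validity(batch)
print(f"{len(good)} valid, {len(bad)} rejected")
```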
Freshness Checks added to Data Observability ✅

Freshness is an essential factor in good data quality: it indicates how often data is updated at the source. You can now set thresholds for freshness either manually or using AI, and the system checks your data source for updates, alerting you when a table is updated late or not at all. On the dashboards, you can see information such as the number of missing updates, the time between the last update and the last check, the expected time between updates, and the detection type (AI or manual).

Home Page 🏡

We now support two additional widgets:
- Getting started with Ataccama: a widget containing links to onboarding sections within our documentation and community webpages.
- Tasks: manage and track your tasks, assign tasks to others, set due dates, and monitor progress.

Additionally, access to landing pages is now managed through a new View page access level. Viewers with roles that include this access level can view landing pages and their content but cannot make any changes.

New default templates for notifications 🔔

Slack, MS Teams, and email notifications now use new default templates. Custom templates from older versions will still be usable after the upgrade, but if you want to make further edits to custom templates, you must convert them to the new template syntax.

Improvements to MANTA lineage integration 🖇️

Thanks to all community members who contributed their time and feedback here, we have updated how you import and explore lineage metadata from MANTA in ONE. Previously, browsing lineage in ONE required a direct connection to MANTA. With 14.5, you first generate a compatible snapshot of lineage metadata in MANTA and then import and map it to the relevant data sources in ONE. We have also expanded the support for lineage to additional connection types.

We have redesigned the catalog item Lineage tab to make navigation more intuitive and introduced new exploration options, such as searching (or filtering) source or target items, opening the selected item in the MANTA viewer, or viewing transformation details for attributes that are subject to complex transformations.

Group Isolation 🌿

You can now create isolated groups or branches, which consist of an isolated group and its children. When a group is isolated, members of the group or branch are restricted from sharing assets and assigning stewardship to assets outside of their group or branch.

Support for Apache Parquet 👍

We have added support for the following Apache Parquet assets: files, tables, and partitioned tables. When assets other than Parquet files are imported, ONE analyzes the asset and then creates a catalog item with attributes based on the asset.

Support for Microsoft Excel 🔢

You can now work with Microsoft Excel files in ONE! When you import a Microsoft Excel file, each sheet is loaded as a separate catalog item, while the file itself corresponds to a location within your data source. This way, you can see all catalog items imported from the same Microsoft Excel file: check the catalog item Relationships or Overview tab to see a list of other catalog items from the same file import. You can profile and evaluate catalog items imported from Microsoft Excel files as you would any other catalog item. Previously, Microsoft Excel files could be imported to ONE, but no further processing was possible.

Support for Italian language

You can now change the language of the application to Italian.

Audit log retention changes 🪵

By default, audit logs are now stored in an internal audit database for 90 days in newly installed environments and in upgraded environments without existing audit logs. In environments with existing audit logs, the default retention is extended to one year. You can now change the default retention and cleanup settings for the audit database directly from Audit. You can also set up a schedule for exporting, retention, and cleanup of audit logs in a designated ONE Object Storage (MinIO) bucket, and manage access to the Audit module with newly added identity provider (Keycloak) roles.

Connect to multiple Databricks clusters from a single DPE

You can now connect to multiple Databricks clusters from a single DPE using the new plugin.metastoredatasource.ataccama.one.cluster.<clusterId> property pattern, which you specify in the dpe/etc/application.properties file. To use multiple Databricks clusters, the properties related to each cluster configuration should use the cluster ID associated with that cluster.

Cancel all jobs

A Cancel all jobs button has been added to the DPM Admin Console. When selected, all jobs that are not in the KILLED, FAILURE, or SUCCESS state are cancelled and can't be resumed.

DPE label in the Run DPM Job workflow task

When creating a Run DPM Job workflow task, you have the option to set a DPE label that is then matched against the labels assigned to available DPEs.

What are your thoughts on the latest updates and features? Anything you are excited to try? Let us know in the comments below 👇


14.5 Updates: Data Stories 📊🤝✏️

Hi everyone! This week is an exciting one at Ataccama: version 14.5 is out and about! We will be sharing what's new on the community, starting today with Data Stories. So read on to find out what's new ✨

What are Data Stories?

Data Stories is a data visualization tool designed to create, present, and share complex data as dynamic and self-explanatory reports. Data Stories offers two types of reports: Dashboards and Stories. Both have sections with visualizations and static content. Static content allows you to add text, images, or video, while visualizations allow you to create bar, line, or pie charts, in addition to pivot tables and more.

You can use Data Stories when you need a tool:
- to create data-driven and dynamic presentations where it's easy to showcase results and ideas and display dynamic insights.
- to build dashboards with real-time data, which allows stakeholders to understand what is happening at the moment and quickly find answers to potential questions.

What's new?

📊 Enhanced visualizations
- Create visualizations and share them across multiple reports. Visualizations are now standalone, and each visualization can be used in multiple reports. The updated wizard-led process makes it much easier to build, duplicate, customize, or remove visualizations.
- Streamlined user collaboration using Collections. Collections are folders for storing unlimited visualizations that you can engage with or contribute to. You can also manage multiple visualizations at once using tabs and apply visualization-specific filters.
- Build visualizations on top of DQ results and customize your attributes. DQ results are now available in Data Stories! Use them as out-of-the-box datasets that help you assess data quality across monitoring projects and catalog items.
- Improved customization. You can define both custom metrics and dimensions, pin certain values, and formulate custom queries.

📈 Changes to reports
- Revamped Dashboard creation. You can now create dynamic dashboards with an interactive, responsive grid layout, add visualizations, and enrich them with static widgets such as videos, images, and text.
- Enhanced Stories creation. With 14.5 you can craft compelling, data-driven narratives through slide-based presentations, customize transitions, and incorporate visualizations and static widgets.
- Refreshed UI. We have uplifted the look with updated features such as advanced filters and expanded customization options to make creating reports even easier.

🤝 Sharing and collaboration

You can now control how collections and visualizations are shared through role assignment and ownership. As the collection creator, you are assigned as its owner and can grant or remove roles such as Reader or Editor within the collection. If needed, you can also transfer ownership to another user or group of users. Another improvement: you can now share your reports with your team members and other users to increase collaboration and visibility.

📌 New icon in the Ataccama ONE DQG suite

Data Stories is now seamlessly integrated with ONE GEN2, which allows you to build custom reports on top of data quality information.

⚠️ After the upgrade, your existing reports are migrated. Please check out the documentation to find out more about the migration process.

What do you think of the new updates? Let us know in the comments below 👇
