Hello Team,

Can someone provide insight into a solution for stopping event generation based on a condition? For example: if the MDM system receives fewer than 50K records from the source system for mastering, use events to propagate the changes to the downstream system; otherwise, send a batch extract file. I know that we can delete the events after they are generated, but I'm looking for a way to stop them conditionally.

 

Hi Srini,

I’m trying to better understand your question about conditions and MDM events. From what you described, I understand that your main question concerns MDM output interfaces: Event Handlers, potentially a batch export, and some conditional volume-based logic.

Let me try to explain the principles and some options that you may consider.

MDM data ingestion operates via scheduled batch imports, message stream consumers, or inbound web-service read/write requests (SOAP/REST). If you receive 50K records on input as part of one load (one MDM transaction), either all of them are processed successfully or the entire transaction fails and rolls back.
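To illustrate that all-or-nothing behavior (a minimal Python sketch using a throwaway SQLite table, not the actual Ataccama internals):

```python
import sqlite3

def load_batch(records):
    """One load = one transaction: either every record commits,
    or any error rolls the whole load back."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staged_records (src_id TEXT, payload TEXT)")
    try:
        with conn:  # commits on success, rolls back on any exception
            for rec in records:
                conn.execute(
                    "INSERT INTO staged_records VALUES (?, ?)",
                    (rec["id"], rec["payload"]),
                )
    finally:
        conn.close()
```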

The MDM output flows differ based on the interface used:

  • Event Handler (EH)
  • Batch Export

During the transaction “committing” phase, the Event Handler(s) collect the records that changed so they can be published afterwards. The EH configuration allows filtering on certain event types, entities, attribute value changes, etc. (see https://docs.ataccama.com/mdm/latest/configuration/event-handler.html#filters). These filters don’t allow conditional processing in the sense of switching between different processing logics.

However, once the events are captured by the EH, they are published (after the load transaction completes) by the defined publishers; typically a plan publisher is used. In the publishing components you have all the Ataccama steps from the palette available, so depending on the Processor type you can send the events to your target systems using a data flow (a combination of steps) of your choice. File-based outputs (Text File Writer, Excel Writer) are not recommended with anything other than the Simple Processor, as the file would be continuously truncated and overwritten with each publishing micro-batch (see the sketch below).
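Here is a tiny Python sketch of that truncation pitfall (plain file I/O, not an Ataccama step; the event shape is made up):

```python
from pathlib import Path

out = Path("events_out.csv")

def write_microbatch(events, mode):
    # Mode "w" reopens and truncates the file on every micro-batch,
    # so only the last batch survives; mode "a" would accumulate them.
    with out.open(mode) as f:
        for e in events:
            f.write(f"{e['id']},{e['type']}\n")

write_microbatch([{"id": 1, "type": "UPDATE"}], "w")
write_microbatch([{"id": 2, "type": "INSERT"}], "w")
print(out.read_text())  # only "2,INSERT" remains: the first batch was lost
```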

 

In my opinion, a better approach is to create a sort of outbound “stage” component that you can use to orchestrate your outbound events and their method of publishing. Imagine storing the records in an intermediate set of tables managed and populated by the MDM. I would use a delta export operation or an event handler to populate such an out-stage. Then, using an orchestration workflow (*.ewf), I would manage the publishing logic based on volumes or other conditions (e.g. priority), as sketched below.
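As a rough sketch of the decision logic such a workflow could implement (plain Python against a SQLite table, not an actual *.ewf; the 50K threshold, the out_stage table, and both publishing functions are illustrative assumptions):

```python
import sqlite3

EVENT_THRESHOLD = 50_000  # below this, publish events; otherwise write a batch extract

def publish_as_events(rows):
    """Illustrative stand-in for pushing individual change events downstream."""
    for row in rows:
        print(f"event -> {row}")

def publish_as_batch_extract(rows, path="extract.csv"):
    """Illustrative stand-in for producing a single batch extract file."""
    with open(path, "w") as f:
        for row in rows:
            f.write(",".join(str(v) for v in row) + "\n")

def route_outbound(conn):
    """Decide the publishing channel once per run, based on the staged volume."""
    rows = conn.execute(
        "SELECT id, payload FROM out_stage WHERE published = 0"
    ).fetchall()
    if len(rows) < EVENT_THRESHOLD:
        publish_as_events(rows)
    else:
        publish_as_batch_extract(rows)
    conn.execute("UPDATE out_stage SET published = 1 WHERE published = 0")
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE out_stage (id INTEGER, payload TEXT, published INTEGER DEFAULT 0)"
    )
    conn.executemany(
        "INSERT INTO out_stage (id, payload) VALUES (?, ?)",
        [(i, f"rec-{i}") for i in range(10)],
    )
    conn.commit()
    route_outbound(conn)  # 10 < 50,000, so these go out as events
```

The key design point is that the routing decision is taken once per publishing run, on the staged volume, rather than per event inside the Event Handler.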

Of course, the overall design depends on the specifics of the use case and the related requirements (including NFRs).

 

Let me know if this helped.


Thanks for the response @Pele

