Our incremental load operation is taking a very long time (about 2 hours to process 5-7K records). We have already applied all of the performance tuning recommendations from the documentation. Beyond that, we thought of loading the data from the DW into a PostgreSQL (MDM DB) stage table and then using that stage table as the source for the incremental load. The idea is that when source and target are on the same server, the computation should be faster.
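For clarity, here is a minimal sketch of the stage-table pattern we have in mind: extract the delta from the DW, truncate-and-reload a stage table in the MDM database, then point the incremental load at that stage table. This uses sqlite3 purely as a stand-in for PostgreSQL so the idea is self-contained; the table and column names (`dw_customers`, `stg_customers`) are hypothetical, not our actual schema.

```python
import sqlite3

# Stand-in connection: in the real setup this would be the PostgreSQL MDM DB
# (reached via JDBC from the Ataccama plan, or psycopg2 from a loader script).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dw_customers  (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT);
    CREATE TABLE stg_customers (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT);
""")

# Simulate a batch of delta records extracted from the DW.
dw_rows = [(1, "Alice", "2024-01-01"), (2, "Bob", "2024-01-02")]
conn.executemany("INSERT INTO dw_customers VALUES (?, ?, ?)", dw_rows)

def refresh_stage(conn):
    """Truncate-and-reload the stage table from the DW extract, so the
    incremental load can read source and target on the same server."""
    conn.execute("DELETE FROM stg_customers")
    conn.execute("INSERT INTO stg_customers SELECT * FROM dw_customers")
    conn.commit()

refresh_stage(conn)
staged = conn.execute("SELECT COUNT(*) FROM stg_customers").fetchone()[0]
print(staged)  # prints 2: both delta rows landed in the stage table
```

The incremental load job would then use `stg_customers` (on the same PostgreSQL server as the MDM target) as its source, instead of reading across the network from the DW.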
However, this direct load job is failing with the error below. Kindly help us fix it, or please recommend another approach.
com.ataccama.nme.core.NmeExecutionException: Plan "C:\Users\EYKU4F\Ataccama\workspace\OmniChannelMDM_Latest\Files\engine\load\EPRISM_stg_eprism.comp" execution failed: runtime.error.jdbc_reader_error - null
com.ataccama.dqc.model.environment.AbortedExecutionException
    at com.ataccama.dqc.model.elements.data.flow.RecordQueue.putBatch(RecordQueue.java:173)
    at com.ataccama.dqc.model.elements.data.flow.RecordQueue.access$100(RecordQueue.java:35)
    at com.ataccama.dqc.model.elements.data.flow.RecordQueue$InputEndpoint.putBatch(RecordQueue.java:236)
    at com.ataccama.dqc.processor.internal.monitoring.MonitoringQueueInputPoint.putBatch(MonitoringQueueInputPoint.java:35)
    at com.ataccama.dqc.model.elements.data.flow.QueueBatcher.flush(QueueBatcher.java:88)
    at com.ataccama.dqc.tasks.io.jdbc.read.JdbcReaderInstance.run(JdbcReaderInstance.java:162)
    at com.ataccama.dqc.processor.internal.runner.ComplexStepNode.runNode(ComplexStepNode.java:69)
    at com.ataccama.dqc.processor.internal.runner.RunnableNode.run(RunnableNode.java:28)
    at com.ataccama.dqc.commons.threads.AsyncExecutor$RunningTask.run(AsyncExecutor.java:135)