Intemic works with structured datasets that contain column headers and rows of elements or timestamps, such as the Parquet format. Data from sources that produce these formats (SCADA systems, ERPs, MESs, or anything you can export as CSV or Excel files) can be imported into the Intemic platform.
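As a minimal sketch of the header-plus-rows layout Intemic expects, the snippet below parses a small CSV sample with Python's standard library. The column names and values are illustrative, not taken from the platform:

```python
import csv
import io

# A structured dataset: one header row followed by rows of values.
# (Sample sensor names and readings are hypothetical.)
raw = """timestamp,sensor_id,temperature_c
2024-01-01T00:00:00,A1,21.5
2024-01-01T00:01:00,A1,21.7
2024-01-01T00:02:00,B2,19.8
"""

# DictReader maps each data row onto the column headers,
# mirroring the header-plus-rows structure described above.
rows = list(csv.DictReader(io.StringIO(raw)))
print(rows[0]["sensor_id"])  # value from the first data row
print(len(rows))             # number of data rows
```

The same structure carries over to Parquet, which stores the column names and types alongside the data.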
The Intemic platform offers four different ways to ingest data:
1. Uploading datasets from local files (Excel, CSV, etc.)
2. Using existing datasets
3. Uploading new batch data
4. Connecting to real-time online data sources
The ETL (Extract, Transform, Load) process runs in Databricks, which uses Apache Spark for more efficient data processing. When a dataset is uploaded, some of its column headers are shown in the node to give a brief idea of what data you are managing, but you can also inspect the full dataset whenever needed.
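The three ETL steps can be sketched in plain Python. This is an offline illustration only (pure stdlib, hypothetical column names and values); on the platform the same steps run on Spark inside Databricks:

```python
import csv
import io

# Extract: read a raw CSV export (sample data is illustrative).
raw = """timestamp,machine,output_units
2024-01-01T00:00:00,press_1,120
2024-01-01T01:00:00,press_1,0
2024-01-01T02:00:00,press_2,98
"""
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast the numeric column and drop zero-output rows.
cleaned = [
    {**r, "output_units": int(r["output_units"])}
    for r in records
    if int(r["output_units"]) > 0
]

# Load: write the pretreated dataset back out (here to an in-memory
# buffer; Databricks would typically write Parquet to cloud storage).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["timestamp", "machine", "output_units"])
writer.writeheader()
writer.writerows(cleaned)
print(len(cleaned))  # rows surviving the filter
```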
The filtered and pretreated datasets that you generate in your diagrams can be referenced and reused as many times as you want in other data pipelines with different purposes.
If you have any questions or need assistance, please don't hesitate to contact our support team at info@intemic.com.
Through APIs and industry-specific protocols such as OPC-UA, the platform can connect to online data sources: streams that arrive continuously from sensors, or data lakes that are updated automatically.
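For the API side of such a connection, a minimal polling sketch with Python's standard library is shown below. The endpoint URL and the JSON field names (`measurements`, `ts`, `val`) are hypothetical, and an OPC-UA connection would instead require a dedicated client library; this only illustrates the general fetch-and-parse pattern:

```python
import json
import urllib.request

# Hypothetical endpoint; replace with your own data source's URL.
API_URL = "https://example.com/api/v1/measurements"

def parse_payload(payload: str) -> list:
    """Turn a JSON API response into header-aligned rows."""
    data = json.loads(payload)
    return [
        {"timestamp": m["ts"], "value": m["val"]}
        for m in data["measurements"]
    ]

def poll_once(url: str = API_URL) -> list:
    """Fetch one batch from the endpoint (requires network access)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_payload(resp.read().decode("utf-8"))

# Offline demonstration with a sample payload (illustrative values).
sample = '{"measurements": [{"ts": "2024-01-01T00:00:00", "val": 21.5}]}'
rows = parse_payload(sample)
print(rows[0]["value"])
```

In practice such a poll would run on a schedule (or be replaced by a push subscription), so the connected dataset stays continuously up to date.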