Data preparation is critical for improving model accuracy. However, data scientists often work independently, spending most of their time writing data-preparation code with no support for automatically learning from each other's work. To address this challenge, we developed KGFarm, a holistic platform that automates data preparation using machine learning models trained on a knowledge graph capturing the semantics of data science artifacts, including datasets and pipeline scripts. KGFarm integrates seamlessly with existing data science platforms, enabling scientific communities to automatically discover and learn from each other's work.
Unleashing the power of Automated Data Preparation
Try the sample KGFarm Colab Notebook for a quick hands-on! Alternatively, run setup.py to set up the demo in a local environment.
- Install dependencies
pip install -r requirements.txt
- Start the Stardog engine
stardog-admin server start
- Run KGFarm's KG Augmentor to augment the LiDS graph (for entity extraction and exploration)
python kg_augmentor/augment_LiDS.py -db Database_name
- Start using KGFarm APIs (check out this use case; see the sketch after this list):
- Automated Data Cleaning
- Automated Data Transformation
- Automated Feature Selection
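
Below is a minimal sketch of what a KGFarm-driven preparation session for these three tasks might look like. The import path `kgfarm`, the `KGFarm` constructor, and the method names `recommend_cleaning_operations`, `recommend_transformations`, `recommend_features`, and `apply` are illustrative assumptions, not the documented API; please refer to KGFarm_tutorial.ipynb for the actual entry points.

```python
# Illustrative sketch only: the import path, class name, and method names are
# assumptions about the KGFarm API, not its documented interface.
import pandas as pd
from kgfarm import KGFarm  # hypothetical import path

# Connect to the augmented LiDS graph served by Stardog
kgfarm = KGFarm()  # hypothetical constructor; may take connection details

df = pd.read_csv('data.csv')  # any tabular dataset

# 1. Automated Data Cleaning: request and apply cleaning recommendations
cleaning_ops = kgfarm.recommend_cleaning_operations(df)       # hypothetical
df_clean = kgfarm.apply(df, cleaning_ops)                     # hypothetical

# 2. Automated Data Transformation
transformations = kgfarm.recommend_transformations(df_clean)  # hypothetical
df_transformed = kgfarm.apply(df_clean, transformations)      # hypothetical

# 3. Automated Feature Selection
features = kgfarm.recommend_features(df_transformed, target='label')  # hypothetical
X, y = df_transformed[features], df_transformed['label']
```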
KGFarm APIs are designed for seamless integration with conventional ML workflows. For performing data preparation with KGFarm as a human-in-the-loop, please refer to KGFarm_tutorial.ipynb. For full automation on profiled data, try KGFarm's Pipeline Generator (see the example below).
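
To illustrate how such a human-in-the-loop preparation step can slot into a conventional scikit-learn workflow, here is a hedged sketch. The `prepare_with_kgfarm` helper, the CSV file name, and the `label` column are hypothetical placeholders standing in for whichever KGFarm recommendation calls the tutorial notebook uses; only the scikit-learn part reflects a real, documented API.

```python
# Sketch of slotting KGFarm-driven preparation into an ordinary scikit-learn
# workflow. `prepare_with_kgfarm` is hypothetical glue code; the scikit-learn
# calls are standard.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def prepare_with_kgfarm(df: pd.DataFrame, target: str) -> pd.DataFrame:
    """Hypothetical wrapper: request cleaning, transformation, and
    feature-selection recommendations from KGFarm, review them
    (human-in-the-loop), and return the prepared dataframe."""
    # ... KGFarm recommendation calls would go here ...
    return df  # placeholder: this stub returns the data unchanged


df = pd.read_csv('profiled_dataset.csv')  # assumed profiled dataset with a 'label' column
df_prepared = prepare_with_kgfarm(df, target='label')

X = df_prepared.drop(columns=['label'])
y = df_prepared['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print('accuracy:', accuracy_score(y_test, model.predict(X_test)))
```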
- Best Poster Award @VLDB SS, 2023, Cluj-Napoca, Romania 🇷🇴
- Best Poster Award @DSDS Workshop, 2022, Montreal, Canada 🇨🇦
For any questions, contact us at: shubham.vashisth@concordia.ca, niki.monjazeb@concordia.ca, antonio.cavalcante@borealisai.com, philippe.carrier@concordia.ca, khaled.ammar@borealisai.com, essam.mansour@concordia.ca