Our open-source CM automation language and CK playground enabled the first mass-scale community submission of more than 10,000 MLPerf inference benchmarking results (representing more than 90% of all v3.1 submissions) across diverse models, data sets, software and hardware from different vendors via the cTuning foundation. See the HPCwire article for more details, and get in touch via our public Discord server if you want to automate and optimize your future MLPerf submissions using our open-source technology!
Collective Knowledge (CK) is an open-source, non-intrusive and technology-agnostic toolset to facilitate reproducible research and automate development, benchmarking, optimization, comparison and deployment of Pareto-efficient AI/ML Systems across diverse and rapidly evolving models, data sets, software and hardware from different vendors and users.
Collective Knowledge v3 has been developed from scratch by the MLCommons Task Force on Automation and Reproducibility, the cTuning foundation, cKnowledge.org and the community, and includes the following sub-projects:
- Collective Mind automation language (CM) - a simple automation language that helps implement modular, portable and technology-agnostic benchmarks and applications with a common API that can automatically plug in diverse and rapidly evolving models, data sets, software and hardware from different vendors and users.
- CM scripts - a database of portable, reusable and technology-agnostic automations to modularize benchmarks, software projects and AI/ML Systems.
- Modular Inference Library (MIL) - a universal and modular C++ implementation of MLPerf inference benchmarks.
- Collective Knowledge Playground - an open platform to benchmark and optimize AI and ML Systems via community challenges.
See the related ACM REP'23 keynote, ACM Tech Talk and MLPerf submitter orientation to learn more about our open-source technology and long-term vision.
Join our public Discord server to learn how to run and extend MLPerf benchmarks, participate in future MLPerf submissions, automate reproducibility initiatives at ACM/IEEE/NeurIPS conferences and co-design efficient AI Systems.
- The CM automation language makes it easier to reproduce experiments from accepted ACM/IEEE MICRO'23 papers
- The CK/CM authors will give a tutorial on the CM automation language and the CK playground at IISWC'23
- The CM automation language and the CK playground will help students run the MLPerf inference benchmark at the Student Cluster Competition at SuperComputing'23
More events to come soon!
python3 -m pip install cmind
# restart bash to add the cm and cmr binaries to PATH
# pull the repository with portable CM scripts from MLCommons
cm pull repo mlcommons@ck
# run a hello-world CM script via the canonical interface ...
cm run script --tags=print,python,hello-world
# ... or via the shorter cmr alias
cmr "print python hello-world"
This CM script is a simple wrapper around native scripts and tools, described by a declarative YAML configuration file that specifies its inputs, environment variables and dependencies on other portable and shared CM scripts:
alias: print-hello-world-py
uid: d83274c7eb754d90
automation_alias: script
automation_uid: 5b4e0237da074764
deps:
- tags: detect,os
- tags: get,sys-utils-cm
- names:
  - python
  tags: get,python3
tags:
- print
- hello-world
- python
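Beyond this YAML description, a CM script can also provide an optional customize.py with preprocess and postprocess hooks that CM calls around the native script. The sketch below is illustrative rather than the actual code of this hello-world example; it only assumes the standard CM script interface (hooks receive a dictionary with the resolved environment and return a dictionary with a 'return' code), and the environment variable name is hypothetical:

# customize.py -- illustrative sketch, not the actual hello-world implementation

def preprocess(i):
    # 'env' holds environment variables assembled from this script
    # and its resolved dependencies (e.g. the detected python3)
    env = i['env']
    env['CM_HELLO_WORLD_GREETING'] = 'Hello world!'   # hypothetical variable
    return {'return': 0}

def postprocess(i):
    # called after the wrapped native script finishes;
    # a non-zero 'return' signals an error to CM
    return {'return': 0}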
Our goal is to let the community start using CM within minutes!
- CM automation for the new MLPerf submitters
- MLPerf inference automation (see the illustrative sketch after this list)
- Visualization of MLPerf results
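As an illustration of how the MLPerf inference automation above looks from the user side, here is a hedged sketch using the same CM Python API; the specific tags and input keys (model, backend, device, scenario) are assumptions that may vary across CM versions, so please consult the documentation links above for the authoritative commands:

import cmind

# Illustrative only: the tags and input keys below are assumptions
# and may differ across CM versions -- see the MLPerf automation docs.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'run-mlperf,inference,_find-performance',
                  'model': 'retinanet',
                  'backend': 'onnxruntime',
                  'device': 'cpu',
                  'scenario': 'Offline',
                  'quiet': True})
if r['return'] > 0:
    print(r['error'])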
We invite the community to participate in collaborative benchmarking and optimization of AI/ML systems:
- Community challenges (reproducibility, extension, benchmarking, optimization)
- Shared benchmarking results for AI/ML Systems (performance, accuracy, power consumption, costs)
- Leaderboard
- CM automation to reproduce results from ACM/IEEE MICRO'23 papers
- CM automation to support Student Cluster Competition at SuperComputing'23
- CM automation to reproduce an IPOL paper
Collective Knowledge Technology v3 (including Collective Mind automation language and Collective Knowledge Playground) was developed from scratch by Grigori Fursin and Arjun Suresh in 2022-2023 within the MLCommons Task Force on Automation and Reproducibility and with many great contributions from the community.
This project is supported by MLCommons, cTuning foundation, cKnowledge.org, and individual contributors. We thank HiPEAC and OctoML for sponsoring initial development.