Development of explainability module #93

Closed · 7 tasks
technocreep opened this issue Oct 3, 2023 · 0 comments

Assignees: leostre, PvtKaefsky
Labels: enhancement (New feature or request)

technocreep (Collaborator) commented Oct 3, 2023

At the moment we need a tool that explains the results obtained during an experiment. The key point is that our time series classification methods use generators/bases that transform the initial data into valuable features. A few questions arise:

  1. What features have more influence on classification?
  2. How do features transform from one node to another?
  3. How to visually represent the process of time series transformation?

The module would probably be called like this:

industrial = FedotIndustrial(config)
industrial.fit(train_data)
industrial.predict(test_data)

industrial.explain(some_params)  # -> visualisations, plots, diagrams
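A hypothetical sketch of such a call, assuming explain() accepts the name of an explanation method and the data to explain; the method and samples parameters and the result object are illustrative only, not an existing FedotIndustrial API:

explanation = industrial.explain(method="shap", samples=test_data)
explanation.plot()  # e.g. per-node feature-importance charts

Returning a plot-capable result object would keep the visualisation logic out of the main facade.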

From my point of view, the following steps should be taken to accomplish the task:

  1. Read and comprehend the existing documentation, papers, and articles related to explainability in machine learning models.
  2. Identify the algorithms and techniques that will be employed to achieve the explainability objectives.
  3. Sketch out the high-level architecture and workflow of the module, considering aspects such as input/output formats, integration with existing systems, and scalability.
  4. Code the core functionality of the explanation techniques, ensuring they produce interpretable and comprehensible results (see the sketches after this list). For example:
     • Implement SHAP values to identify feature contributions in a black-box model.
     • Develop LIME techniques to extract local explanations for predictions.
     • Write unit tests to verify the correctness and accuracy of the implemented components.
  5. Develop adapters or API endpoints to ensure seamless communication and compatibility between the explainability module and the existing systems.
  6. Design and execute test cases to cover different scenarios and edge cases.
  7. Document the purpose, functionality, and usage guidelines of the explainability module, and provide examples and tutorials.
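A minimal sketch of the SHAP bullet in step 4, assuming a scikit-learn-style classifier trained on tabular features produced by a generator; X_features, y, and the model here are toy placeholders, not FedotIndustrial code:

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# toy stand-in for generator output: 100 series, 8 extracted features
rng = np.random.default_rng(0)
X_features = rng.normal(size=(100, 8))
y = (X_features[:, 0] + X_features[:, 3] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_features, y)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_features)

# older shap returns a list of arrays per class, newer a 3-D array; take class 1
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# global importance: mean |SHAP| per feature answers question 1 above
importance = np.abs(vals).mean(axis=0)
print(np.argsort(importance)[::-1])  # feature indices, most influential first

And a sketch of the LIME bullet, reusing the same toy model; the feature and class names are hypothetical:

from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    X_features,
    feature_names=[f"feat_{i}" for i in range(X_features.shape[1])],
    class_names=["class_0", "class_1"],
)

# local explanation for one sample: which features push its prediction and how
exp = lime_explainer.explain_instance(
    X_features[0], model.predict_proba, num_features=5
)
print(exp.as_list())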

The estimated time to reach an MVP is about 30 days.

technocreep added the enhancement label on Oct 3, 2023
v1docq assigned leostre and PvtKaefsky and unassigned VabeNIZ on Apr 23, 2024