Welcome to EvalAI’s documentation!
EvalAI is an open-source platform for evaluating and comparing machine learning (ML) and artificial intelligence (AI) algorithms at scale.
It provides the research community with a scalable way to meet the critical need of evaluating ML models and agents acting in an environment, whether against ground-truth annotations or with a human in the loop.
Contents:
- Introduction
- Installation
- Host challenge
- Challenge configuration
- Writing Evaluation Script
- Approve a challenge (for forked version)
- Participate in a challenge
- Make Submission Public
- Make Submission Private
- Pull Request
- Contributing guidelines
- Architecture
- Architectural decisions
- Directory structure
- Submission
- Migrations
- Cite
- Frequently Asked Questions
- Glossary