Evaluation Table Models

class paperswithcode.models.evaluation.Metric(*, id: str, name: str, description: str, is_loss: bool)[source]

Metric object.

Metric used for evaluation.

id

Metric ID.

Type

str

name

Metric name.

Type

str

description

Metric description.

Type

str

is_loss

Whether this is a loss metric.

Type

bool

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
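
A minimal construction sketch, assuming Metric is importable from paperswithcode.models.evaluation as documented above; in practice Metric instances are typically returned by the API client rather than built by hand, and the field values below are illustrative only.

    from paperswithcode.models.evaluation import Metric

    # Illustrative values; a real Metric normally comes back from the API.
    metric = Metric(
        id="accuracy",
        name="Accuracy",
        description="Top-1 classification accuracy.",
        is_loss=False,
    )
    print(metric.name, metric.is_loss)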

class paperswithcode.models.evaluation.MetricCreateRequest(*, name: str, description: str, is_loss: bool)[source]

Metric create request object.

Request for creating a metric used for evaluation.

name

Metric name.

Type

str

description

Metric description.

Type

str

is_loss

Whether this is a loss metric.

Type

bool

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
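
A sketch of building a create request and of the ValidationError behaviour noted above, assuming (as the constructor docstring suggests) that this is a pydantic model and that ValidationError is pydantic's; the field values are illustrative.

    from pydantic import ValidationError
    from paperswithcode.models.evaluation import MetricCreateRequest

    # All three fields are required.
    request = MetricCreateRequest(
        name="F1",
        description="Harmonic mean of precision and recall.",
        is_loss=False,
    )

    # Omitting required fields raises ValidationError, as documented above.
    try:
        MetricCreateRequest(name="F1")
    except ValidationError as err:
        print(err)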

class paperswithcode.models.evaluation.MetricUpdateRequest(*, name: str = None, description: str = None, is_loss: bool = None)[source]

Metric update request object.

Request for updating a metric used for evaluation.

name

Metric name.

Type

str, optional

description

Metric description.

Type

str, optional

is_loss

Whether this is a loss metric.

Type

bool, optional

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
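
Since every field is optional, a partial update carrying only the fields to change is valid. A sketch, assuming a pydantic v1-style .dict() method for serialization (an assumption not stated on this page):

    from paperswithcode.models.evaluation import MetricUpdateRequest

    # Only the description is updated; name and is_loss stay untouched.
    update = MetricUpdateRequest(description="Updated metric description.")

    # Drops the fields that were not set (assumes a pydantic v1-style model).
    print(update.dict(exclude_none=True))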

class paperswithcode.models.evaluation.Result(*, id: str, best_rank: int = None, metrics: dict, methodology: str, uses_additional_data: bool, paper: str = None, best_metric: str = None)[source]

Evaluation table row object.

id

Result ID.

Type

str

best_rank

Best rank of the row.

Type

int, optional

metrics

Dictionary of metrics and metric values.

Type

dict

methodology

Methodology used for this implementation.

Type

str

uses_additional_data

Whether this evaluation uses additional data not provided in the dataset used for the other evaluations.

Type

bool

paper

Paper describing the evaluation.

Type

str, optional

best_metric

Name of the best metric.

Type

str, optional

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
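
An illustrative construction of one evaluation table row, assuming the import path documented above; the id, paper, and metric values are placeholders, and since metrics is only typed as dict the exact value types depend on the API response.

    from paperswithcode.models.evaluation import Result

    row = Result(
        id="example-result-id",          # placeholder id
        best_rank=1,
        metrics={"Accuracy": "95.3"},    # value types depend on the API
        methodology="ResNet-50",
        uses_additional_data=False,
        paper="example-paper-id",        # placeholder paper id
        best_metric="Accuracy",
    )
    print(row.best_metric, row.metrics[row.best_metric])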

class paperswithcode.models.evaluation.ResultCreateRequest(*, metrics: dict, methodology: str, uses_additional_data: bool, paper: str = None)[source]

Evaluation table row create request object.

metrics

Dictionary of metrics and metric values.

Type

dict

methodology

Methodology used for this implementation.

Type

str

uses_additional_data

Whether this evaluation uses additional data not provided in the dataset used for the other evaluations.

Type

bool

paper

Paper describing the evaluation.

Type

str, optional

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
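
A sketch of a row create request with illustrative values; paper may be omitted since it is optional and defaults to None.

    from paperswithcode.models.evaluation import ResultCreateRequest

    request = ResultCreateRequest(
        metrics={"Accuracy": "96.1"},
        methodology="ResNet-101",
        uses_additional_data=True,
        # paper is optional and defaults to None
    )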

class paperswithcode.models.evaluation.ResultUpdateRequest(*, metrics: dict = None, methodology: str = None, uses_additional_data: bool = None, paper: str = None)[source]

Evaluation table row update request object.

metrics

Dictionary of metrics and metric values.

Type

dict, optional

methodology

Methodology used for this implementation.

Type

str, optional

uses_additional_data

Whether this evaluation uses additional data not provided in the dataset used for the other evaluations.

Type

bool, optional

paper

Paper describing the evaluation.

Type

str, optional

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
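
A sketch of a partial row update with illustrative values; all fields are optional, so everything not passed is left unset.

    from paperswithcode.models.evaluation import ResultUpdateRequest

    # Only the metrics dictionary is replaced; methodology, paper and
    # uses_additional_data are left out of the request.
    update = ResultUpdateRequest(metrics={"Accuracy": "96.4"})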

class paperswithcode.models.evaluation.EvaluationTable(*, id: str, task: str, dataset: str)[source]

Evaluation table object.

id

Evaluation table ID.

Type

str

task

ID of the task used in evaluation.

Type

str

dataset

ID of the dataset used in evaluation.

Type

str

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
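
A construction sketch; the id, task, and dataset slugs below are illustrative and not guaranteed to exist on Papers with Code.

    from paperswithcode.models.evaluation import EvaluationTable

    table = EvaluationTable(
        id="image-classification-on-imagenet",  # illustrative slug
        task="image-classification",
        dataset="imagenet",
    )
    print(table.task, table.dataset)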

class paperswithcode.models.evaluation.EvaluationTables(*, count: int, next_page: int = None, previous_page: int = None, results: List[paperswithcode.models.evaluation.table.EvaluationTable])[source]

Object representing a single page of a paginated list of evaluation tables.

count

Number of elements matching the query.

Type

int

next_page

Number of the next page.

Type

int, optional

previous_page

Number of the previous page.

Type

int, optional

results

List of evaluation tables on this page.

Type

List[EvaluationTable]

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
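
A sketch of iterating over one page of results, built by hand here for illustration; in practice EvaluationTables pages are returned by the API client, and the entries below are placeholders.

    from paperswithcode.models.evaluation import EvaluationTable, EvaluationTables

    page = EvaluationTables(
        count=2,
        next_page=None,
        previous_page=None,
        results=[
            EvaluationTable(id="table-1", task="image-classification", dataset="imagenet"),
            EvaluationTable(id="table-2", task="object-detection", dataset="coco"),
        ],
    )
    for table in page.results:
        print(table.id, table.task, table.dataset)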

class paperswithcode.models.evaluation.EvaluationTableCreateRequest(*, task: str, dataset: str)[source]

Evaluation table create request object.

task

ID of the task used in evaluation.

Type

str

dataset

ID of the dataset used in evaluation.

Type

str

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
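
A sketch of a table create request; the task and dataset IDs are illustrative.

    from paperswithcode.models.evaluation import EvaluationTableCreateRequest

    request = EvaluationTableCreateRequest(
        task="image-classification",  # illustrative task id
        dataset="imagenet",           # illustrative dataset id
    )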

class paperswithcode.models.evaluation.EvaluationTableUpdateRequest(*, task: str = None, dataset: str = None)[source]

Evaluation table update request object.

task

ID of the task used in evaluation.

Type

str, optional

dataset

ID of the dataset used in evaluation.

Type

str, optional

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
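
A sketch of a partial table update; both fields are optional, so only the dataset is changed here, and the value is illustrative.

    from paperswithcode.models.evaluation import EvaluationTableUpdateRequest

    update = EvaluationTableUpdateRequest(dataset="imagenet-21k")  # illustrative dataset id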