Evaluation Table Models¶
-
class
paperswithcode.models.evaluation.
Metric
(*, id: str, name: str, description: str, is_loss: bool)[source]¶ Metric object.
Metric used for evaluation.
-
id
¶ Metric id.
- Type
str
-
name
¶ Metric name.
- Type
str
-
description
¶ Metric description.
- Type
str
-
is_loss
¶ Whether this is a loss metric.
- Type
bool
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
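For orientation, a minimal sketch of constructing such a metric, using a plain stdlib dataclass that mirrors the documented fields; the real class is a pydantic model that also validates input types on construction, and the `id` value here is hypothetical:

```python
# Illustrative stand-in for Metric, mirroring the documented fields.
# Like the real class, it is constructed with keyword arguments.
from dataclasses import dataclass

@dataclass
class MetricSketch:
    id: str
    name: str
    description: str
    is_loss: bool

m = MetricSketch(
    id="accuracy-top1",          # hypothetical metric id
    name="Accuracy",
    description="Top-1 classification accuracy.",
    is_loss=False,               # higher is better, so not a loss
)
print(m.name)
```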
-
-
class
paperswithcode.models.evaluation.
MetricCreateRequest
(*, name: str, description: str, is_loss: bool)[source]¶ Metric object.
Metric used for evaluation.
-
name
¶ Metric name.
- Type
str
-
description
¶ Metric description.
- Type
str
-
is_loss
¶ Whether this is a loss metric.
- Type
bool
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
MetricUpdateRequest
(*, name: str = None, description: str = None, is_loss: bool = None)[source]¶ Metric object.
Metric used for evaluation.
-
name
¶ Metric name.
- Type
str, optional
-
description
¶ Metric description.
- Type
str, optional
-
is_loss
¶ Whether this is a loss metric.
- Type
bool, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
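Since every field defaults to None, an update request can carry only the fields that should change; a rough stdlib sketch of that shape (the description value is hypothetical):

```python
# Rough stand-in for MetricUpdateRequest: every field is optional and
# defaults to None, so an update can name only the fields to change.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricUpdateSketch:
    name: Optional[str] = None
    description: Optional[str] = None
    is_loss: Optional[bool] = None

req = MetricUpdateSketch(description="Mean top-1 accuracy.")
# name and is_loss stay None, i.e. "leave unchanged" on the server side.
print(req.name, req.is_loss)
```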
-
-
class
paperswithcode.models.evaluation.
Result
(*, id: str, best_rank: int = None, metrics: dict, methodology: str, uses_additional_data: bool, paper: str = None, best_metric: str = None, evaluated_on: datetime.datetime = None, external_source_url: str = None)[source]¶ Evaluation table row object.
-
id
¶ Result id.
- Type
str
-
best_rank
¶ Best rank of the row.
- Type
int, optional
-
metrics
¶ Dictionary of metrics and metric values.
- Type
dict
-
methodology
¶ Methodology used for this implementation.
- Type
str
-
uses_additional_data
¶ Whether this evaluation uses additional data not provided in the dataset used for other evaluations.
- Type
bool
-
paper
¶ Paper describing the evaluation.
- Type
str, optional
-
best_metric
¶ Name of the best metric.
- Type
str, optional
-
evaluated_on
¶ Date of the result evaluation.
- Type
datetime, optional
-
external_source_url
¶ The URL to the external source (e.g. a competition).
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
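A sketch of the row shape, again with a stdlib dataclass standing in for the pydantic model; the id, metric names, and values below are hypothetical:

```python
# Stand-in for Result: required fields first, optional fields default to None.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ResultSketch:
    id: str
    metrics: dict
    methodology: str
    uses_additional_data: bool
    best_rank: Optional[int] = None
    paper: Optional[str] = None
    best_metric: Optional[str] = None
    evaluated_on: Optional[datetime] = None
    external_source_url: Optional[str] = None

row = ResultSketch(
    id="result-1",                               # hypothetical id
    metrics={"Accuracy": "95.2", "F1": "0.93"},  # metric name -> value
    methodology="ResNet-50 (ours)",
    uses_additional_data=False,
)
print(row.best_rank)
```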
-
-
class
paperswithcode.models.evaluation.
ResultCreateRequest
(*, metrics: dict, methodology: str, uses_additional_data: bool = False, paper: str = None, evaluated_on: str = None, external_source_url: str = None)[source]¶ Evaluation table row object.
-
metrics
¶ Dictionary of metrics and metric values.
- Type
dict
-
methodology
¶ Methodology used for this implementation.
- Type
str
-
uses_additional_data
¶ Whether this evaluation uses additional data not provided in the dataset used for other evaluations.
- Type
bool, optional
-
paper
¶ Paper describing the evaluation.
- Type
str, optional
-
evaluated_on
¶ Date of the result evaluation, in YYYY-MM-DD format.
- Type
str, optional
-
external_source_url
¶ The URL to the external source (e.g. a competition).
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
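Note that evaluated_on is a plain string here rather than a datetime. A small helper (not part of the library) illustrating the YYYY-MM-DD format the field expects:

```python
from datetime import datetime

def check_evaluated_on(value: str) -> str:
    # Parse with the YYYY-MM-DD pattern; raises ValueError on a bad format.
    datetime.strptime(value, "%Y-%m-%d")
    return value

print(check_evaluated_on("2021-06-15"))   # a valid date string
```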
-
-
class
paperswithcode.models.evaluation.
ResultUpdateRequest
(*, metrics: dict = None, methodology: str = None, uses_additional_data: bool = None, paper: str = None, evaluated_on: str = None, external_source_url: str = None)[source]¶ Evaluation table row object.
-
metrics
¶ Dictionary of metrics and metric values.
- Type
dict, optional
-
methodology
¶ Methodology used for this implementation.
- Type
str, optional
-
uses_additional_data
¶ Whether this evaluation uses additional data not provided in the dataset used for other evaluations.
- Type
bool, optional
-
paper
¶ Paper describing the evaluation.
- Type
str, optional
-
evaluated_on
¶ Date of the result evaluation, in YYYY-MM-DD format.
- Type
str, optional
-
external_source_url
¶ The URL to the external source (e.g. a competition).
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
EvaluationTable
(*, id: str, task: str, dataset: str, description: str = '', mirror_url: str = None)[source]¶ Evaluation table object.
-
id
¶ Evaluation table ID.
- Type
str
-
task
¶ ID of the task used in evaluation.
- Type
str
-
dataset
¶ ID of the dataset used in evaluation.
- Type
str
-
description
¶ Evaluation table description.
- Type
str
-
mirror_url
¶ URL to the evaluation table that this table is a mirror of.
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
EvaluationTables
(*, count: int, next_page: int = None, previous_page: int = None, results: List[paperswithcode.models.evaluation.table.EvaluationTable])[source]¶ Object representing a paginated page of evaluation tables.
-
count
¶ Number of elements matching the query.
- Type
int
-
next_page
¶ Number of the next page.
- Type
int, optional
-
previous_page
¶ Number of the previous page.
- Type
int, optional
-
results
¶ List of evaluation tables on this page.
- Type
List[EvaluationTable]
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
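The next_page and previous_page fields support simple page walking; a hypothetical loop over pages of this shape, where fetch_page stands in for whatever client call returns one page:

```python
# Walk a paginated listing: follow next_page until it is None.
# fetch_page is a hypothetical callable returning one page as a dict
# with the same fields as EvaluationTables.
def iter_all(fetch_page, page=1):
    while page is not None:
        data = fetch_page(page)
        yield from data["results"]
        page = data["next_page"]

# Two fake pages to exercise the loop.
pages = {
    1: {"count": 3, "next_page": 2, "previous_page": None, "results": ["t1", "t2"]},
    2: {"count": 3, "next_page": None, "previous_page": 1, "results": ["t3"]},
}
print(list(iter_all(pages.__getitem__)))
```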
-
-
class
paperswithcode.models.evaluation.
EvaluationTableCreateRequest
(*, task: str, dataset: str, description: str = '', mirror_url: str = None)[source]¶ Evaluation table create request object.
-
task
¶ ID of the task used in evaluation.
- Type
str
-
dataset
¶ ID of the dataset used in evaluation.
- Type
str
-
description
¶ Evaluation table description.
- Type
str
-
mirror_url
¶ URL to the evaluation table that this table is a mirror of.
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
EvaluationTableUpdateRequest
(*, task: str = None, dataset: str = None, description: str = None, mirror_url: str = None)[source]¶ Evaluation table update request object.
-
task
¶ ID of the task used in evaluation.
- Type
str, optional
-
dataset
¶ ID of the dataset used in evaluation.
- Type
str, optional
-
description
¶ Evaluation table description.
- Type
str, optional
-
mirror_url
¶ URL to the evaluation table that this table is a mirror of.
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
ResultSyncRequest
(*, metrics: dict, methodology: str, paper: str = None, uses_additional_data: bool = False, external_id: str = '', evaluated_on: str, external_source_url: str = None)[source]¶ Evaluation table row object.
-
metrics
¶ Dictionary of metrics and metric values.
- Type
dict
-
methodology
¶ Methodology used for this implementation.
- Type
str
-
uses_additional_data
¶ Whether this evaluation uses additional data not provided in the dataset used for other evaluations.
- Type
bool
-
paper
¶ Paper describing the evaluation.
- Type
str, optional
-
external_id
¶ Optional external ID used to identify rows when doing sync.
- Type
str, optional
-
evaluated_on
¶ Evaluation date, in YYYY-MM-DD format.
- Type
str
-
external_source_url
¶ The URL to the external source (e.g. a competition).
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
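The external_id field is what lets a repeated sync match and update an existing row instead of creating a duplicate; a hypothetical payload with the documented fields:

```python
# Hypothetical sync row mirroring ResultSyncRequest's fields. On a second
# sync with the same external_id, the server can match and update this row
# rather than insert a new one.
sync_row = {
    "metrics": {"Accuracy": "95.2"},
    "methodology": "ResNet-50",
    "uses_additional_data": False,
    "external_id": "my-benchmark-row-1",   # assumed caller-chosen identifier
    "evaluated_on": "2021-06-15",          # required, YYYY-MM-DD
}
print(sorted(sync_row))
```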
-
-
class
paperswithcode.models.evaluation.
MetricSyncRequest
(*, name: str, description: str = '', is_loss: bool = True)[source]¶ Metric object.
Metric used for evaluation.
-
name
¶ Metric name.
- Type
str
-
description
¶ Metric description.
- Type
str
-
is_loss
¶ Whether this is a loss metric.
- Type
bool
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
EvaluationTableSyncRequest
(*, task: str, dataset: str, description: str = '', mirror_url: str = None, external_id: str = None, metrics: List[paperswithcode.models.evaluation.synchronize.MetricSyncRequest] = None, results: List[paperswithcode.models.evaluation.synchronize.ResultSyncRequest] = None)[source]¶ Evaluation table object.
-
task
¶ ID of the task used in evaluation.
- Type
str
-
dataset
¶ ID of the dataset used in evaluation.
- Type
str
-
description
¶ Evaluation table description.
- Type
str
-
mirror_url
¶ URL to the evaluation table that this table is a mirror of.
- Type
str, optional
-
external_id
¶ Optional external ID used to identify this table when doing sync.
- Type
str, optional
-
metrics
¶ List of MetricSyncRequest objects used in the evaluation.
- Type
list
-
results
¶ List of ResultSyncRequest objects containing the results of the evaluation.
- Type
list
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
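A sketch of the nested shape: one table sync request bundling metric and result sync entries. The task and dataset IDs, metric names, and values below are hypothetical:

```python
# Hypothetical nested payload mirroring EvaluationTableSyncRequest:
# the table carries its MetricSyncRequest and ResultSyncRequest entries.
table_sync = {
    "task": "image-classification",        # assumed task id
    "dataset": "imagenet",                 # assumed dataset id
    "metrics": [
        {"name": "Accuracy", "description": "", "is_loss": False},
    ],
    "results": [
        {
            "metrics": {"Accuracy": "95.2"},
            "methodology": "ResNet-50",
            "uses_additional_data": False,
            "evaluated_on": "2021-06-15",
        },
    ],
}
print(len(table_sync["metrics"]), len(table_sync["results"]))
```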
-
-
class
paperswithcode.models.evaluation.
ResultSyncResponse
(*, id: str, metrics: dict, methodology: str, paper: str = None, uses_additional_data: bool = False, external_id: str = '', evaluated_on: str = None, external_source_url: str = None)[source]¶ Evaluation table row object.
-
id
¶ Result id.
- Type
str
-
metrics
¶ Dictionary of metrics and metric values.
- Type
dict
-
methodology
¶ Methodology used for this implementation.
- Type
str
-
uses_additional_data
¶ Whether this evaluation uses additional data not provided in the dataset used for other evaluations.
- Type
bool
-
paper
¶ Paper describing the evaluation.
- Type
str, optional
-
external_id
¶ Optional external ID used to identify rows when doing sync.
- Type
str, optional
-
evaluated_on
¶ Evaluation date, in YYYY-MM-DD format.
- Type
str, optional
-
external_source_url
¶ The URL to the external source (e.g. a competition).
- Type
str, optional
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
MetricSyncResponse
(*, name: str, description: str = '', is_loss: bool = True)[source]¶ Metric object.
Metric used for evaluation.
-
name
¶ Metric name.
- Type
str
-
description
¶ Metric description.
- Type
str
-
is_loss
¶ Whether this is a loss metric.
- Type
bool
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-
-
class
paperswithcode.models.evaluation.
EvaluationTableSyncResponse
(*, id: str, task: str, dataset: str, description: str = '', mirror_url: str = None, external_id: str = '', metrics: List[paperswithcode.models.evaluation.synchronize.MetricSyncResponse] = None, results: List[paperswithcode.models.evaluation.synchronize.ResultSyncResponse] = None)[source]¶ Evaluation table object.
-
id
¶ Evaluation table ID.
- Type
str
-
task
¶ ID of the task used in evaluation.
- Type
str
-
dataset
¶ ID of the dataset used in evaluation.
- Type
str
-
description
¶ Evaluation table description.
- Type
str
-
mirror_url
¶ URL to the evaluation table that this table is a mirror of.
- Type
str, optional
-
external_id
¶ Optional external ID used to identify this table when doing sync.
- Type
str, optional
-
metrics
¶ List of MetricSyncResponse objects used in the evaluation.
- Type
list
-
results
¶ List of ResultSyncResponse objects containing the results of the evaluation.
- Type
list
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
-