ImageNet: Top-performing AI systems in labeling images
What you should know about this indicator
- Top-1 accuracy measures how often a model's single highest-confidence prediction matches the correct answer from a given set of options.
- Here's an example to illustrate what this benchmark tests: Imagine an image classification model presented with a picture of an animal. The model assigns a probability to each possible label and outputs its highest-confidence prediction — say, 'Cat'. Under the top-1 measure, the prediction counts as correct only if it matches the true label: if the image really shows a cat, the model scores a hit; if it shows a dog, the prediction counts as a miss. To calculate top-1 accuracy, researchers run the model over a large dataset with known labels and report the percentage of examples where the model's top prediction matches the true label.
- By considering only the model's single top guess, this measure provides a strict, focused evaluation of its ability to make accurate predictions.
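The calculation described above can be sketched in a few lines of code. This is an illustrative example, not the pipeline used for the benchmark itself; the label scores and helper names are hypothetical.

```python
def top1_accuracy(predicted_labels, true_labels):
    """Fraction of examples where the model's single highest-confidence
    prediction matches the correct label."""
    correct = sum(p == t for p, t in zip(predicted_labels, true_labels))
    return correct / len(true_labels)


# Hypothetical per-image label scores; the top-1 guess is the
# highest-scoring label for each image.
scores = [
    {"cat": 0.7, "dog": 0.2, "fox": 0.1},  # top-1: cat (true: cat -> correct)
    {"cat": 0.6, "dog": 0.3, "fox": 0.1},  # top-1: cat (true: dog -> wrong)
    {"cat": 0.1, "dog": 0.1, "fox": 0.8},  # top-1: fox (true: fox -> correct)
]
truth = ["cat", "dog", "fox"]
preds = [max(s, key=s.get) for s in scores]
print(top1_accuracy(preds, truth))  # 2 of 3 correct -> 0.666...
```

Note that top-1 accuracy gives no credit for near misses: even if the correct label was the model's second-highest guess, the example still counts as wrong.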
Sources and processing
This data is based on the following sources
How we process data at Our World in Data
All data and visualizations on Our World in Data rely on data sourced from one or more original data providers. Preparing this original data involves several processing steps. Depending on the data, this can include standardizing country names and world region definitions, converting units, calculating derived indicators such as per capita measures, and adding or adapting metadata such as an indicator's name or description.
At the link below you can find a detailed description of the structure of our data pipeline, including links to all the code used to prepare data across Our World in Data.
Reuse this work
- All data produced by third-party providers and made available by Our World in Data are subject to the license terms from the original providers. Our work would not be possible without the data providers we rely on, so we ask you to always cite them appropriately (see below). This is crucial to allow data providers to continue doing their work, enhancing, maintaining and updating valuable data.
- All data, visualizations, and code produced by Our World in Data are completely open access under the Creative Commons BY license. You have permission to use, distribute, and reproduce these in any medium, provided the source and authors are credited.
Citations
How to cite this page
To cite this page overall, including any descriptions, FAQs or explanations of the data authored by Our World in Data, please use the following citation:
“Data Page: ImageNet: Top-performing AI systems in labeling images”, part of the following publication: Charlie Giattino, Edouard Mathieu, Veronika Samborska and Max Roser (2023) - “Artificial Intelligence”. Data adapted from Papers with Code. Retrieved from https://ourworldindata.org/grapher/ai-performance-imagenet [online resource]
How to cite this data
In-line citation
If you have limited space (e.g. in data visualizations), you can use this abbreviated in-line citation:
Papers with Code (2024) – with major processing by Our World in Data
Full citation
Papers with Code (2024) – with major processing by Our World in Data. “ImageNet: Top-performing AI systems in labeling images” [dataset]. Papers with Code, “AI Performance on Imagenet” [original data]. Retrieved October 7, 2024 from https://ourworldindata.org/grapher/ai-performance-imagenet