Global Temperature Time Series


Files: 3
Size: 608kB
Format: csv, zip
Updated: 1 week ago
License: ODC-PDDL-1.0
Source: GISTEMP Global Land-Ocean Temperature Index; global component of Climate at a Glance (GCAG)
Global Temperature Time Series. Data are included from the GISS Surface Temperature (GISTEMP) analysis and the global component of Climate at a Glance (GCAG). Two datasets are provided: 1) global monthly mean and 2) annual mean temperature anomalies in degrees Celsius from 1880 to the present.

Data Files

  • annual [csv], 5kB; also available as annual [json] (13kB)
  • monthly [csv], 80kB; also available as monthly [json] (186kB)
  • datapackage_zip [zip], 58kB; compressed version of the dataset, including normalized CSV and JSON data together with the original data and datapackage.json

annual  


Field information

  • Source (field 1): string
  • Year (field 2): year (YYYY)
  • Mean (field 3): number. Average global mean temperature anomalies in degrees Celsius relative to a base period. GISTEMP base period: 1951-1980. GCAG base period: 20th century average.
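
As a minimal illustration, here is a Python sketch of loading the annual file with these types enforced. The local path data/annual.csv and the use of pandas are assumptions; the column names follow the schema above.

import pandas as pd

# Enforce the schema above: Source as string, Year as integer,
# Mean as float (degrees Celsius anomaly).
annual = pd.read_csv(
    "data/annual.csv",  # assumed local path to the annual CSV
    dtype={"Source": "string", "Year": "int64", "Mean": "float64"},
)

# The file mixes both sources; split on the Source column if needed
# (e.g. GISTEMP vs GCAG rows, per the dataset description).
by_source = {name: df for name, df in annual.groupby("Source")}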

monthly  


Field information

  • Source (field 1): string
  • Date (field 2): date (%Y-%m-%d), displayed as YYYY-MM
  • Mean (field 3): number. Monthly mean temperature anomalies in degrees Celsius relative to a base period. GISTEMP base period: 1951-1980. GCAG base period: 20th century average.
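
A similar sketch for the monthly file, parsing the Date column (declared as %Y-%m-%d, displayed as YYYY-MM) into monthly periods; the path data/monthly.csv is an assumption.

import pandas as pd

monthly = pd.read_csv("data/monthly.csv")  # assumed local path to the monthly CSV
# pd.to_datetime accepts both YYYY-MM-DD and YYYY-MM strings;
# a monthly period is the natural index for this series.
monthly["Date"] = pd.to_datetime(monthly["Date"])
monthly["Month"] = monthly["Date"].dt.to_period("M")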

datapackage_zip  


Read me

Global Temperature Time Series. Data are included from the GISS Surface Temperature (GISTEMP) analysis and the global component of Climate at a Glance (GCAG). Two datasets are provided: 1) global monthly mean and 2) annual mean temperature anomalies in degrees Celsius from 1880 to the present.

Data

Description

  1. GISTEMP Global Land-Ocean Temperature Index:

Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies [i.e. deviations from the corresponding 1951-1980 means]. Global-mean monthly […] and annual means, 1880-present, updated through most recent month.

  2. Global component of Climate at a Glance (GCAG):

Global temperature anomaly data come from the Global Historical Climatology Network-Monthly (GHCN-M) data set and International Comprehensive Ocean-Atmosphere Data Set (ICOADS), which have data from 1880 to the present. These two datasets are blended into a single product to produce the combined global land and ocean temperature anomalies. The available timeseries of global-scale temperature anomalies are calculated with respect to the 20th century average […].
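
Both sources therefore report anomalies: deviations of each value from the mean over a fixed base period (1951-1980 for GISTEMP, the 20th-century average for GCAG). Below is a rough Python sketch of that definition only, using placeholder names rather than the providers' actual processing.

import pandas as pd

def to_anomalies(series: pd.Series, base_start: int, base_end: int) -> pd.Series:
    """Deviations of a year-indexed temperature series from its mean
    over the base period [base_start, base_end] (inclusive)."""
    baseline = series.loc[base_start:base_end].mean()
    return series - baseline

# e.g. to_anomalies(annual_means, 1951, 1980) for a GISTEMP-style base period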

Citations

  1. GISTEMP: NASA Goddard Institute for Space Studies (GISS) Surface Temperature Analysis, Global Land-Ocean Temperature Index.
  2. NOAA National Climatic Data Center (NCDC), global component of Climate at a Glance (GCAG).

Sources

Additional Data

  • Upstream datasets:
  • Other:
    • HadCRUT4 time series data are not included in the published Data Package at this time because of the dataset’s restrictive terms and conditions. However, the data preparation script supports processing the dataset.

Data Preparation

Requirements

Data preparation requires Python 2.

Processing

Run the following command from this directory to download and process the data:

make data

Hundredths of degrees Celsius in the GISTEMP Global Land-Ocean Temperature Index data are converted to degrees Celsius.
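
As an illustration of that unit conversion only (the actual work is done by the scripts behind make data), a hypothetical helper would simply divide by 100:

def hundredths_to_celsius(value_hundredths: float) -> float:
    """Convert a value in hundredths of degrees Celsius to degrees Celsius."""
    return value_hundredths / 100.0

# e.g. hundredths_to_celsius(87) -> 0.87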

A HadCRUT4 processing script is available but not run by default.

Resources

The raw data are output to ./tmp. The processed data are output to ./data.

License

ODC-PDDL-1.0

This Data Package and these datasets are made available under the Public Domain Dedication and License v1.0 whose full text can be found at: http://www.opendatacommons.org/licenses/pddl/1.0/

Notes

The upstream datasets do not impose any specific restrictions on using these data in a public or commercial product.

Import into your tool

If you are using R, here's how to quickly load the data:

install.packages("jsonlite")
library("jsonlite")

json_file <- "http://datahub.io/core/global-temp/datapackage.json"
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# access the csv file via the resource path (first resource, index starting from 1)
path_to_file = json_data$resources$path[1]
data <- read.csv(url(path_to_file))
print(data)

To work with Data Packages in Pandas, you need to install the Frictionless Data datapackage library and the pandas extension:

pip install datapackage
pip install jsontableschema-pandas

To get the data, run the following code:

import datapackage

data_url = "http://datahub.io/core/global-temp/datapackage.json"

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'pandas')

# data frames available (corresponding to data files in original dataset)
storage.buckets

# you can access datasets inside storage, e.g. the first one:
storage[storage.buckets[0]]

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('http://datahub.io/core/global-temp/datapackage.json')

# get list of resources:
resources = package.descriptor['resources']
resourceList = [resource['name'] for resource in resources]
print(resourceList)

data = package.resources[0].read()
print(data)

If you are using JavaScript, please follow the instructions below:

Install data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'http://datahub.io/core/global-temp/datapackage.json';

// We're using self-invoking function here as we want to use async-await syntax:
(async () => {
  const dataset = await Dataset.load(path)

  // Get the first data file in this dataset
  const file = dataset.resources[0]
  // Get a raw stream
  const stream = await file.stream()
  // entire file as a buffer (be careful with large files!)
  const buffer = await file.buffer
})()

Install the datapackage library for Ruby using gem:

gem install datapackage

Now get the dataset and read the data:

require 'datapackage'

path = 'http://datahub.io/core/global-temp/datapackage.json'

package = DataPackage::Package.new(path)
# The package variable now contains the metadata. You can inspect it:
puts package

# Read data itself:
resource = package.resources[0]
data = resource.read
puts data