Example Aggregation on UK Gross Domestic Product (GDP)


Files: 2 | Size: 0B | Format: csv, zip | Updated: 3 months ago | Source: Office for National Statistics - GDP Time Series
This is an example dataset to demonstrate how data transforms work. In this example, we explain how to aggregate a resource. We assume the publisher is already familiar with Data Packages and with the views specification (the views property in the Data Package specification). The full readme, including the transform walkthrough, follows below.

Data Files

  • annual - 2kB - downloads: csv (2kB), json (5kB)
  • datapackage_zip - Compressed versions of the dataset, including normalized CSV and JSON data along with the original data and datapackage.json - 4kB - downloads: zip (4kB)

annual  


Field information

  • Year (field 1): date (%Y-%m-%d)
  • GDP (field 2): number. Gross Value Added at basic prices: chained volume measures: Seasonally adjusted. Millions of GBP in base period money. Base period=2009. ABMI variable in ONS source.
  • GDP_Change (field 3): number (percentage). Gross Domestic Product: Year on Year growth: CVM SA. IHYP variable in ONS source.
  • GDP_Index (field 4): number. Gross domestic product index. Base period=2009. YBEZ variable in ONS source.

datapackage_zip  


Read me

This is an example dataset to demonstrate how data transforms work. In this example, we explain how to aggregate a resource. We assume the publisher is already familiar with Data Packages and with the views specification (the views property in the Data Package specification).

Transforming data

Data transforms are specified in the resources attribute of the views property. Each resource is an object with the following attributes:

  • "name" - name of the resource as a reference.
  • "transform" - array of transforms. Each transform is an object, which properties vary depending on transform type.

Aggregating data

Under the graph at the top of this page, you can find a table that displays the aggregated data; the raw data is displayed in the preview section. As you can see, we aggregate the “GDP” column to find its minimum value and the “GDP_Change” column to find its maximum value. This is described in the second view object of the views property:

{
  "name": "table-view-aggregation",
  "specType": "table",
  "resources": [
    {
      "name": "annual",
      "transform": [
        {
          "type": "aggregate",
          "fields": ["GDP", "GDP_Change"],
          "operations": ["min", "max"]
        }
      ]
    }
  ]
}

where, in the transform property:

  • "type": "aggregate" - defines the transform as an aggregation.
  • "fields" - the list of fields to which the aggregation is applied.
  • "operations" - the list of operation names, matched positionally to the list of fields. Options include "sum", "min", "max", "count", etc. For the full reference, see https://vega.github.io/vega/docs/transforms/aggregate/#ops.

Descriptor for this data package

This is the full datapackage.json of this dataset:

{
  "licenses": [
    {
      "id": "odc-pddl",
      "url": "http://opendatacommons.org/licenses/pddl/"
    }
  ],
  "name": "transform-example-gdp-uk",
  "resources": [
    {
      "name": "annual",
      "path": "annual.csv",
      "schema": {
        "fields": [
          {
            "format": "any",
            "name": "Year",
            "type": "date"
          },
          {
            "description": "Gross Value Added at basic prices: chained volume measures: Seasonally adjusted. Millions of GBP in base period money. Base period=2009. ABMI variable in ONS source.",
            "name": "GDP",
            "type": "number"
          },
          {
            "description": "Gross Domestic Product: Year on Year growth: CVM SA. IHYP variable in ONS source.",
            "format": "percentage",
            "name": "GDP_Change",
            "type": "number"
          },
          {
            "description": "Gross domestic product index. Base period=2009. YBEZ variable in ONS source.",
            "name": "GDP_Index",
            "type": "number"
          }
        ]
      },
      "sources": [
        {
          "web": "http://www.ons.gov.uk/ons/datasets-and-tables/data-selector.html?cdid=ABMI&dataset=qna&table-id=C2"
        }
      ]
    }
  ],
  "sources": [
    {
      "homepage": "http://www.ons.gov.uk/ons/rel/gva/gross-domestic-product--preliminary-estimate/q4-2012/tsd---preliminary-estimate-of-gdp-q4-2012.html",
      "name": "Office for National Statistics - GDP Time Series",
      "web": "http://www.ons.gov.uk/ons/datasets-and-tables/downloads/csv.csv?dataset=pgdp"
    }
  ],
  "title": "Example Aggregation on UK Gross Domestic Product (GDP)",
  "views": [
    {
      "id": "Graph",
      "state": {
        "graphType": "columns",
        "group": "Year",
        "series": [
          "GDP_Change"
        ]
      },
      "type": "Graph"
    },
    {
      "name": "table-view-aggregation",
      "specType": "table",
      "resources": [
        {
          "name": "annual",
          "transform": [
            {
              "type": "aggregate",
              "fields": ["GDP", "GDP_Change"],
              "operations": ["min", "max"]
            }
          ]
        }
      ]
    }
  ]
}
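
If you want to inspect these view definitions programmatically, a small sketch using the datapackage Python library (the same library used in the import section below) could look like this. Note that the descriptor actually served by DataHub may include additional derived resources and views beyond those shown above.

from datapackage import Package

package = Package('https://datahub.io/examples/transform-example-gdp-uk/datapackage.json')

# Walk the views property and print any transform definitions found.
for view in package.descriptor.get('views', []):
    for resource in view.get('resources', []):
        # Resources inside a view may be given as plain names or as objects.
        if isinstance(resource, dict):
            for transform in resource.get('transform', []):
                print(view.get('name') or view.get('id'), transform)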

Import into your tool

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/examples/transform-example-gdp-uk/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for(i in 1:length(json_data$resources$datahub$type)){
  if(json_data$resources$datahub$type[i]=='derived/csv'){
    path_to_file = json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: You might need to run the script with root permissions if you are running it on a Linux machine.

To work with Data Packages in pandas, you need to install the Frictionless Data datapackage library and the pandas library:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/examples/transform-example-gdp-uk/datapackage.json'

# load the Data Package
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/examples/transform-example-gdp-uk/datapackage.json')

# get list of all resources:
resources = package.descriptor['resources']
resourceList = [resource['name'] for resource in resources]
print(resourceList)

# print all tabular data (if any exists)
resources = package.resources
for resource in resources:
    if resource.tabular:
        print(resource.read())

If you are using JavaScript, please follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/examples/transform-example-gdp-uk/datapackage.json'

// We're using a self-invoking function here as we want to use async-await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()