Global Historical Population


Files: 2
Size: 32kB
Format: csv, zip
Updated: 5 days ago
License: ODC-PDDL
Source: Appendix in Joel E. Cohen, *How Many People Can the Earth Support?*, Norton 1996, ISBN 0-393-31495-2

Data Files

File: population
  Size: 2kB
  Formats: csv (2kB), json (11kB)

File: population-global-historical_zip
  Description: Compressed versions of the dataset. Includes normalized CSV and JSON data with original data and datapackage.json.
  Size: 6kB
  Formats: zip (6kB)

population  


Field information

Field Name               Order  Type (Format)     Description
Year                     1      number (default)
Average                  2      number (default)  Average number of people in millions
Deevey                   3      number (default)  Number of people in millions
McEvedy and Jones 1978   4      number (default)  Number of people in millions
Durand Low               5      number (default)  Number of people in millions
Durand High              6      number (default)  Number of people in millions
Clark                    7      number (default)  Number of people in millions
Biraben                  8      number (default)  Number of people in millions
Blaxter                  9      number (default)  Number of people in millions
UN                       10     number (default)  Number of people in millions
Kremer                   11     number (default)  Number of people in millions
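
As a quick orientation, here is a minimal sketch of loading the population table with pandas and listing these fields. The direct CSV URL below follows DataHub's usual r/<resource>.csv pattern and is an assumption; the datapackage.json used in the import examples further down remains the authoritative entry point.

import pandas as pd

# Assumed direct link to the normalized CSV resource; if this path has
# changed, resolve the file from datapackage.json instead.
csv_url = 'https://datahub.io/core/population-global-historical/r/population.csv'

df = pd.read_csv(csv_url)
print(df.columns.tolist())  # the eleven fields listed above
print(df.head())            # earliest rows of the table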

population-global-historical_zip  


Read me

Global historical population data

Data

The population data run from 1,000,000 BC to 1990 AD and report the average number of people in millions, alongside the individual estimates from several sources: Deevey, McEvedy and Jones 1978, Durand Low, Durand High, Clark, Biraben, Blaxter, UN, and Kremer.

Source: Appendix in Joel E. Cohen, *How Many People Can the Earth Support?*, Norton 1996, ISBN 0-393-31495-2.
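
For a sense of how the individual estimates relate to the averaged series, a small comparison can be sketched as below. It assumes the table has already been loaded into a pandas DataFrame named df, for instance as in the example under "Field information" above, and uses the column names listed there.

recent = df[df['Year'] >= 1900]                     # rows from 1900 onwards
print(recent[['Year', 'Average', 'UN', 'Kremer']])  # selected estimates vs. the average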

License

Open Data Commons Public Domain Dedication and License (PDDL).

Import into your tool

If you are using R, here's how to load the data quickly:

install.packages("jsonlite")
library("jsonlite")

json_file <- 'https://datahub.io/core/population-global-historical/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for(i in seq_along(json_data$resources$datahub$type)){
  if(json_data$resources$datahub$type[i]=='derived/csv'){
    path_to_file = json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

In order to work with Data Packages in pandas you need to install the Frictionless Data datapackage library as well as pandas itself:
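
pip install datapackage pandas

With both libraries installed, the following snippet loads the tabular resources: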

import datapackage
import pandas as pd

data_url = 'https://datahub.io/core/population-global-historical/datapackage.json'

# to load Data Package into storage
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/core/population-global-historical/datapackage.json')

# get list of all resources:
resources = package.descriptor['resources']
resourceList = [resource['name'] for resource in resources]
print(resourceList)

# print all tabular data (if any exists)
resources = package.resources
for resource in resources:
    if resource.tabular:
        print(resource.read())

If you are using JavaScript, please follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/core/population-global-historical/datapackage.json'

// We're using a self-invoking async function so that we can use await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()
Datapackage.json