List of continent codes


Files: 2
Size: 4kB
Format: csv, zip
Created/Updated: 2 months ago

Data Files

File: continent-codes [csv]
Description: continent codes
Size: 105B
Other formats: continent-codes [json] (105B)

File: continent-codes_zip [zip]
Description: Compressed version of the dataset. Includes normalized CSV and JSON data with the original data and datapackage.json.
Size: 2kB

continent-codes  


Field information

Field Name Order Type (Format) Description
Code 1 string
Name 2 string
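The two fields above can be illustrated with a small inline sample in Python. The codes used here are the two-letter continent codes in common use; the published file may differ slightly, so treat this as a sketch of the schema rather than the dataset itself:

```python
import csv
import io

# Illustrative sample in the same two-field layout (Code, Name).
# The seven codes shown are the commonly used two-letter continent
# codes; the actual published file may differ.
sample = """Code,Name
AF,Africa
AN,Antarctica
AS,Asia
EU,Europe
NA,North America
OC,Oceania
SA,South America
"""

rows = list(csv.DictReader(io.StringIO(sample)))
print(len(rows))                          # 7
print(rows[0]["Code"], rows[0]["Name"])   # AF Africa
```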

continent-codes_zip  


Read me

A list of the seven continents with English names and short, unique and permanent identifying codes.

Data

Data provides a list of continents with their two-letter codes. The data was compiled manually, according to several sources from the internet that use this kind of format for continent two-letter codes. Several sources that use this format:

Import into your tool

In order to use the Data Package in R, follow the instructions below:

install.packages("devtools")
library(devtools)
install_github("hadley/readr")
install_github("ropenscilabs/jsonvalidate")
install_github("ropenscilabs/datapkg")

# Load client
library(datapkg)

# Get Data Package
datapackage <- datapkg_read("https://pkgstore.datahub.io/core/continent-codes/latest")

# Package info
print(datapackage)

# Open actual data in RStudio Viewer
View(datapackage$data$"continent-codes")
View(datapackage$data$"continent-codes_zip")

Tested with Python 3.5.2

To generate Pandas data frames based on JSON Table Schema descriptors, we have to install the jsontableschema-pandas plugin. To load resources from a data package as Pandas data frames, use the datapackage.push_datapackage function. The resulting storage object works as a container for Pandas data frames.

In order to work with Data Packages in Pandas, you need to install our packages:

$ pip install datapackage
$ pip install jsontableschema-pandas

To get the Data Package, run the following code:

import datapackage

data_url = "https://pkgstore.datahub.io/core/continent-codes/latest/datapackage.json"

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'pandas')

# to see datasets in this package
storage.buckets

# you can access datasets inside storage, e.g. the first one:
storage[storage.buckets[0]]
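If you just want to see the shape of the result without a network call: the bucket lookup above yields an ordinary pandas DataFrame. A minimal offline stand-in, using illustrative rows rather than the fetched dataset:

```python
import pandas as pd

# Offline stand-in for storage[storage.buckets[0]]: push_datapackage
# exposes each resource as a pandas DataFrame shaped like this one.
# The rows here are illustrative, not fetched from the package.
df = pd.DataFrame(
    [["AF", "Africa"], ["EU", "Europe"], ["SA", "South America"]],
    columns=["Code", "Name"],
)

print(df.shape)             # (3, 2)
print(df["Code"].tolist())  # ['AF', 'EU', 'SA']
```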

In order to work with Data Packages in Python, you need to install our packages:

$ pip install datapackage

To get the Data Package into your Python environment, run the following code:

import datapackage

dp = datapackage.DataPackage('https://pkgstore.datahub.io/core/continent-codes/latest/datapackage.json')

# see metadata
print(dp.descriptor)

# get list of csv files
csvList = [resource.descriptor['name'] for resource in dp.resources]
print(csvList) # ["resource name", ...]

# access a csv file by index (0-based)
print(dp.resources[0].data)
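Since a datapackage.json descriptor is plain JSON, the resource-name listing above can also be sketched offline against a minimal, hand-written descriptor. The two resource names below match this dataset; everything else is trimmed away for the sketch:

```python
import json

# Minimal, hand-written descriptor in the Data Package format;
# only the fields needed to list resource names are included.
descriptor = json.loads("""
{
  "name": "continent-codes",
  "resources": [
    {"name": "continent-codes", "path": "data/continent-codes.csv"},
    {"name": "continent-codes_zip", "path": "data/continent-codes_zip.zip"}
  ]
}
""")

csvList = [resource["name"] for resource in descriptor["resources"]]
print(csvList)  # ['continent-codes', 'continent-codes_zip']
```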

To use this dataset in JavaScript, please follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the code snippet below:

  const {Dataset} = require('data.js')

  const path = 'https://pkgstore.datahub.io/core/continent-codes/latest/datapackage.json'

  // Dataset.load and file.stream return Promises, so await them
  (async () => {
    const dataset = await Dataset.load(path)

    // get a data file in this dataset
    const file = dataset.resources[0]
    const stream = await file.stream()
  })()

In order to work with Data Packages in SQL, you need to install our packages:

$ pip install datapackage
$ pip install jsontableschema-sql
$ pip install sqlalchemy

To import the Data Package into your SQLite database, run the following code:

import datapackage
from sqlalchemy import create_engine

data_url = 'https://pkgstore.datahub.io/core/continent-codes/latest/datapackage.json'
engine = create_engine('sqlite:///:memory:')

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'sql', engine=engine)

# to see datasets in this package
storage.buckets

# to execute an SQL command
# (table names are derived from the resource location and name; for a
#  resource named "data" stored as data/data.csv the table is "data__data___data")
storage._Storage__connection.execute('select * from data__data___data limit 1;').fetchall()

# description of the table columns
storage.describe('data__data___data')
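For an offline picture of what ends up in SQLite, here is a stdlib-only sketch that creates and queries a comparable table by hand. The table and its rows are illustrative, not produced by datapackage:

```python
import sqlite3

# Hand-built stand-in for the table that push_datapackage would create;
# the rows are illustrative continent codes, not the fetched dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE continent_codes (Code TEXT, Name TEXT)")
conn.executemany(
    "INSERT INTO continent_codes VALUES (?, ?)",
    [("AF", "Africa"), ("AS", "Asia"), ("OC", "Oceania")],
)

# query the table the same way the storage example does above
first = conn.execute("SELECT * FROM continent_codes LIMIT 1").fetchall()
print(first)  # [('AF', 'Africa')]
conn.close()
```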