ISO Language Codes (639-1 and 639-2) and IETF Language Types

Files    5
Size     172kB
Format   csv, zip
Created  6 months ago
Updated  2 months ago
License  Public Domain Dedication and License (PDDL)
Source   Library of Congress, Unicode

Data Files

File                 Size  Download
language-codes       3kB   csv (3kB), json (8kB)
language-codes-3b2   3kB   csv (3kB), json (11kB)
language-codes-full  16kB  csv (16kB), json (51kB)
ietf-language-tags   22kB  csv (22kB), json (88kB)
language-codes_zip   63kB  zip (63kB)

language-codes_zip is a compressed version of the dataset; it includes the normalized CSV and JSON data along with the original data and datapackage.json.

language-codes  

Field information

Field Name  Order  Type (Format)  Description
alpha2      1      string         2-letter alpha-2 code
English     2      string         English name of language

language-codes-3b2  

Field information

Field Name  Order  Type (Format)  Description
alpha3-b    1      string         3-letter alpha-3 bibliographic code
alpha2      2      string         2-letter alpha-2 code
English     3      string         English name of language

language-codes-full  

Field information

Field Name  Order  Type (Format)  Description
alpha3-b    1      string         3-letter alpha-3 bibliographic code
alpha3-t    2      string         3-letter alpha-3 terminologic code (when given)
alpha2      3      string         2-letter alpha-2 code (when given)
English     4      string         English name of language
French      5      string         French name of language

ietf-language-tags  

Field information

Field Name  Order  Type (Format)  Description
lang        1      string         IANA/Unicode language tag
langType    2      string         ISO 2-letter alpha-2 language code
territory   3      string         ISO 3166-1 alpha-2 country code
revGenDate  4      string         revision date (ISO date format)
defs        5      integer        number of definitions
dftLang     6      boolean        indicates the default language, per Unicode CLDR
file        7      string         file name of the locale descriptor

Read me

Comprehensive language code information, consisting of ISO 639-1, ISO 639-2 and IETF language types.

Data

Data is taken from the Library of Congress as the ISO 639-2 Registration Authority, and from the Unicode Common Locale Data Repository.

data/language-codes.csv

This file contains the 184 languages with ISO 639-1 (alpha 2 / two letter) codes and their English names.
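
As a quick check, this file can be loaded and queried by its alpha2 field. Below is a minimal sketch using pandas; it assumes that the r/0.csv resource shown in the download examples further down corresponds to this file (the resources appear to be numbered in the order of the Data Files table):

import pandas as pd

# language-codes.csv has two columns: alpha2 and English
codes = pd.read_csv("https://datahub.io/core/language-codes/r/0.csv")

# Look up the English name for a given 2-letter code
print(codes.loc[codes["alpha2"] == "de", "English"].iloc[0])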

data/language-codes-3b2.csv

This file contains the 184 languages with both ISO 639-2 (alpha 3 / three letter) bibliographic codes and ISO 639-1 codes, and their English names.
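
A common use of this file is converting bibliographic 3-letter codes to 2-letter codes. Here is a minimal pandas sketch, again assuming the resource numbering follows the order of the Data Files table (r/1.csv for this file):

import pandas as pd

# language-codes-3b2.csv columns: alpha3-b, alpha2, English
b2 = pd.read_csv("https://datahub.io/core/language-codes/r/1.csv")

# Build a bibliographic 3-letter -> 2-letter lookup
to_alpha2 = dict(zip(b2["alpha3-b"], b2["alpha2"]))
print(to_alpha2.get("fre"))  # the B code for French should map to "fr"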

data/language-codes-full.csv

This file is more exhaustive.

It contains all languages with ISO 639-2 (alpha 3 / three letter) codes, the respective ISO 639-1 codes (if present), as well as the English and French name of each language.

There are two versions of the three letter codes: bibliographic and terminologic. Each language has a bibliographic code but only a few languages have terminologic codes. Terminologic codes are chosen to be similar to the corresponding ISO 639-1 two letter codes.

Example from Wikipedia:

[…] the German language (Part 1: de) has two codes in Part 2: ger (T code) and deu (B code), whereas there is only one code in Part 2, eng, for the English language.

There are four special codes: mul, und, mis, zxx; and a reserved range qaa-qtz.
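
To see the bibliographic/terminologic split in the data itself, the rows that carry a terminologic code can be filtered out. A sketch assuming r/2.csv is this file:

import pandas as pd

# language-codes-full.csv columns: alpha3-b, alpha3-t, alpha2, English, French
full = pd.read_csv("https://datahub.io/core/language-codes/r/2.csv")

# Languages with a distinct terminologic (T) code; per the example above,
# German should appear with alpha3-b "ger" and alpha3-t "deu", while
# English (only "eng") should not appear at all
t_codes = full[full["alpha3-t"].notna()]
print(t_codes[["alpha3-b", "alpha3-t", "alpha2", "English"]])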

data/ietf-language-tags.csv

This file lists all IETF language tags of the official resource indicated by http://www.iana.org/assignments/language-tag-extensions-registry, as found in the /main folder of http://www.unicode.org/Public/cldr/latest/core.zip (project cldr.unicode.org).
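
To illustrate how the fields fit together, the sketch below (assuming r/3.csv is this file) prints tags that carry a territory subtag; for instance, a tag such as de-AT would pair langType "de" with territory "AT" (an illustrative example, not verified against the data):

import pandas as pd

tags = pd.read_csv("https://datahub.io/core/language-codes/r/3.csv")

# Tags that specify a territory alongside the base language
regional = tags[tags["territory"].notna()]
print(regional[["lang", "langType", "territory", "file"]].head())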

Preparation

This package includes a bash script to fetch current language code information and adjust the formatting. The file ietf-language-tags.csv is obtained with ietf-lanGen.php.
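
For reference, the fetch step can also be sketched in Python. The URL and the pipe-delimited layout below are assumptions based on the code list published by the ISO 639-2 Registration Authority, not taken from the package's own script:

import csv
import urllib.request

# Published ISO 639-2 code list (assumed URL and format)
URL = "https://www.loc.gov/standards/iso639-2/ISO-639-2_utf-8.txt"

with urllib.request.urlopen(URL) as response:
    text = response.read().decode("utf-8-sig")

# Assumed layout per line: alpha3-b|alpha3-t|alpha2|English|French
for row in csv.reader(text.splitlines(), delimiter="|"):
    alpha3_b, alpha3_t, alpha2, english, french = row
    if alpha2:  # keep only languages that have a 2-letter code
        print(alpha2, english)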

License

This material is licensed by its maintainers under the Public Domain Dedication and License (PDDL).

Nevertheless, this material is ultimately sourced from the Library of Congress, acting as a Registration Authority for ISO, and its licensing policies are somewhat unclear. As this is a short, simple database of facts, there is a strong argument that no rights can subsist in this collection.

If you intend to use this data in a public or commercial product, please check the original sources for any specific restrictions.

Import into your tool

data-cli (or just data) is the tool for getting and posting your data on the DataHub.
Download the CLI tool and use it with the DataHub much like you use git with GitHub:

data get https://datahub.io/core/language-codes
data info core/language-codes
tree core/language-codes
# Get a list of the dataset's resources
curl -L -s https://datahub.io/core/language-codes/datapackage.json | grep path

# Get the resources
curl -L https://datahub.io/core/language-codes/r/0.csv
curl -L https://datahub.io/core/language-codes/r/1.csv
curl -L https://datahub.io/core/language-codes/r/2.csv
curl -L https://datahub.io/core/language-codes/r/3.csv
curl -L https://datahub.io/core/language-codes/r/4.zip

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/core/language-codes/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for(i in 1:length(json_data$resources$datahub$type)){
  if(json_data$resources$datahub$type[i]=='derived/csv'){
    path_to_file = json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: You might need to run the script with root permissions if you are on a Linux machine.

Install the Frictionless Data datapackage library and pandas:

pip install datapackage
pip install pandas

Now you can use the Data Package with pandas:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/core/language-codes/datapackage.json'

# to load Data Package into storage
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/core/language-codes/datapackage.json')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())
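
Once the package is loaded, a handy follow-up is to build a code-to-name lookup from the language-codes resource. This is a sketch; the resource name is assumed to match the Data Files table above:

from datapackage import Package

package = Package('https://datahub.io/core/language-codes/datapackage.json')

# Build an alpha2 -> English lookup (resource name assumed)
resource = package.get_resource('language-codes')
rows = resource.read(keyed=True)  # list of dicts keyed by field name
names = {row['alpha2']: row['English'] for row in rows}
print(names.get('fr'))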

If you are using JavaScript, follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/core/language-codes/datapackage.json'

// We're using a self-invoking function here because we want to use async/await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()