SMDG Master Terminal Facilities List


Files: 2
Size: 503kB
Format: csv, zip
Created/Updated: 1 week ago
License: John Snow Labs Standard License
Source: John Snow Labs SMDG

Data Files

File: smdg-master-terminal-facilities-list-csv [csv]
Size: 86kB
Download: smdg-master-terminal-facilities-list-csv [csv]
Other formats: smdg-master-terminal-facilities-list-csv [json] (310kB)

File: smdg-master-terminal-facilities-list_zip [zip]
Description: Compressed versions of the dataset. Includes normalized CSV and JSON data with the original data and datapackage.json.
Size: 81kB
Download: smdg-master-terminal-facilities-list_zip [zip]


This is a preview version. There might be more data in the original version.

Field information

Field Name Order Type (Format) Description
UN_LOCODE 1 string Main location UN/LOCODE. In a UN/EDIFACT message, used in a LOC segment, element C517.3225
Alternative_UN_LOCODE 2 string Alternative main location UN/LOCODE. In a UN/EDIFACT message, used in a LOC segment, element C517.3225
Terminal_Code 3 string Terminal code. In a UN/EDIFACT message, used in a LOC segment, element C519.3223
Terminal_Facility 4 string Terminal where facility is provided.
Company_Name 5 string Name of company.
Last_Change 6 string Last change of this entry.
Valid_From 7 date (%Y-%m-%d) Entry is valid from this date.
Valid_Before 8 date (%Y-%m-%d) Entry is valid until this date.
Applicant_Name 9 string Name of applicant.
Applicant_Email 10 string Email address of applicant.
Latitude 11 number Latitude for the facility code.
Longitude 12 number Longitude for the facility code.
Terminal_Contact_Or_Website 13 string Contact number or website of terminal.
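The Valid_From and Valid_Before fields define each entry's validity window. A minimal Python sketch of checking that window is below; the sample rows, terminal codes, and the exclusive reading of Valid_Before are illustrative assumptions, not values from the dataset:

```python
from datetime import date, datetime

# Hypothetical sample rows following the field schema above
# (values are illustrative, not taken from the dataset).
rows = [
    {"UN_LOCODE": "NLRTM", "Terminal_Code": "APMRT",
     "Valid_From": "2015-01-01", "Valid_Before": "2030-01-01"},
    {"UN_LOCODE": "DEHAM", "Terminal_Code": "CTA",
     "Valid_From": "2010-06-15", "Valid_Before": "2012-01-01"},
]

def is_valid_on(row, day):
    """True if the entry's validity window covers the given date."""
    start = datetime.strptime(row["Valid_From"], "%Y-%m-%d").date()
    end = datetime.strptime(row["Valid_Before"], "%Y-%m-%d").date()
    return start <= day < end  # assuming Valid_Before is exclusive

active = [r["Terminal_Code"] for r in rows if is_valid_on(r, date(2024, 1, 1))]
print(active)  # ['APMRT']
```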



Read me

Import into your tool

If you are using R, here's how to quickly load the data you want:


library("jsonlite")

json_file <- ""
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# access the CSV file by index, starting from 1
path_to_file <- json_data$resources$path[1]
data <- read.csv(url(path_to_file))

In order to work with Data Packages in Pandas, you need to install the Frictionless Data datapackage library and the pandas extension:

pip install datapackage
pip install jsontableschema-pandas

To get the data, run the following code:

import datapackage

data_url = ""

# to load the Data Package into storage
storage = datapackage.push_datapackage(data_url, 'pandas')

# list data frames available (corresponding to data files in the original dataset)
storage.buckets

# you can access datasets inside storage, e.g. the first one:
storage[storage.buckets[0]]

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('')

# get list of resources:
resources = package.descriptor['resources']
resourceList = [resource['name'] for resource in resources]

data = package.resources[0].read()
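The `read()` call above returns each row as a list of cell values. If you prefer dictionaries keyed by field name, the rows can be zipped with the field names from the table above; a small sketch with made-up sample values (the data shown is illustrative, not from the dataset):

```python
# Field names from the schema; the row below is an illustrative stand-in
# for one row list as returned by package.resources[0].read().
headers = ["UN_LOCODE", "Terminal_Code", "Company_Name", "Latitude", "Longitude"]
rows = [["NLRTM", "APMRT", "Example Terminal Co.", "51.95", "4.05"]]

# Pair each cell with its field name to build dict records.
records = [dict(zip(headers, row)) for row in rows]
print(records[0]["Terminal_Code"])  # APMRT
```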

If you are using JavaScript, please follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = ''

// We're using a self-invoking function here because we want to use async/await syntax:
(async () => {
  const dataset = await Dataset.load(path)

  // Get the first data file in this dataset
  const file = dataset.resources[0]
  // Get a raw stream
  const stream = await file.stream()
  // entire file as a buffer (be careful with large files!)
  const buffer = await file.buffer
})()

Install the datapackage library, created specially for Ruby, using gem:

gem install datapackage

Now get the dataset and read the data:

require 'datapackage'

path = ''

package = DataPackage::Package.new(path)
# The package variable now contains the metadata. You can see it:
puts package

# Read data itself:
resource = package.resources[0]
data = resource.read
puts data