UK Properties For Sale By Ministry Of Defense 2017


Files: 2
Size: 162kB
Format: csv, zip
Created: 3 months ago
License: John Snow Labs Standard License
Source: John Snow Labs; Ministry of Defence, UK

Data Files

File: uk-properties-for-sale-by-ministry-of-defense-2017-csv
Size: 28kB
Download: csv (28kB), json (82kB)

File: uk-properties-for-sale-by-ministry-of-defense-2017_zip
Description: Compressed versions of the dataset. Includes normalized CSV and JSON data along with the original data and datapackage.json.
Size: 25kB
Download: zip (25kB)


This is a preview version. There might be more data in the original version.

Field information

Field Name Order Type (Format) Description
Original_ID 1 integer The ID which is associated in the original file with a specific property
Disposal_Status 2 string Status of disposal once the report was published
Forecast_FY 3 date (%Y-%m-%d) The year part of the financial year (FY) which represents the deadline for the disposal process
Primary_Establishment_Name 4 string The name of the total area (used by MOD) that the property (parcel) belongs to
Primary_Parcel_Name 5 string The name of property for sale (used by MOD)
Address 6 string The address of the property planned for disposal
Town 7 string The town where the property is located
County 8 string The county where the property is located
Country 9 string The UK country where the property is located
Total_Area_Size_In_Hectares 10 number The property size in hectares (10,000 m2)
Housing_Unit_Potential 11 integer The estimated housing capacity
Constituency 12 string The electoral area/division where the property is located
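As an illustration of working with this schema, the sketch below builds a small sample frame with the same column names (the values are made up, not taken from the real file), converts the area field to square metres using the 1 hectare = 10,000 m² relationship noted above, and totals the estimated housing capacity per country:

```python
import pandas as pd

# Sample rows mirroring the dataset's schema; values are illustrative only.
df = pd.DataFrame({
    "Primary_Parcel_Name": ["Parcel A", "Parcel B"],
    "Country": ["England", "Scotland"],
    "Total_Area_Size_In_Hectares": [12.5, 3.2],
    "Housing_Unit_Potential": [150, 40],
})

# 1 hectare = 10,000 square metres
df["Total_Area_Size_In_M2"] = df["Total_Area_Size_In_Hectares"] * 10_000

# Estimated housing capacity grouped by UK country
capacity = df.groupby("Country")["Housing_Unit_Potential"].sum()
print(df[["Primary_Parcel_Name", "Total_Area_Size_In_M2"]])
print(capacity)
```

The same two lines work unchanged once the real CSV is loaded with `pd.read_csv`.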

Import into your tool

data-cli (or just data) is the program for getting and posting your data with DataHub.
You use data with DataHub much like you use git with GitHub. Here are installation instructions.

data get
tree JohnSnowLabs/uk-properties-for-sale-by-ministry-of-defense-2017
# Get a list of dataset's resources
curl -L -s | grep path

# Get resources

curl -L

curl -L
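The `curl ... | grep path` step above can also be done in Python. The sketch below parses an illustrative datapackage.json fragment (the resource names and paths here are placeholders, not the dataset's real ones) and lists every resource path:

```python
import json

# A minimal, illustrative datapackage.json fragment; a real descriptor would
# be fetched from the dataset's datapackage.json URL.
descriptor = json.loads("""
{
  "name": "uk-properties-for-sale-by-ministry-of-defense-2017",
  "resources": [
    {"name": "example-csv", "path": "data/example.csv", "format": "csv"},
    {"name": "example-json", "path": "data/example.json", "format": "json"}
  ]
}
""")

# Equivalent of `curl ... | grep path`: collect every resource path
paths = [resource["path"] for resource in descriptor["resources"]]
print(paths)
```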

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="")
library("jsonlite")

json_file <- ''
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for(i in 1:length(json_data$resources$datahub$type)){
    if(json_data$resources$datahub$type[i] == 'derived/csv'){
        path_to_file = json_data$resources$path[i]
        data <- read.csv(url(path_to_file))
        print(data)
    }
}

Note: you might need to run the script with root permissions if you are running it on a Linux machine.

Install the Frictionless Data datapackage library and pandas itself:

pip install datapackage
pip install pandas

Now you can use the Data Package in pandas:

import datapackage
import pandas as pd

data_url = ''

# to load Data Package into storage
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)
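If a dataset exposes several tabular resources, the frames read in a loop like the one above can be collected and combined. The sketch below uses illustrative stand-in frames (with `datapackage` installed, each would come from `pd.read_csv(resource.descriptor['path'])`):

```python
import pandas as pd

# Illustrative stand-ins for two tabular resources; values are made up.
frames = [
    pd.DataFrame({"Original_ID": [1], "Town": ["Aldershot"]}),
    pd.DataFrame({"Original_ID": [2], "Town": ["Catterick"]}),
]

# One DataFrame holding the rows of every tabular resource
combined = pd.concat(frames, ignore_index=True)
print(len(combined))
```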

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        data = resource.read()
        print(data)

If you are using JavaScript, please follow the instructions below:

Install data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = ''

// We're using a self-invoking function here as we want to use async-await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      console.log(buffer.toString())
    }
  }
})()