Medicine dispensing license holders pharmacies nometa

Publisher: pi

Files: 2
Size: 953kB
Format: csv, zip
Created: 6 months ago
Updated: 6 months ago

Data Files

Download files in this dataset

File: ettm-3r36
Size: 148kB
Download: csv (148kB), json (473kB)

File: medicine-dispensing-license-holders-pharmacies-nometa_zip
Description: Compressed version of the dataset. Includes normalized CSV and JSON data along with the original data and datapackage.json.
Size: 133kB
Download: zip (133kB)
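
To check what the compressed version contains, here is a minimal Python sketch that downloads the zip and lists its members. The /r/1.zip resource URL is assumed from the cURL examples further down this page; adjust it if your copy differs.

import io
import urllib.request
import zipfile

# Assumed resource URL, matching the /r/1.zip path used in the cURL examples below.
ZIP_URL = "https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/r/1.zip"

# Download the archive into memory and list its contents
# (normalized CSV and JSON plus datapackage.json, per the description above).
with urllib.request.urlopen(ZIP_URL) as response:
    archive = zipfile.ZipFile(io.BytesIO(response.read()))

for name in archive.namelist():
    print(name)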

ettm-3r36  

This is a preview; the original file may contain more data.

Field information

Field Name Order Type (Format) Description
surname 1 string (default)
initials 2 string (default)
designation 3 string (default)
council_reg_nr 4 string (default)
premises_address_1 5 string (default)
premises_address_2 6 string (default)
premises_address_3 7 string (default)
premises_address_town 8 string (default)
premises_address_code 9 integer (default)
status_of_new_application 10 string (default)
commence_date_of_new_licence 11 string (default)
commence_date_expiry 12 string (default)
licence_number 13 string (default)
licence_type 14 string (default)
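
As a small illustration of this schema, here is a minimal pandas sketch (Python) that loads the CSV with explicit column types taken from the table above. The /r/0.csv resource URL is assumed from the cURL examples further down, as is the use of a nullable integer dtype for premises_address_code.

import pandas as pd

# Assumed resource URL, following the /r/0.csv pattern shown further down this page.
CSV_URL = "https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/r/0.csv"

# Column types taken from the field table above: every field is a string
# except premises_address_code, which is declared as an integer
# (a nullable Int64 is used here in case of blank postal codes).
dtypes = {
    "surname": "string",
    "initials": "string",
    "designation": "string",
    "council_reg_nr": "string",
    "premises_address_1": "string",
    "premises_address_2": "string",
    "premises_address_3": "string",
    "premises_address_town": "string",
    "premises_address_code": "Int64",
    "status_of_new_application": "string",
    "commence_date_of_new_licence": "string",
    "commence_date_expiry": "string",
    "licence_number": "string",
    "licence_type": "string",
}

df = pd.read_csv(CSV_URL, dtype=dtypes)
print(df.dtypes)
print(df.head())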

Integrate this dataset into your favourite tool

Use our data-cli tool designed for data wranglers:

# Download the dataset and its resources
data get https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa
# Show information about the dataset
data info pi/medicine-dispensing-license-holders-pharmacies-nometa
# List the downloaded files
tree pi/medicine-dispensing-license-holders-pharmacies-nometa
# Get a list of the dataset's resources
curl -L -s https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/datapackage.json | grep path

# Get the resources
curl -L https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/r/0.csv
curl -L https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/r/1.zip

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get a list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for (i in seq_along(json_data$resources$datahub$type)) {
  if (json_data$resources$datahub$type[i] == 'derived/csv') {
    path_to_file <- json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: You might need to run the script with root permissions if you are running it on a Linux machine.

Install the Frictionless Data datapackage library and pandas itself:

pip install datapackage
pip install pandas

Now you can use the Data Package with pandas:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/datapackage.json'

# to load Data Package into storage
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/datapackage.json')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())

If you are using JavaScript, please follow the instructions below:

Install data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/pi/medicine-dispensing-license-holders-pharmacies-nometa/datapackage.json'

// We're using a self-invoking async function here so we can use async-await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()