Files: 2 · Size: 14kB · Format: zip, xlsx · Created/Updated: 1 month ago

Data Files

File: sample-pink-dragon-77_zip [zip]
Description: Compressed versions of the dataset; includes normalized CSV and JSON data along with the original data and datapackage.json.
Size: 3kB

File: sample [xlsx]
Size: 4kB

Read me

Import into your tool

R

To use this Data Package in R, follow the instructions below:

install.packages("devtools")
library(devtools)
install_github("hadley/readr")
install_github("ropenscilabs/jsonvalidate")
install_github("ropenscilabs/datapkg")

#Load client
library(datapkg)

#Get Data Package
datapackage <- datapkg_read("https://pkgstore.datahub.io/90998f7f90e086bd5fc7c9075dfda43b/sample-pink-dragon-77/latest")

#Package info
print(datapackage)

#Open actual data in RStudio Viewer
View(datapackage$data$"sample-pink-dragon-77_zip")
View(datapackage$data$"sample")

Pandas

Tested with Python 3.5.2.

To generate Pandas data frames based on JSON Table Schema descriptors, install the jsontableschema-pandas plugin. Resources from a data package are then loaded as Pandas data frames with the datapackage.push_datapackage function; the resulting storage object works as a container for the data frames.

To work with Data Packages in Pandas, install the required packages:

$ pip install datapackage
$ pip install jsontableschema-pandas

To load the Data Package, run the following code:

import datapackage

data_url = "https://pkgstore.datahub.io/90998f7f90e086bd5fc7c9075dfda43b/sample-pink-dragon-77/latest/datapackage.json"

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'pandas')

# to see datasets in this package
storage.buckets

# you can access datasets inside storage, e.g. the first one:
storage[storage.buckets[0]]
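
Each bucket in the storage is backed by a Pandas data frame, so, as a rough sketch on top of the snippet above (not part of the official instructions), you can inspect the loaded tables with ordinary Pandas calls:

# iterate over the loaded tables and show their shape and first rows
for bucket in storage.buckets:
    df = storage[bucket]  # a regular pandas.DataFrame
    print(bucket, df.shape)
    print(df.head())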

Python

To work with Data Packages in Python, install the datapackage package:

$ pip install datapackage

To load the Data Package into your Python environment, run the following code:

import datapackage

dp = datapackage.DataPackage('https://pkgstore.datahub.io/90998f7f90e086bd5fc7c9075dfda43b/sample-pink-dragon-77/latest/datapackage.json')

# see metadata
print(dp.descriptor)

# get the list of resource (csv file) names
csvList = [resource.descriptor['name'] for resource in dp.resources]
print(csvList)  # ["resource name", ...]

# access a csv file by its index (starting at 0)
print(dp.resources[0].data)
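
As a rough convenience sketch (not part of the instructions above), the same descriptor names can be used to key the parsed data:

# build a name -> data mapping for access by resource name
dataByName = {r.descriptor['name']: r.data for r in dp.resources}
print(list(dataByName))        # resource names, same as csvList
print(dataByName[csvList[0]])  # same as dp.resources[0].data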

JavaScript

To use this dataset in JavaScript, follow the instructions below.

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the code snippet below:

  const {Dataset} = require('data.js')

  const path = 'https://pkgstore.datahub.io/90998f7f90e086bd5fc7c9075dfda43b/sample-pink-dragon-77/latest/datapackage.json'

  // Dataset.load returns a Promise, so use it inside an async function
  ;(async () => {
    const dataset = await Dataset.load(path)

    // get a data file in this dataset
    const file = dataset.resources[0]
    const data = await file.stream()
  })()

SQL

To work with Data Packages in SQL, install the required packages:

$ pip install datapackage
$ pip install jsontableschema-sql
$ pip install sqlalchemy

To import the Data Package into your SQLite database, run the following code:

import datapackage
from sqlalchemy import create_engine

data_url = 'https://pkgstore.datahub.io/90998f7f90e086bd5fc7c9075dfda43b/sample-pink-dragon-77/latest/datapackage.json'
engine = create_engine('sqlite:///:memory:')

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'sql', engine=engine)

# to see datasets in this package
storage.buckets

# execute an SQL command; the table name is built from the data folder,
# the resource name and the file name (here "data", "data" and "data.csv"
# give data__data___data)
storage._Storage__connection.execute('select * from data__data___data limit 1;').fetchall()

# description of the table columns
storage.describe('data__data___data')
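
As a rough sketch on top of the snippet above (not part of the official instructions), the same in-memory database can also be queried directly through the SQLAlchemy engine created earlier, avoiding the private _Storage__connection attribute; the table name data__data___data is assumed from the example above:

from sqlalchemy import text

# query the loaded table via the engine created earlier
with engine.connect() as conn:
    result = conn.execute(text('select * from data__data___data limit 1;'))
    print(result.fetchall())
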
Datapackage.json