Austin Watershed Reach Index and Problem Scores

JohnSnowLabs

Files: 2
Size: 174kB
Format: csv, zip
Created / Updated: 1 week ago
License: John Snow Labs Standard License
Source: John Snow Labs; City of Austin

Data Files

austin-watershed-reach-index-and-problem-scores-csv  

This is a preview version. There might be more data in the original version.

Field information

Field Name Order Type (Format) Description
Watershed_ID 1 integer A unique watershed identification number
Watershed_Name 2 string Official Watershed Name
Integrity_Score_ID 3 integer Primary Integer Key for this dataset
Watershed_Reach 4 string Reach number within the watershed; 1 is the most downstream reach, with numbers ascending for subsequent upstream reaches in the same watershed.
Year_of_Observation 5 date (%Y-%m-%d) The City of Austin Fiscal year the data points are associated with. FY 2014 started on 01-OCT-2013 and ended 30-SEP-2014.
Index_Phase 6 integer For streams only; one of two sampling phases. Lake data are all collected in the same phase.
Index_Source_Type 7 string EII - Environmental Integrity Index or ALI - Austin Lakes Index
Overall_Score 8 integer Overall Index Score. 100 = best condition. Average of other index scores. Problem scores not included.
Aquatic_Life 9 integer Aquatic Life Index score. 100 = best condition. (Bugs, diatoms abundance, diversity, pollution tolerance and other metrics)
Contact_Recreation 10 integer Contact Recreation Index score (for creeks only). 100 = best condition. (bacteria)
Eutrophication 11 integer Eutrophication Index score (for lakes only). 100 = best condition. In general, lower chlorophyll-a abundance and lower proportion of blue-green algae lead to a higher score to represent a superior trophic condition.
Habitat 12 integer Habitat Index score. 100 = best condition. Instream cover and substrate niches.
Non_Contact_Recreation 13 integer Non Contact Recreation Index score (creeks only). 100 = best condition. Aesthetics, odor, safety.
Sediment 14 integer Sediment Index score. 100 = best condition. Average of metals, PCBs, and pesticides.
Vegetation 15 integer Vegetation Index score (for lakes only). 100 = best condition
Water_Quality 16 integer Water Quality Index score. 100 = best condition. (nutrients, temperature, TSS)
Animal_Waste_Problem 17 integer Animal Waste Problem score (creeks only). 100 = worst condition. Pet waste.
Construction_Runoff_Problem 18 integer Construction TSS Problem score (creeks only). 100 = worst condition. Failure of erosion and sedimentation controls.
Fertilizer_Problem 19 integer Fertilizer problem score (creeks only). 100 = worst condition. Nitrate.
Litter_Problem 20 integer Litter problem score (creeks only). 100 = worst condition. Trash.
Riparian_Vegetation_Problem 21 integer Riparian Vegetation problem score (creeks only). 100 = worst condition. Not enough riparian cover.
Sediment_Problem 22 integer Sediment problem score (creeks only). 100 = worst condition. Worst of the problem-set scores.
Sewage_Problem 23 integer Sewage problem score (creeks only). 100 = worst condition. Water Quality problem caused by sewage.
Stability_Problem 24 integer Stability problem score (creeks only). 100 = worst condition. Stream bank failures.
Water_Quality_Problem 25 integer Water Quality problem score (creeks only). 100 = worst condition. Water quality worst case.
Created_Date 26 date (%Y-%m-%d) Date when Record was Created
Modified_Date 27 date (%Y-%m-%d) Date when Record was Modified
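
To illustrate how these fields fit together, here is a minimal pandas sketch that loads the CSV and summarizes overall index scores per watershed. Keep in mind that the index scores (Overall_Score through Water_Quality) use 100 = best condition, while the problem scores use 100 = worst condition, so the two groups point in opposite directions. The local filename is an assumption; point it at the CSV you downloaded.

import pandas as pd

# assumed local filename; adjust to wherever you saved the CSV preview
df = pd.read_csv("austin-watershed-reach-index-and-problem-scores.csv")

# Index scores: 100 = best condition. Mean overall score per watershed, worst first.
summary = df.groupby("Watershed_Name")["Overall_Score"].mean().sort_values()
print(summary.head())

# Problem scores are inverted (100 = worst), e.g. creek reaches with a high litter problem.
litter = df.loc[df["Litter_Problem"] >= 75,
                ["Watershed_Name", "Watershed_Reach", "Litter_Problem"]]
print(litter)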

austin-watershed-reach-index-and-problem-scores_zip  

This is a preview version. There might be more data in the original version.

Read me

Import into your tool

If you are using R, here's how to load the data quickly:

install.packages("jsonlite")
library("jsonlite")

json_file <- "http://datahub.io/JohnSnowLabs/austin-watershed-reach-index-and-problem-scores/datapackage.json"
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# access the csv file by its index (indexing starts from 1)
path_to_file <- json_data$resources$path[1]
data <- read.csv(url(path_to_file))
print(data)

To work with Data Packages in pandas, you need to install the Frictionless Data datapackage library and the pandas extension:

pip install datapackage
pip install jsontableschema-pandas

To get the data, run the following code:

import datapackage

data_url = "http://datahub.io/JohnSnowLabs/austin-watershed-reach-index-and-problem-scores/datapackage.json"

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'pandas')

# data frames available (corresponding to data files in original dataset)
storage.buckets

# you can access datasets inside storage, e.g. the first one:
storage[storage.buckets[0]]
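
The bucket lookup above returns a pandas DataFrame, so ordinary pandas operations apply from there; a short follow-on sketch (the bucket name is resolved at runtime, so nothing is hardcoded here):

# the first bucket holds the tabular resource as a pandas DataFrame
df = storage[storage.buckets[0]]
print(df.head())
print(df.dtypes)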

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('http://datahub.io/JohnSnowLabs/austin-watershed-reach-index-and-problem-scores/datapackage.json')

# get list of resources:
resources = package.descriptor['resources']
resourceList = [resources[x]['name'] for x in range(0, len(resources))]
print(resourceList)

data = package.resources[0].read()
print(data)
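
If you want the rows keyed by field name instead of positional lists (handy for building a pandas DataFrame yourself), the resource object also supports keyed reads. The snippet below is a sketch that assumes the first resource is the CSV preview and that pandas is installed:

import pandas as pd
from datapackage import Package

package = Package('http://datahub.io/JohnSnowLabs/austin-watershed-reach-index-and-problem-scores/datapackage.json')

# read(keyed=True) returns one dict per row, keyed by the field names listed above
rows = package.resources[0].read(keyed=True)
df = pd.DataFrame(rows)
print(df.head())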

If you are using JavaScript, follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'http://datahub.io/JohnSnowLabs/austin-watershed-reach-index-and-problem-scores/datapackage.json'

// We're using a self-invoking function here because we want to use async/await syntax:
(async () => {
  const dataset = await Dataset.load(path)

  // Get the first data file in this dataset
  const file = dataset.resources[0]
  // Get a raw stream
  const stream = await file.stream()
  // entire file as a buffer (be careful with large files!)
  const buffer = await file.buffer
})()

Install the datapackage library for Ruby using gem:

gem install datapackage

Now get the dataset and read the data:

require 'datapackage'

path = 'http://datahub.io/JohnSnowLabs/austin-watershed-reach-index-and-problem-scores/datapackage.json'

package = DataPackage::Package.new(path)
# The package variable now contains the metadata. You can print it:
puts package

# Read data itself:
resource = package.resources[0]
data = resource.read
puts data