Indonesia Regional Elections KPU

aps2201

Files: 5 | Size: 550kB | Format: csv, zip | Updated: 1 year ago

Data Files

Download files in this dataset

File                                     Formats (size)
2017-regional-election-results           csv (51kB), json (100kB)
2017-dki-second-election-round           csv (423B), json (613B)
regional-election-candidates-kpu-2015    csv (343kB), json (662kB)
regional-election-eandidates-kpu-2017    csv (134kB), json (319kB)
pilkada_indonesia_zip                    zip (491kB); compressed version of the dataset, including normalized CSV and JSON with the original data and datapackage.json

2017-regional-election-results  

This is a preview version. There might be more data in the original version.

Field information

Field Name Order Type (Format) Description
idWilayah 1 integer (default) Region of the candidate
nomorUrut 2 integer (default)
jumlahSuara 3 integer (default)
namaPemilihan 4 string (default)
namaWilayah 5 string (default)
namaPropinsi 6 string (default)
namaKd 7 string (default)
namaWkd 8 string (default)
namaKabupatenKota 9 string (default)
persenSuara 10 number (default)
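To see how the result fields fit together: persenSuara should equal a candidate's jumlahSuara as a share of all votes counted in that region. A minimal sketch with made-up rows (the field names come from the table above; the numbers are illustrative, not real results):

```python
# Illustrative rows mirroring the 2017-regional-election-results schema.
# The vote counts below are invented for demonstration only.
rows = [
    {"idWilayah": 1, "nomorUrut": 1, "jumlahSuara": 40000},
    {"idWilayah": 1, "nomorUrut": 2, "jumlahSuara": 60000},
]

# persenSuara: each candidate's share of the regional vote, in percent
total = sum(r["jumlahSuara"] for r in rows)
for r in rows:
    r["persenSuara"] = round(100 * r["jumlahSuara"] / total, 2)

print([r["persenSuara"] for r in rows])  # [40.0, 60.0]
```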

2017-dki-second-election-round  


Field information

Field Name Order Type (Format) Description
idWilayah 1 integer (default)
nomorUrut 2 integer (default)
jumlahSuara 3 integer (default)
namaPemilihan 4 string (default)
namaWilayah 5 string (default)
namaPropinsi 6 string (default)
namaKd 7 string (default)
namaWkd 8 string (default)
persenSuara 9 number (default)
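A second election round has only two tickets, so the winning row is simply the one with the larger jumlahSuara. A sketch using hypothetical names and vote counts, not the actual DKI results:

```python
# Hypothetical second-round rows following the schema above.
rows = [
    {"nomorUrut": 1, "namaKd": "Candidate A", "jumlahSuara": 1000},
    {"nomorUrut": 2, "namaKd": "Candidate B", "jumlahSuara": 2000},
]

# The ticket with the most votes wins the runoff.
winner = max(rows, key=lambda r: r["jumlahSuara"])
print(winner["namaKd"])  # Candidate B
```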

regional-election-candidates-kpu-2015  


Field information

Field Name Order Type (Format) Description
id 1 integer (default)
id_paslon 2 integer (default)
dapil 3 string (default)
nama 4 string (default)
kelamin 5 string (default)
pekerjaan 6 string (default)
dukungan 7 string (default)
pendukung 8 string (default)
jabatan 9 string (default)
idwilayah 10 integer (default)
tempat.lahir 11 string (default)
tanggal.lahir 12 string (default)
alamat 13 string (default)
status 14 string (default)
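In the 2015 candidates file, rows that share an id_paslon belong to the same ticket (pasangan calon), with jabatan distinguishing the head of the ticket from the deputy. A sketch of grouping rows into tickets; the rows and jabatan values below are hypothetical:

```python
from collections import defaultdict

# Hypothetical rows following the 2015 candidates schema.
rows = [
    {"id": 1, "id_paslon": 10, "nama": "A", "jabatan": "GUBERNUR"},
    {"id": 2, "id_paslon": 10, "nama": "B", "jabatan": "WAKIL GUBERNUR"},
    {"id": 3, "id_paslon": 11, "nama": "C", "jabatan": "GUBERNUR"},
    {"id": 4, "id_paslon": 11, "nama": "D", "jabatan": "WAKIL GUBERNUR"},
]

# Group candidate names by their shared id_paslon.
tickets = defaultdict(list)
for r in rows:
    tickets[r["id_paslon"]].append(r["nama"])

print(dict(tickets))  # {10: ['A', 'B'], 11: ['C', 'D']}
```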

regional-election-eandidates-kpu-2017  


Field information

Field Name Order Type (Format) Description
nama 1 string (default)
gender 2 string (default)
ttl 3 string (default)
pekerjaan 4 string (default)
alamat 5 string (default)
id 6 integer (default)
jabatan 7 string (default)
namaWilayah 8 string (default)
kodeWilayah 9 integer (default)
jenisPemilihan 10 string (default)
nomorUrut 11 string (default)
jenisCalon 12 string (default)
parpolPendukung 13 string (default)
statusPenetapan 14 string (default)
urlDetailPaslon 15 string (default)
keterangan 16 string (default)
petahana 17 string (default)
partai 18 integer (default)
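The petahana (incumbent) field is typed as a string, so filtering incumbents requires an assumption about how the flag is encoded. The sketch below assumes "Ya"/"Tidak" values, which should be verified against the actual data:

```python
# Hypothetical rows; the petahana encoding ("Ya"/"Tidak") is an assumption.
rows = [
    {"nama": "X", "petahana": "Ya"},
    {"nama": "Y", "petahana": "Tidak"},
]

# Keep only candidates flagged as incumbents.
incumbents = [r["nama"] for r in rows if r["petahana"].lower() == "ya"]
print(incumbents)  # ['X']
```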

Integrate this dataset into your favourite tool

Use our data-cli tool designed for data wranglers:

data get https://datahub.io/aps2201/pilkada_indonesia
data info aps2201/pilkada_indonesia
tree aps2201/pilkada_indonesia
# Get a list of the dataset's resources
curl -L -s https://datahub.io/aps2201/pilkada_indonesia/datapackage.json | grep path

# Get resources

curl -L https://datahub.io/aps2201/pilkada_indonesia/r/0.csv

curl -L https://datahub.io/aps2201/pilkada_indonesia/r/1.csv

curl -L https://datahub.io/aps2201/pilkada_indonesia/r/2.csv

curl -L https://datahub.io/aps2201/pilkada_indonesia/r/3.csv

curl -L https://datahub.io/aps2201/pilkada_indonesia/r/4.zip
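The path extraction done by the curl | grep pipeline above can be done more robustly by parsing the JSON. A sketch over a minimal stand-in for datapackage.json (the real descriptor at the URL above has more resources, and its paths are absolute URLs):

```python
import json

# A minimal stand-in for datapackage.json; only the fields used here.
datapackage = json.loads("""
{"resources": [
  {"name": "2017-regional-election-results", "path": "r/0.csv"},
  {"name": "pilkada_indonesia_zip", "path": "r/4.zip"}
]}
""")

# Collect the download path of every resource.
paths = [r["path"] for r in datapackage["resources"]]
print(paths)  # ['r/0.csv', 'r/4.zip']
```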

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/aps2201/pilkada_indonesia/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for(i in 1:length(json_data$resources$datahub$type)){
  if(json_data$resources$datahub$type[i]=='derived/csv'){
    path_to_file = json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: you might need to run the script with root permissions if you are on a Linux machine.

Install the Frictionless Data datapackage library and pandas itself:

pip install datapackage
pip install pandas

Now you can use the Data Package in pandas:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/aps2201/pilkada_indonesia/datapackage.json'

# load the Data Package descriptor
package = datapackage.Package(data_url)

# load only the tabular resources
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

Alternatively, in Python you can work with the `datapackage` library directly (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/aps2201/pilkada_indonesia/datapackage.json')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())

If you are using JavaScript, follow the instructions below:

Install data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/aps2201/pilkada_indonesia/datapackage.json'

// We're using self-invoking function here as we want to use async-await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()