Mozilla4

machine-learning / mozilla4

Files: 3
Size: 946kB
Format: arff, csv, zip
Created: 10 months ago
Updated: 10 months ago
License: Open Data Commons Public Domain Dedication and License
Source: https://www.openml.org/d/1046

Data Files

Download files in this dataset

File            Size    Download                    Description
mozilla4_arff   385kB   arff (385kB)
mozilla4        395kB   csv (395kB), json (1MB)
mozilla4_zip    504kB   zip (504kB)                 Compressed version of the dataset; includes the normalized CSV and JSON data along with the original data and datapackage.json.
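
If you would rather fetch these files from a script than click the download links, here is a minimal Python sketch that grabs the zip bundle and unpacks it using only the standard library. The resource URL is the one listed in the curl examples further down this page; treat the exact path as an assumption that may change.

import io
import urllib.request
import zipfile

# zip resource URL as listed in the curl examples below (the path is assumed stable)
ZIP_URL = "https://datahub.io/machine-learning/mozilla4/r/2.zip"

# download the archive into memory (it is only ~504kB)
with urllib.request.urlopen(ZIP_URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

# list the bundled files (normalized CSV/JSON, original data, datapackage.json)
print(archive.namelist())

# extract everything into a local 'mozilla4' directory
archive.extractall("mozilla4")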


Field information

Field Name   Order   Type (Format)      Description
id           1       number (default)   Numeric identifier assigned to each separate C++ class
start        2       number (default)   Start of the observation interval (0 when the class is first introduced)
end          3       number (default)   End of the observation interval: next modification, deletion, or end of the observation period
event        4       number (default)   1 if a defect fix takes place at time 'end', 0 otherwise
size         5       number (default)   Lines of code of the class at time 'start' (blank and comment lines excluded)
state        6       number (default)   0 until the class experiences its first event, 1 thereafter
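
To check that these columns come through as described, the short pandas sketch below loads the CSV resource and prints the column names and the first few rows. It assumes pandas is installed and reuses the CSV URL shown in the curl examples below.

import pandas as pd

# CSV resource URL as listed in the curl examples below
CSV_URL = "https://datahub.io/machine-learning/mozilla4/r/1.csv"

df = pd.read_csv(CSV_URL)

# expect the six numeric fields listed above: id, start, end, event, size, state
print(df.columns.tolist())
print(df.head())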

Integrate this dataset into your favourite tool

Use our data-cli tool designed for data wranglers:

data get https://datahub.io/machine-learning/mozilla4
data info machine-learning/mozilla4
tree machine-learning/mozilla4
# Get a list of dataset's resources
curl -L -s https://datahub.io/machine-learning/mozilla4/datapackage.json | grep path

# Get resources
curl -L https://datahub.io/machine-learning/mozilla4/r/0.arff
curl -L https://datahub.io/machine-learning/mozilla4/r/1.csv
curl -L https://datahub.io/machine-learning/mozilla4/r/2.zip

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/machine-learning/mozilla4/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for (i in seq_along(json_data$resources$datahub$type)) {
  if (json_data$resources$datahub$type[i] == 'derived/csv') {
    path_to_file <- json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: You might need to run the script with root permissions if you are running on a Linux machine.

Install the Frictionless Data datapackage library and pandas itself:

pip install datapackage
pip install pandas

Now you can use the Data Package with pandas:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/machine-learning/mozilla4/datapackage.json'

# to load Data Package into storage
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/machine-learning/mozilla4/datapackage.json')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())

If you are using JavaScript, please follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/machine-learning/mozilla4/datapackage.json'

// We're using a self-invoking function here as we want to use async/await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // print all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // get a raw readable stream and pipe it to stdout
      const stream = await file.stream()
      stream.pipe(process.stdout)
      // alternatively, load the whole file into memory as a buffer
      // (be careful with large files!): const buffer = await file.buffer
    }
  }
})()

Read me

The resources for this dataset can be found at https://www.openml.org/d/1046

Author:
Source: Unknown - Date unknown
Please cite:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
This is a PROMISE Software Engineering Repository data set made publicly available in order to encourage repeatable, verifiable, refutable, and/or improvable predictive models of software engineering.

If you publish material based on PROMISE data sets, then please follow the acknowledgment guidelines posted on the PROMISE repository web page http://promisedata.org/repository.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
© 2007 A. Gunes Koru
Contact: gkoru AT umbc DOT edu
Phone: +1 (410) 455 8843

This data set is distributed under the Creative Commons Attribution-Share Alike 3.0 License http://creativecommons.org/licenses/by-sa/3.0/

You are free:

  • to Share – copy, distribute and transmit the work
  • to Remix – to adapt the work

Under the following conditions:

Attribution. You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).

Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license.

  • For any reuse or distribution, you must make clear to others the license terms of this work.
  • Any of the above conditions can be waived if you get permission from the copyright holder.
  • Apart from the remix rights granted under this license, nothing in this license impairs or restricts the author’s moral rights.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

  1. Title: Recurrent event (defect fix) and size data for Mozilla Classes. This one includes a binary attribute (event) to show defect fix. The data is at the “observation” level. Each modification made to a C++ class was entered as an observation. A newly added class created an observation. The observation period was between May 29, 2002 and Feb 22, 2006.

  2. Sources (a) Creator: A. Gunes Koru (b) Date: February 23, 2007 (c) Contact: gkoru AT umbc DOT edu Phone: +1 (410) 455 8843

  3. Donor: A. Gunes Koru

  4. Past Usage: This data set was used for:

A. Gunes Koru, Dongsong Zhang, and Hongfang Liu, “Modeling the Effect of Size on Defect Proneness for Open-Source Software”, Predictive Models in Software Engineering Workshop, PROMISE 2007, May 20th 2007, Minneapolis, Minnesota, US.

Abstract: Quality is becoming increasingly important with the continuous adoption of open-source software. Previous research has found that there is generally a positive relationship between module size and defect proneness. Therefore, in open-source software development, it is important to monitor module size and understand its impact on defect proneness. However, traditional approaches to quality modeling, which measure specific system snapshots and obtain future defect counts, are not well suited because open-source modules usually evolve and their size changes over time. In this study, we used Cox proportional hazards modeling with recurrent events to study the effect of class size on defect-proneness in the Mozilla product. We found that the effect of size was significant, and we quantified this effect on defect proneness.

The full paper can be downloaded from A. Gunes Koru’s Website http://umbc.edu/~gkoru by following the Publications link or from the Web site of PROMISE 2007.

  5. Features:

This data set is used to create a conditional Cox proportional hazards model; a rough illustrative sketch is given after the field descriptions below.

id: A numeric identification assigned to each separate C++ class (Note that the id’s do not increment from the first to the last data row)

start: A time infinitesimally greater than the time of the modification that created this observation (practically, modification time). When a class is introduced to a system, a new observation is entered with start=0

end: Either the time of the next modification, or the end of the observation period, or the time of deletion, whichever comes first.

event: event is set to 1 if a defect fix takes place at the time represented by ‘end’, or 0 otherwise. A class deletion is handled easily by entering a final observation whose event is set to 1 if the class is deleted for corrective maintenance, or 0 otherwise.

size: It is a time-dependent covariate and its column carries the number of source Lines of Code of the C++ classes at time ‘start’. Blank and comment lines are not counted.

state: Initially set to 0, and it becomes 1 after the class experiences an event, and remains at 1 thereafter.
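
Purely as an illustration of how the id/start/end/event/size columns map onto a counting-process (start/stop) survival model, here is a hedged Python sketch using the lifelines package. lifelines is an assumption on my part rather than the tooling used by the original authors, and the plain model below is not the conditional specification from the paper; treat it as a starting point only.

import pandas as pd
from lifelines import CoxTimeVaryingFitter  # assumption: lifelines is installed

# CSV resource URL as listed in the curl examples earlier on this page
CSV_URL = "https://datahub.io/machine-learning/mozilla4/r/1.csv"
df = pd.read_csv(CSV_URL)

# lifelines requires start < end; drop any degenerate intervals just in case
df = df[df["start"] < df["end"]]

# keep the interval/event bookkeeping columns plus 'size' as the time-dependent covariate
cols = ["id", "start", "end", "event", "size"]

# counting-process Cox model: each row is one (start, end] interval for a C++ class
ctv = CoxTimeVaryingFitter()
ctv.fit(df[cols], id_col="id", event_col="event", start_col="start", stop_col="end")
ctv.print_summary()

The 'state' flag (whether the class has already experienced an event) could be added as a covariate or stratum to move toward a conditional model; that choice is left open here.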
