Standard and Poor's 500 Companies List With Financial Information

JohnSnowLabs

Files: 2
Size: 513kB
Format: csv, zip
Created/Updated: 3 months ago
License: John Snow Labs Standard License
Source: John Snow Labs, S&P Dow Jones Indices

Data Files

standard-and-poors-500-companies-list-with-financial-information-csv
  Size: 80kB; Download: csv (80kB), json (219kB)
standard-and-poors-500-companies-list-with-financial-information_zip
  Description: Compressed versions of the dataset, including normalized CSV and JSON data together with the original data and datapackage.json.
  Size: 83kB; Download: zip (83kB)

standard-and-poors-500-companies-list-with-financial-information-csv  

This is a preview version. There might be more data in the original version.

Field information

Field Name Order Type (Format) Description
Company_Symbol 1 string Ticker symbol that identifies the company
Company_Name 2 string Name of the company
GICS_Sector 3 string Sector of the company according to GICS (Global Industry Classification Standard)
Price 4 number Price of asset
Dividend_Yield 5 number Dividend yield is a financial ratio that indicates how much a company pays out in dividends each year relative to its share price. Dividend yield is represented as a percentage and can be calculated by dividing the dollar value of dividends paid in a given year per share of stock held by the dollar value of one share of stock.
Price_To_Earnings_Ratio 6 number The price-to-earnings ratio, or P/E, is the ratio of the market price of a company’s stock to its earnings per share (EPS); see the sketch after this table.
Earnings_Per_Share 7 number Earnings per share (EPS) is the portion of a company's profit allocated to each outstanding share of common stock. Earnings per share serves as an indicator of a company's profitability.
Book_Value 8 number Book value of an asset is the value at which the asset is carried on a balance sheet and calculated by taking the cost of an asset minus the accumulated depreciation. Book value is also the net asset value of a company, calculated as total assets minus intangible assets (patents, goodwill) and liabilities. For the initial outlay of an investment, book value may be net or gross of expenses such as trading costs, sales taxes, service charges and so on.
Week_52_Low 9 number A 52-week low is the lowest price that a stock has traded at during the previous year. Many traders and investors view the 52-week high or low as an important factor in determining a stock's current value and predicting future price movement. As a stock trades within its 52-week price range (the range that exists between the 52-week low and the 52-week high), investors may show increased interest as price nears either the high or the low.
Week_52_High 10 number A 52-week high is the highest price that a stock has traded at during the previous year. Many traders and investors view the 52-week high or low as an important factor in determining a stock's current value and predicting future price movement. As a stock trades within its 52-week price range (the range that exists between the 52-week low and the 52-week high), investors may show increased interest as price nears either the high or the low.
Market_Cap 11 number Market capitalization refers to the total dollar market value of a company's outstanding shares. Commonly referred to as "market cap," it is calculated by multiplying a company's shares outstanding by the current market price of one share.
EBITDA 12 number EBITDA stands for earnings before interest, taxes, depreciation and amortization. EBITDA is one indicator of a company's financial performance and is used as a proxy for the earning potential of a business.
Price_To_Sales_Ratio 13 number A valuation ratio that compares a company’s stock price to its revenues. The price-to-sales ratio is an indicator of the value placed on each dollar of a company’s sales or revenues. It can be calculated either by dividing the company’s market capitalization by its total sales over a 12-month period, or on a per-share basis by dividing the stock price by sales per share for a 12-month period. Like all ratios, the price-to-sales ratio is most relevant when used to compare companies in the same sector.
Price_To_Book_Ratio 14 number The price-to-book ratio (P/B Ratio) is a ratio used to compare a stock's market value to its book value. It is calculated by dividing the current closing price of the stock by the latest quarter's book value per share.
SEC_Filings 15 string An SEC filing is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular SEC filings. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes.
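
As a quick illustration of the ratio definitions above, the sketch below loads the published CSV with pandas and recomputes the price-to-earnings ratio from Price and Earnings_Per_Share. This is a minimal sketch, assuming the column names listed in the table and that resource 0 of the dataset is the derived CSV (the same URL used in the curl examples further down).

import pandas as pd

# Derived CSV resource of this dataset (assumption: resource 0 is the CSV, as in the curl example below)
csv_url = 'https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/r/0.csv'
df = pd.read_csv(csv_url)

# Recompute P/E as Price / Earnings_Per_Share, skipping rows with zero EPS,
# and compare with the published Price_To_Earnings_Ratio column
# (column names assumed from the field table above).
valid = df['Earnings_Per_Share'] != 0
df.loc[valid, 'PE_recomputed'] = df.loc[valid, 'Price'] / df.loc[valid, 'Earnings_Per_Share']
print(df.loc[valid, ['Company_Symbol', 'Price_To_Earnings_Ratio', 'PE_recomputed']].head())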

Import into your tool

data-cli, or simply data, is the command-line tool for getting and posting your data with the DataHub.
Use data with datahub.io much as you use git with GitHub. Installation instructions are available on datahub.io.

data get https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information
tree JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information
# Get a list of dataset's resources
curl -L -s https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/datapackage.json | grep path

# Get resources

curl -L https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/r/0.csv

curl -L https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/r/1.zip

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for (i in seq_along(json_data$resources$datahub$type)) {
  if (json_data$resources$datahub$type[i] == 'derived/csv') {
    path_to_file <- json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: You might need to run the script with root permissions if you are running it on a Linux machine, since install.packages may need write access to the system library.

Install the Frictionless Data datapackage library and pandas itself:

pip install datapackage
pip install pandas

Now you can use the Data Package with pandas:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/datapackage.json'

# load the Data Package (descriptor and resources)
package = datapackage.Package(data_url)

# read only the tabular resources into pandas DataFrames
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)
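
Once a tabular resource has been read into a DataFrame, the field names documented above can be used directly. The snippet below is a small usage sketch, assuming the loop above left the CSV resource in data and that the GICS_Sector and Market_Cap columns are present as described in the field table:

# Total market capitalization per GICS sector, largest first
# (column names assumed from the field table above)
sector_caps = (
    data.groupby('GICS_Sector')['Market_Cap']
        .sum()
        .sort_values(ascending=False)
)
print(sector_caps)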

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To get the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/datapackage.json')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())
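
If you prefer working with the rows as a pandas DataFrame rather than plain lists, resource.read(keyed=True) returns each row as a dict keyed by field name, which pandas can consume directly. A minimal sketch, assuming pandas is installed as in the previous section:

import pandas as pd

for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        rows = resource.read(keyed=True)  # list of dicts, one per row
        df = pd.DataFrame(rows)
        print(df.head())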

If you are using JavaScript, please follow the instructions below.

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/JohnSnowLabs/standard-and-poors-500-companies-list-with-financial-information/datapackage.json'

// We're using an immediately-invoked async function here so we can use async/await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // entire file as a buffer (be careful with large files!)
      const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()