S&P 500 Companies with Financial Information

Files: 2
Size: 53.8 kB
Formats: csv
License: ODC-PDDL-1.0

List of companies in the S&P 500 (Standard and Poor's 500). The S&P 500 is a free-float, capitalization-weighted index of the top 500 publicly listed stocks in the US (top 500 by market cap). The dataset includes a list of all the stocks contained therein.

API Access

Access dataset files directly from scripts, code, or AI agents.

Dataset Files

Each file has a stable URL (r-link) that you can use directly in scripts, apps, or AI agents. These URLs are permanent and safe to hardcode.

/core/s-and-p-500-companies/
https://datahub.io/core/s-and-p-500-companies/_r/-/.devcontainer/devcontainer.json
https://datahub.io/core/s-and-p-500-companies/_r/-/.gitignore
https://datahub.io/core/s-and-p-500-companies/_r/-/Makefile
https://datahub.io/core/s-and-p-500-companies/_r/-/README.md
https://datahub.io/core/s-and-p-500-companies/_r/-/UPDATE_SCRIPT_MAINTENANCE_REPORT.md
https://datahub.io/core/s-and-p-500-companies/_r/-/data/constituents.csv
https://datahub.io/core/s-and-p-500-companies/_r/-/data/sector-counts.csv
https://datahub.io/core/s-and-p-500-companies/_r/-/datapackage.json
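As a sketch, the r-links above can be fetched with nothing more than Python's standard library. The base URL and file paths below are copied from the list above; the helper names and the local filename are illustrative choices, not part of the dataset:

```python
import urllib.request

# Stable r-link prefix for this dataset (taken from the file list above).
BASE = "https://datahub.io/core/s-and-p-500-companies/_r/-/"

def r_link(path: str) -> str:
    """Build the stable r-link URL for a file in this dataset."""
    return BASE + path

def download(path: str, dest: str) -> None:
    """Fetch a dataset file over HTTPS and save it locally."""
    urllib.request.urlretrieve(r_link(path), dest)

# Example (performs a network request, so commented out here):
# download("data/constituents.csv", "constituents.csv")
```

Because the URLs are permanent, hardcoding `BASE` in scripts is safe by design.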
Key Files

Start with these files — they give you everything you need to understand and access the dataset.

datapackage.json (metadata & schema)
https://datahub.io/core/s-and-p-500-companies/_r/-/datapackage.json
README.md (documentation)
https://datahub.io/core/s-and-p-500-companies/_r/-/README.md
Typical Usage
  1. Fetch datapackage.json to inspect the schema and resources
  2. Download the data resources listed in datapackage.json
  3. Read README.md for full context
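The steps above can be sketched in Python. The snippet works on any Frictionless-style datapackage.json (a JSON object with a `resources` array, each resource carrying a `path`); the inline sample only mirrors that shape, and its exact resource entries are an assumption, not the real file:

```python
import json

def list_resources(datapackage: dict) -> list[tuple[str, str]]:
    """Return (name, path) pairs for each resource declared in a datapackage."""
    return [(r.get("name", ""), r.get("path", ""))
            for r in datapackage.get("resources", [])]

# Illustrative fragment only; in practice, fetch the real datapackage.json
# from the URL above and pass json.loads(...) of its body instead.
sample = json.loads("""
{
  "name": "s-and-p-500-companies",
  "resources": [
    {"name": "constituents", "path": "data/constituents.csv"}
  ]
}
""")

for name, path in list_resources(sample):
    print(name, path)
```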

Data Files

sector-counts


About

Aggregated count of S&P 500 constituent companies by GICS sector, derived from the constituents resource. The total may exceed 500 because a small number of companies have multiple share classes listed in the index.
Last updated
8 May 2026
Total rows
...
Format
CSV
File size
206 B
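Since sector-counts is derived from constituents, the aggregation can be reproduced with a few lines of standard-library Python. The snippet below is a sketch that assumes the sector column is labelled `GICS Sector`, as on the Wikipedia source page; the inline rows are a tiny excerpt-style example, not the full file:

```python
import csv
import io
from collections import Counter

def sector_counts(constituents_csv: str) -> Counter:
    """Count constituent securities per GICS sector,
    mirroring how sector-counts.csv is derived from constituents.csv."""
    reader = csv.DictReader(io.StringIO(constituents_csv))
    return Counter(row["GICS Sector"] for row in reader)

# Illustrative excerpt; in practice read the contents of data/constituents.csv.
sample = """Symbol,Security,GICS Sector
AAPL,Apple Inc.,Information Technology
MSFT,Microsoft,Information Technology
JPM,JPMorgan Chase,Financials
"""

print(sector_counts(sample))
```

Because each row is one listed security, summing these counts can exceed 500 when companies have multiple share classes, exactly as noted above.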

constituents


About

Full list of S&P 500 constituent companies as sourced from Wikipedia. Each row represents one listed security (ticker symbol). A small number of companies appear twice with different symbols due to multiple share classes.
Last updated
8 May 2026
Total rows
...
Format
CSV
File size
53.6 kB

About this dataset

S&P 500 Companies Dataset

List of companies in the S&P 500 (Standard and Poor's 500). The S&P 500 is a free-float, capitalization-weighted index of the top 500 publicly listed stocks in the US (top 500 by market cap). The dataset includes a list of all the stocks contained therein.

Data

Information on the S&P 500 index used to be available on the official Standard and Poor's website; until they publish it again, Wikipedia's List of S&P 500 companies is the best up-to-date and open data source.

The Founded field contains mixed-format values for some companies — e.g. 2013 (1888) — where the first year is the current legal entity's founding date and the parenthetical year is the predecessor organisation's founding date.

Sources

Detailed information on the S&P 500 (primarily in XLS format) used to be available from its official webpage on the Standard and Poor's website; it was free, but registration was required.

Note
For aggregate information on the S&P (dividends, earnings, etc.) see Standard and Poor's 500 Dataset on GitHub.

General Financial Notes

Publicly listed US companies are obliged to file various reports with the SEC on a regular basis. Of these, two types are of particular interest to investors and others interested in their finances and business. These are:

  • 10-K = Annual Report
  • 10-Q = Quarterly report

Development

The pipeline relies on Python, so you'll need to have it installed on your machine. Then:

  1. Create a virtual environment in a directory using Python's venv module: python3 -m venv .env
  2. Activate the virtual environment: source .env/bin/activate
  3. Install the dependencies: pip install -r scripts/requirements.txt
  4. Run the scripts: python scripts/scrape.py

Alternatively, you can use the provided Makefile to run the scraping with a simple make. It will create a virtual environment, install the dependencies, and run the script.

License

All data is licensed under the Open Data Commons Public Domain Dedication and License. All code is licensed under the MIT/BSD license.

Note that while no credit is formally required, a link back or credit to Rufus Pollock and the Open Knowledge Foundation is much appreciated.