API Access
Access dataset files directly from scripts, code, or AI agents.
Each file has a stable URL (r-link) that you can use directly in scripts, apps, or AI agents. These URLs are permanent and safe to hardcode.
Start with these files — they give you everything you need to understand and access the dataset.
1. Fetch `datapackage.json` to inspect the schema and resources
2. Download the data resources listed in `datapackage.json`
3. Read `README.md` for full context
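The three steps above can be sketched in Python. The URL below is a placeholder (substitute the actual r-link shown on this page), and an inline sample descriptor stands in for the fetched file so the sketch runs without network access:

```python
import json
from urllib.parse import urljoin
from urllib.request import urlopen

# Placeholder -- substitute the dataset's actual r-link for datapackage.json
DATAPACKAGE_URL = "https://example.org/dataset/datapackage.json"

def list_resources(descriptor: dict) -> list:
    """Return the path of every data resource in a datapackage descriptor."""
    return [r["path"] for r in descriptor.get("resources", [])]

# Step 1: fetch and parse the descriptor. In real use:
#   descriptor = json.load(urlopen(DATAPACKAGE_URL))
descriptor = {
    "name": "s-and-p-500-companies",
    "resources": [
        {"name": "constituents", "path": "data/constituents.csv", "format": "csv"},
        {"name": "sector-counts", "path": "data/sector-counts.csv", "format": "csv"},
    ],
}

# Step 2: resolve each resource path against the descriptor URL and download
for path in list_resources(descriptor):
    url = urljoin(DATAPACKAGE_URL, path)
    print(url)
    # data = urlopen(url).read()  # actual download, omitted in this sketch
```

Resource paths in a datapackage are relative to the descriptor, which is why `urljoin` is used to build the download URLs.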
Data Files
sector-counts
| Field | Type | Description |
|---|---|---|
| sector | string | GICS (Global Industry Classification Standard) sector name |
| count | integer | Number of S&P 500 constituent companies |
About
Aggregated count of S&P 500 constituent companies by GICS sector, derived from the constituents resource. The total may exceed 500 because a small number of companies have multiple share classes listed in the index.

- Last updated: 8 May 2026
- Total rows: ...
- Format: CSV
- File size: 206 B
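As a quick sanity check, the sector-counts file can be read with Python's `csv` module and the counts summed. The sample rows below are illustrative, not the real data:

```python
import csv
import io

def total_constituents(csv_text: str) -> int:
    """Sum the count column of a sector-counts CSV.
    The total may exceed 500 because of multiple share classes."""
    return sum(int(row["count"]) for row in csv.DictReader(io.StringIO(csv_text)))

# Illustrative sample with the same header as sector-counts.csv
sample = "sector,count\nInformation Technology,80\nHealth Care,60\n"
print(total_constituents(sample))  # 140 for this sample
```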
constituents
| Field | Type | Description |
|---|---|---|
| Symbol | string | Stock ticker symbol as listed on the exchange |
| Security | string | Company name as listed in the index |
| GICS Sector | string | GICS (Global Industry Classification Standard) sector to which the company belongs |
| GICS Sub-Industry | string | GICS sub-industry classification, a finer-grained grouping within the sector |
| Headquarters Location | string | City and state (or country) where the company's headquarters is located |
| Date added | date | Date the company was added to the S&P 500 index, in YYYY-MM-DD format |
| CIK | string | SEC Central Index Key — the unique identifier assigned by the US Securities and Exchange Commission |
| Founded | string | Year the company was founded. Some entries include a parenthetical note, e.g. '2013 (1888)', indicating the year of the current legal entity followed by the predecessor organisation's founding year |
About
Full list of S&P 500 constituent companies as sourced from Wikipedia. Each row represents one listed security (ticker symbol). A small number of companies appear twice with different symbols due to multiple share classes.

- Last updated: 8 May 2026
- Total rows: ...
- Format: CSV
- File size: 53.6 kB
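The sector-counts resource can be reproduced from constituents with a few lines of Python. The rows below are an illustrative sample; note how two share classes of the same company (e.g. Alphabet's GOOGL and GOOG) each count once:

```python
import csv
import io
from collections import Counter

def sector_counts(csv_text: str) -> Counter:
    """Count constituents per GICS Sector; each share class counts separately."""
    return Counter(row["GICS Sector"] for row in csv.DictReader(io.StringIO(csv_text)))

# Illustrative sample (subset of the constituents columns)
sample = (
    "Symbol,Security,GICS Sector\n"
    "GOOGL,Alphabet Inc. (Class A),Communication Services\n"
    "GOOG,Alphabet Inc. (Class C),Communication Services\n"
    "MMM,3M,Industrials\n"
)
print(sector_counts(sample))
# Communication Services: 2, Industrials: 1 for this sample
```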
About this dataset
S&P 500 Companies Dataset
List of companies in the S&P 500 (Standard and Poor's 500). The S&P 500 is a free-float, capitalization-weighted index of the 500 largest publicly listed US stocks by market capitalization. The dataset lists all the stocks contained therein.
Data
Information on the S&P 500 index used to be available on Standard and Poor's official website; until it is published there again, Wikipedia's List of S&P 500 companies is the best up-to-date, open data source.
The Founded field contains mixed-format values for some companies — e.g. 2013 (1888) — where the first year is the current legal entity's founding date and the parenthetical year is the predecessor organisation's founding date.
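A small helper for splitting such mixed-format Founded values (the function name and return shape are illustrative, not part of the dataset tooling):

```python
import re

def parse_founded(value: str):
    """Split a Founded value like '2013 (1888)' into a pair
    (current_entity_year, predecessor_year_or_None)."""
    m = re.match(r"(\d{4})(?:\s*\((\d{4})\))?", value.strip())
    if m is None:
        return None, None
    predecessor = int(m.group(2)) if m.group(2) else None
    return int(m.group(1)), predecessor

print(parse_founded("2013 (1888)"))  # (2013, 1888)
print(parse_founded("1975"))         # (1975, None)
```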
Sources
Detailed information on the S&P 500 (primarily in XLS format) used to be available from its official webpage on the Standard and Poor's website; it was free, but registration was required.
Note
For aggregate information on the S&P (dividends, earnings, etc.) see Standard and Poor's 500 Dataset on GitHub.
General Financial Notes
Publicly listed US companies are obliged to file various reports with the SEC on a regular basis. Two of these are of especial interest to investors and others interested in company finances and business. These are:
- 10-K = Annual report
- 10-Q = Quarterly report
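The dataset's CIK column pairs naturally with these filings: SEC EDGAR's public company-browse endpoint accepts a CIK and a form type. A sketch, assuming EDGAR's long-standing `browse-edgar` URL pattern (verify against current SEC documentation before relying on it):

```python
from urllib.parse import urlencode

def edgar_filings_url(cik: str, form_type: str = "10-K") -> str:
    """Build an EDGAR browse URL listing a company's filings of one form type.
    The endpoint pattern is assumed from EDGAR's public search; verify it."""
    params = {"action": "getcompany", "CIK": cik, "type": form_type}
    return "https://www.sec.gov/cgi-bin/browse-edgar?" + urlencode(params)

# Placeholder CIK; in real use, take the CIK value from the constituents file
print(edgar_filings_url("0000000000", "10-Q"))
```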
Development
The pipeline relies on Python, so you'll need to have it installed on your machine. Then:

1. Create a virtual environment using Python's venv module: `python3 -m venv .env`
2. Activate the virtual environment: `source .env/bin/activate`
3. Install the dependencies: `pip install -r scripts/requirements.txt`
4. Run the scripts: `python scripts/scrape.py`
Alternatively, you can use the provided Makefile: running a simple `make` will create the virtual environment, install the dependencies, and run the script.
License
All data is licensed under the Open Data Commons Public Domain Dedication and License. All code is licensed under the MIT/BSD license.
Note that while no credit is formally required, a link back or credit to Rufus Pollock and the Open Knowledge Foundation is much appreciated.