API Access
Access dataset files directly from scripts, code, or AI agents.
Browse dataset files
Dataset Files
Each file has a stable URL (r-link) that you can use directly in scripts, apps, or AI agents. These URLs are permanent and safe to hardcode.
/core/s-and-p-500-companies/
https://datahub.io/core/s-and-p-500-companies/_r/-/.devcontainer/devcontainer.json
https://datahub.io/core/s-and-p-500-companies/_r/-/.gitignore
https://datahub.io/core/s-and-p-500-companies/_r/-/Makefile
https://datahub.io/core/s-and-p-500-companies/_r/-/README.md
https://datahub.io/core/s-and-p-500-companies/_r/-/UPDATE_SCRIPT_MAINTENANCE_REPORT.md
https://datahub.io/core/s-and-p-500-companies/_r/-/data/constituents.csv
https://datahub.io/core/s-and-p-500-companies/_r/-/data/sector-counts.csv
https://datahub.io/core/s-and-p-500-companies/_r/-/datapackage.json
https://datahub.io/core/s-and-p-500-companies/_r/-/datapackage.yaml
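Because r-links follow a fixed pattern, you can build them in code instead of copying each one by hand. A minimal Python sketch; the `r_link` helper is illustrative, not part of any official DataHub client:

```python
def r_link(org: str, dataset: str, path: str) -> str:
    """Build a stable DataHub r-link URL for a file in a dataset.

    Illustrative helper, not an official DataHub API.
    """
    return f"https://datahub.io/{org}/{dataset}/_r/-/{path}"


def fetch_text(url: str) -> str:
    """Download a file over HTTP using only the standard library."""
    from urllib.request import urlopen

    with urlopen(url) as resp:
        return resp.read().decode("utf-8")


# Example: the r-link for the constituents CSV listed above.
csv_url = r_link("core", "s-and-p-500-companies", "data/constituents.csv")
# csv_text = fetch_text(csv_url)  # uncomment to actually download
```

Since these URLs are permanent, hardcoding the result of `r_link` in a script or agent prompt is safe.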
Key Files
Start with these files — they give you everything you need to understand and access the dataset.
datapackage.json — metadata & schema
https://datahub.io/core/s-and-p-500-companies/_r/-/datapackage.json
README.md — documentation
https://datahub.io/core/s-and-p-500-companies/_r/-/README.md
Typical Usage
1. Fetch datapackage.json to inspect schema and resources
2. Download the data resources listed in datapackage.json
3. Read README.md for full context
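The steps above can be sketched in Python. The descriptor fragment below is a trimmed, illustrative example of what a datapackage.json in the Frictionless Data Package format typically contains; the real file has more fields and possibly different values:

```python
import json

# Illustrative fragment of a datapackage.json descriptor
# (Frictionless Data Package format); the real file has more fields.
descriptor_json = """
{
  "name": "s-and-p-500-companies",
  "resources": [
    {"name": "constituents", "path": "data/constituents.csv", "format": "csv"},
    {"name": "sector-counts", "path": "data/sector-counts.csv", "format": "csv"}
  ]
}
"""


def list_resource_paths(descriptor: dict) -> list[str]:
    """Steps 1-2: inspect the descriptor and collect resource paths to download."""
    return [res["path"] for res in descriptor.get("resources", [])]


descriptor = json.loads(descriptor_json)
paths = list_resource_paths(descriptor)
# paths == ["data/constituents.csv", "data/sector-counts.csv"]
```

Each path is relative to the dataset root, so it can be joined onto the dataset's r-link base to download the resource.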
Data Previews
sector-counts
Schema
| name | type | description |
|---|---|---|
| sector | string | |
| count | integer | Number of S&P 500 constituent companies |
constituents
Schema
| name | type |
|---|---|
| Symbol | string |
| Security | string |
| GICS Sector | string |
| GICS Sub-Industry | string |
| Headquarters Location | string |
| Date added | string |
| CIK | string |
| Founded | string |
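Given the schemas above, sector-counts is just a roll-up of constituents: one row per GICS Sector with the number of member companies. A stdlib-only sketch using a tiny inline sample (the real file has 500+ rows and real values may differ):

```python
import csv
import io
from collections import Counter

# Tiny inline sample in the constituents schema shown above;
# the real constituents.csv has 500+ rows.
sample = """Symbol,Security,GICS Sector,GICS Sub-Industry,Headquarters Location,Date added,CIK,Founded
AAPL,Apple Inc.,Information Technology,"Technology Hardware, Storage & Peripherals","Cupertino, California",1982-11-30,0000320193,1977
MSFT,Microsoft,Information Technology,Systems Software,"Redmond, Washington",1994-06-01,0000789019,1975
JPM,JPMorgan Chase,Financials,Diversified Banks,"New York City, New York",1975-06-30,0000019617,2000
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Count constituents per GICS Sector, mirroring the sector-counts schema.
sector_counts = Counter(row["GICS Sector"] for row in rows)
# sector_counts == Counter({"Information Technology": 2, "Financials": 1})
```

Running the same aggregation over the full constituents.csv should reproduce the sector-counts resource.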
Data Files
| File | Description | Size | Last modified | Download |
|---|---|---|---|---|
| sector-counts | | 218 B | about 1 month ago | sector-counts |
| constituents | | 53.6 kB | 10 days ago | constituents |
| Files | Size | Format | Created | Updated | License | Source |
|---|---|---|---|---|---|---|
| 2 | 53.9 kB | csv | | 10 days ago | Open Data Commons Public Domain Dedication and License v1.0 | |
Update Script Maintenance Report
Date: 2026-03-04
- Root cause: Wikipedia fetch via direct `pandas.read_html(url)` hit HTTP 403 in automation.
- Fixes made: switched the scraper to `requests` with an explicit User-Agent and parsed HTML from the response content; modernized the workflow with explicit write permission.
- Validation: a local run reproduces and addresses the fetch-path issue; the workflow now commits only when output changes.
- Known blockers: none identified in this remediation pass.
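The fix described above can be sketched as follows. The User-Agent value and target URL are assumptions for illustration, not copied from the dataset's actual update script:

```python
# Sketch of the remediation: fetch the page with requests and an explicit
# User-Agent, then hand the HTML to pandas, rather than letting
# pandas.read_html(url) issue the (403-prone) request itself.
# The header value and URL below are illustrative, not from the real script.
HEADERS = {"User-Agent": "datahub-updater/1.0 (+https://datahub.io)"}
WIKI_URL = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"


def fetch_constituent_tables(url: str = WIKI_URL):
    """Return all tables parsed from the page's HTML content."""
    import io

    import pandas as pd
    import requests

    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    # Parse from the response text, not the URL, so our User-Agent is used
    # for the request instead of pandas' default.
    return pd.read_html(io.StringIO(resp.text))
```

Passing the downloaded HTML (wrapped in `StringIO`) to `pandas.read_html` keeps the parsing logic unchanged while routing the HTTP request through `requests`.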