US Investor Flow of Funds into Investment Classes (Bonds, Equities, etc.)

Files: 2
Size: 35kB
Format: csv
License: PDDL-1.0
Source: Investment Company Institute (ICI)

Data Files

monthly [csv, 6kB; also available as JSON, 19kB]. All figures are in millions of USD.
weekly [csv, 2kB; also available as JSON, 8kB]. All figures are in millions of USD.

monthly  

Field information

Field Name       Order  Type (Format)
Date             1      date (fmt:%Y-%m-%d)
Total Equity     2      integer
Domestic Equity  3      integer
World Equity     4      integer
Hybrid           5      integer
Total Bond       6      integer
Taxable Bond     7      integer
Municipal Bond   8      integer
Total            9      integer

weekly  

Field information

Field Name       Order  Type (Format)
Date             1      date (fmt:%Y-%m-%d)
Total Equity     2      integer
Domestic Equity  3      integer
World Equity     4      integer
Hybrid           5      integer
Total Bond       6      integer
Taxable Bond     7      integer
Municipal Bond   8      integer
Total            9      integer

Read me

Monthly and weekly net new cash flow by US investors into various mutual fund investment classes (equities, bonds, etc.). Statistics come from the Investment Company Institute (ICI).

Data

The data comes from the ICI Statistics pages, in particular:

  • Summary: Estimated Long-Term Mutual Fund Flows Data (xls)

Notes for Long-Term Mutual Fund Flows Data:

  • All figures are (nominal) millions of US dollars (USD)
  • Weekly cash flows are estimates based on reporting covering 98 percent of industry assets, while monthly flows are actual numbers as reported in ICI's "Trends in Mutual Fund Investing."
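
For a quick look at the files themselves, the monthly CSV can be loaded directly with pandas. This is a minimal sketch, not part of the package tooling: the local file name monthly.csv is an assumption (download it via the links above first), and the checks at the end assume the sub-categories sum to their totals, which the notes above do not state explicitly.

import pandas as pd

# a local copy of the monthly file is assumed; download it first
df = pd.read_csv("monthly.csv", parse_dates=["Date"])

# all figures are nominal millions of USD
print(df.tail())

# assumed (not stated in the notes above): sub-categories sum to their totals
print((df["Domestic Equity"] + df["World Equity"] - df["Total Equity"]).abs().max())
print((df["Taxable Bond"] + df["Municipal Bond"] - df["Total Bond"]).abs().max())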

Preparation

To prepare the data, install the requirements:

pip install -r scripts/requirements.txt

and then run the processing script:

python scripts/process.py

Import into your tool

To use the Data Package in R, follow the instructions below:

install.packages("devtools")
library(devtools)
install_github("hadley/readr")
install_github("ropenscilabs/jsonvalidate")
install_github("ropenscilabs/datapkg")

# Load the client
library(datapkg)

# Get the Data Package
datapackage <- datapkg_read("https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest")

# Package info
print(datapackage)

# Open the actual data in the RStudio Viewer
View(datapackage$data$"monthly")
View(datapackage$data$"weekly")

Tested with Python 3.5.2

To generate Pandas data frames from JSON Table Schema descriptors, we have to install the jsontableschema-pandas plugin. To load resources from a data package as Pandas data frames, use the datapackage.push_datapackage function; the resulting storage works as a container for the Pandas data frames.

To work with Data Packages in Pandas, install our packages:

$ pip install datapackage
$ pip install jsontableschema-pandas

To get the Data Package, run the following code:

import datapackage

data_url = "https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json"

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'pandas')

# to see datasets in this package
storage.buckets

# you can access datasets inside storage, e.g. the first one:
storage[storage.buckets[0]]
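
As a follow-up, each bucket can be pulled out as an ordinary Pandas data frame. A minimal sketch, repeating the setup above; the column name 'Total' comes from the field tables earlier, and it is assumed to parse as numeric:

import datapackage

data_url = "https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json"

# load the Data Package into storage, as above
storage = datapackage.push_datapackage(data_url, 'pandas')

# each bucket is an ordinary Pandas data frame
df = storage[storage.buckets[0]]

# quick summary of the overall flows (assumes 'Total' parsed as numeric)
print(df['Total'].describe())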

To work with Data Packages in Python, install our package:

$ pip install datapackage

To get the Data Package into your Python environment, run the following code:

import datapackage

dp = datapackage.DataPackage('https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json')

# see metadata
print(dp.descriptor)

# get the list of resource (csv file) names
csvList = [resource.descriptor['name'] for resource in dp.resources]
print(csvList) # ["resource name", ...]

# access a csv file by index (0-based)
print(dp.resources[0].data)
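
The data attribute returns parsed rows; to pair them with the column names from the schema, a sketch along the following lines should work. It assumes rows come back as plain lists; if this version of datapackage already returns dicts, the zip step is unnecessary.

import datapackage

dp = datapackage.DataPackage('https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json')

resource = dp.resources[0]

# field names come from the resource's schema in the descriptor
fields = [f['name'] for f in resource.descriptor['schema']['fields']]

# pair each row with the field names (assumes rows are plain lists)
rows = [dict(zip(fields, row)) for row in resource.data]
print(rows[0])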

To use this Data Package in JavaScript, follow the instructions below:

Install datapackage using npm:

$ npm install [email protected]

Once the package is installed, use the code snippet below:


const Datapackage = require('datapackage').Datapackage

async function fetchDataPackageAndData(dataPackageIdentifier) {
  const dp = await new Datapackage(dataPackageIdentifier)
  await Promise.all(dp.resources.map(async (resource) => {
    if (resource.descriptor.format === 'geojson') {
      const baseUrl = resource._basePath.replace('/datapackage.json', '')
      const resourceUrl = `${baseUrl}/${resource._descriptor.path}`
      // fetch is assumed to be available (e.g. via a polyfill such as node-fetch)
      const response = await fetch(resourceUrl)
      resource.descriptor._values = await response.json()
    } else {
      // we assume resource is tabular for now ...
      const table = await resource.table
      // rows are simple arrays -- we can convert to objects elsewhere as needed
      const rowsAsObjects = false
      resource.descriptor._values = await table.read(rowsAsObjects)
    }
  }))

  // see the data package object
  console.dir(dp)

  // data itself is stored in Resource object, e.g. to access first resource:
  console.log(dp.resources[0]._values)

  return dp
}


fetchDataPackageAndData('https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json');

Our JavaScript is written using ES6 features. We use Node.js v7.4.0 and pass the --harmony option to enable ES6:

$ node --harmony index.js

To work with Data Packages in SQL, install our packages:

$ pip install datapackage
$ pip install jsontableschema-sql
$ pip install sqlalchemy

To import the Data Package into your SQLite database, run the following code:

import datapackage
from sqlalchemy import create_engine

data_url = 'https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json'
engine = create_engine('sqlite:///:memory:')

# to load Data Package into storage
storage = datapackage.push_datapackage(data_url, 'sql', engine=engine)

# to see datasets in this package
storage.buckets

# to execute an SQL command (the generated table name assumes the data is in the "data" folder, the resource is named "data", and the file name is data.csv)
storage._Storage__connection.execute('select * from data__data___data limit 1;').fetchall()

# description of the table columns
storage.describe('data__data___data')
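
Since _Storage__connection reaches into a private attribute, a more robust alternative is to query through the SQLAlchemy engine directly. A minimal sketch under the same assumptions about the generated table name; engine.execute is available in SQLAlchemy 1.x:

import datapackage
from sqlalchemy import create_engine

data_url = 'https://pkgstore.datahub.io/core/investor-flow-of-funds-us/latest/datapackage.json'
engine = create_engine('sqlite:///:memory:')

# load the Data Package into the SQLite database, as above
storage = datapackage.push_datapackage(data_url, 'sql', engine=engine)

# query through the public engine instead of the storage's private connection
for row in engine.execute('select * from data__data___data limit 5;'):
    print(row)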