Tuesday, December 30, 2008

automatic rebalancing through automatic exchange

When investing, you often want to maintain a target asset allocation, but your actual allocation drifts away from that target as individual assets gain or lose value. One option is to periodically check your allocation and manually transfer funds between assets (rebalance) if necessary. An added convenience is automatic rebalancing, where your investment company rebalances for you at some fixed frequency.

However, Vanguard, one of the most popular investment companies, does not seem to support automatic rebalancing of an arbitrary asset allocation. Here's a trick you can use to achieve the same effect through automatic exchange:

Suppose you have three assets A, B, and C that you'd like to hold in the percentages a, b, and c respectively, where a + b + c = 100. All you need to do is set up 6 automatic exchanges, where A gives b percent of its value to B and c percent of its value to C, B gives a percent to A and c percent to C, and C gives a percent to A and b percent to B. In general, each asset receives a fixed percentage of each other asset, equal to the percentage of the receiving asset in the desired allocation. To see why this works, note that after the exchanges A holds a percent of its own old value plus a percent of B's value plus a percent of C's value, which is exactly a percent of the total.
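Here's a quick sketch of this in Python, using made-up target percentages and starting holdings, just to confirm that the pairwise exchanges land every asset exactly on its target:

```python
# Sketch: pairwise automatic exchanges restore the target allocation exactly.
# The asset names, target percentages, and starting values are made up.

targets = {"A": 0.50, "B": 0.30, "C": 0.20}        # a, b, c as fractions (sum to 1)
values  = {"A": 7000.0, "B": 2000.0, "C": 1000.0}  # current (drifted) holdings

# Each asset sends, to every other asset, the receiver's target fraction of the
# sender's current value. All transfers are computed from the same pre-exchange
# snapshot, then applied.
transfers = {(src, dst): targets[dst] * values[src]
             for src in values for dst in values if src != dst}

new_values = dict(values)
for (src, dst), amount in transfers.items():
    new_values[src] -= amount
    new_values[dst] += amount

total = sum(values.values())
for name, v in new_values.items():
    # Each asset ends up holding exactly its target fraction of the total.
    print(name, round(v, 2), "target:", round(targets[name] * total, 2))
```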

The above technique rebalances exactly, but it has some problems. First, you definitely do not want to do this if there are taxes or fees associated with the transactions, since the transactions will be pretty large. (Performing the exchanges within a Roth IRA at Vanguard should still be okay.) Second, it can be a pain to set up an automatic exchange for each pair of assets you own: with n assets, that's n(n-1) exchanges.

Let's address both problems (though really, if you're faced with taxes or fees, you probably just want to do your rebalancing manually). You can decrease both the number and the size of the transactions by doing something I'll call "soft rebalancing". The trick is that instead of exchanging between every pair of assets, we create a cycle of exchanges. In the above example, we could have A give to B, B give to C, and C give to A. (I'll get to the exchange amounts in a bit.) In general, if you have n assets, you only have to set up n exchanges. With soft rebalancing, you lose the ability to rebalance exactly, as we'll see.

So what are the exchange amounts? There's a slight complication: we can now control the rate at which rebalancing occurs, so the exchange amounts can be scaled. In the above example, A would give B a fraction of A's value proportional to 1/a, B would give C a fraction of B's value proportional to 1/b, and C would give A a fraction of C's value proportional to 1/c. In general, each asset gives up a fraction of its own value proportional to the inverse of its percentage in the desired allocation. Again, these fractions should be scaled based on how fast you want the rebalancing to happen; at the maximum allowable rate, the asset with the smallest target percentage gives up all of its value at each rebalance.
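Here's a small simulation sketch in Python (again with made-up asset names, target percentages, holdings, and scaling factor) showing that a cycle of exchanges with these amounts drifts the holdings toward the target allocation over several rebalancing steps, rather than hitting it exactly:

```python
# Sketch: "soft" rebalancing with a cycle of exchanges (A -> B -> C -> A).
# The numbers below are made up; this just simulates a few rebalancing steps.

targets = {"A": 0.50, "B": 0.30, "C": 0.20}
values  = {"A": 7000.0, "B": 2000.0, "C": 1000.0}
cycle   = [("A", "B"), ("B", "C"), ("C", "A")]

# Each asset gives up a fraction of its own value proportional to 1/target.
# 'rate' scales how aggressively we rebalance; at rate = min(targets.values()),
# the asset with the smallest target would give up 100% of its value each step.
rate = 0.05
fractions = {name: rate / t for name, t in targets.items()}

for step in range(1, 21):
    # Compute this step's transfers from a snapshot, then apply them.
    transfers = {(src, dst): fractions[src] * values[src] for src, dst in cycle}
    for (src, dst), amount in transfers.items():
        values[src] -= amount
        values[dst] += amount
    if step % 5 == 0:
        total = sum(values.values())
        mix = {name: round(v / total, 3) for name, v in values.items()}
        print("step", step, mix)  # drifts toward {'A': 0.5, 'B': 0.3, 'C': 0.2}
```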

There are some problems with the cycle-of-exchanges scheme. Most importantly, it may take many rebalancing steps to move value from an asset with excess value to an asset with insufficient value if the latter is far "downstream" from the former. There are other schemes that address this issue, like designating one "hub" asset and having all exchanges take place between the hub and one of the other assets (in both directions), though this scheme requires setting up about twice as many automatic exchanges (2(n-1) rather than n).

An important point is that these exchanges must be specified as a percentage of an asset's holdings. If the only supported exchanges are in fixed dollar amounts, this kind of automatic rebalancing won't work. Also, I'm not sure how practical any of this is, since I have to assume that at some point Vanguard and other investment companies will offer automatic rebalancing. In the meantime, though, you can use my trick if you're desperate.

resolutions

I have never made New Year's resolutions before, but this time I actually have some idea of the things I want to accomplish in the coming year:

  1. Graduate.
  2. Create an interactive labeling demo, where users can annotate static objects in images and the annotations get attached to the "real world". I'm working on some of the techniques behind this right now, and I have a crappy demo built on top of LabelMe, but I would like to have a more scalable and usable one.
  3. Work out the details of my convex formulation of visibility constraints in multiview stereo and write a paper on this.
  4. Start training MMA, probably here.

Saturday, December 13, 2008

lyrics quiz

Here's a lyrics quiz with a twist: all of the lyrics consist of syllables like "na" and "la" instead of actual words. Spacing and commas generally indicate time between syllables. The "X" indicates other words that are part of the phrase, but would give away the identity of the song. Name as many songs as you can in the comments. None of them are at all obscure, and I've included the decade in which each song was released.

1) 1980s
na na na na na
na na na na na, na na na na na, na
na na na na na, na na na na na na na, na
X

2) 1980s
na na nana na na
nana na na nanana na na

3) 1990s
nana nana nana nana na na
nana nana nana nana na na

4) 1960s
na, na na nanana na
nanana na, X

5) 1960s
na nana na nana na nana na

6) 1970s
fa fa fa fa, fa fa fa fa fa, fa

7) 1960s
na na na, na
na na na, na
X

8) 1980s
la la la lala la
la la la lala la la

9) 1970s
doo, doo doo, doo doo, doo, doo doo
doo, doo doo, doo doo, doo, doo doo

10) 2000s
la la la, la la lala la
la la la, la la lala la

Saturday, December 06, 2008

ECCV 2008

ECCV 2008 happened over a month ago, but it's not too late for me to post a summary of some of my favorite papers from the conference, as well as my own paper. Let's start with my paper:

Scene Segmentation Using the Wisdom of Crowds
Ian Simon and Steven M. Seitz
There are many cues one could use when segmenting images, such as color, edges, recognizing objects, etc. Here we ignore all of these cues and segment 3D scenes based on the distribution of photos taken at the scenes (downloaded from Flickr). The basic idea is that people do not take photos simply by pointing the camera randomly, but take pictures "of" interesting objects. We effectively treat each photo as a vote that all of the scene points appearing in this photo belong to the same object. Of course, this is not precisely true for most photographs, but by combining information from multiple photographers, we can get accurate 3D segmentations.

Here are some other papers I liked:

Learning to Localize Objects with Structured Output Regression
Matthew B. Blaschko and Christoph H. Lampert
Object localization is usually done by training a classifier on positive and negative image regions, then running this classifier in sliding-window fashion on a new image. This paper proposes training directly for the localization task using a structured SVM.

Integration of Multiview Stereo and Silhouettes via Convex Functionals on Convex Domains
Kalin Kolev and Daniel Cremers
Several previous papers have tried to combine photoconsistency and silhouettes. The key insight here is a way to express silhouette constraints over a voxel grid in a way that yields a simple convex relaxation. There's no proof of a meaningful performance guarantee relative to the optimal solution of the discrete problem, but it's still cleaner than any of the other papers I've seen that address silhouettes in multiview stereo.

Image Segmentation by Branch-and-Mincut
Victor Lempitsky, Andrew Blake, and Carsten Rother
Suppose you're trying to segment a particular object with unknown pose in an image. For fixed pose, the problem can be solved with a graph cut. This paper describes a branch-and-bound search through a tree of hierarchically-clustered poses for the optimal pose and segmentation. The important observation is that a lower bound on the quality of the optimal solution in a particular subtree can be computed with a single graph cut.

What is a Good Image Segment? A Unified Approach to Segment Extraction
Shai Bagon, Oren Boiman, and Michal Irani
This paper proposes a simple criterion for segmentation: a segment should be easily composable using its own pieces, but difficult to compose from pieces outside the segment. The algorithm implied by this criterion sacrifices speed for elegance, but I think there is value in figuring out the right thing to optimize, even if actually optimizing it proves impractical. Of course, it's not clear that composability is the right thing, and it would be interesting to compare against human segmentations.