Wikipedia:WikiProject Wikidemia/Fundraising/Proposals


This is a page for proposing fundraising appeals and crafting randomized evaluations to measure their impacts on donations.


Objectives

Fundraising appeals should be crafted and targeted to maximize the net present value (NPV) of financial and in-kind contributions. We can potentially target the content and timing of appeals based on:

  • Whether or not a potential donor is a logged-in user;
  • Past contributions (in edits and $$);
  • Location;
  • Any known demographics or other characteristics (age, income, use of the site).

The design and timing of content can be evaluated even without randomization. Initially, perhaps we should focus on finding the best uniform design.

Our design choices include:

  • (list pages)
  • Sitenotice and Anonnotice size, page placement, and content
  • Target amount?
  • Emphasize mission, servers, and dependence on volunteers and donations
  • Include pictures


Methodology

To run a randomized evaluation, it will be necessary to vary the content served to different viewers of a page. The most effective approach would randomly vary the fundraising appeal based on either the IP address or the timestamp of a page request. The effectiveness of an appeal would be evaluated by comparing the frequency and the magnitude of contributions made following each variation of the appeal. Randomization by IP entails greater technical challenges and raises greater privacy issues, but allows higher-resolution evaluations.

Multiple unrelated fundraising appeals can be evaluated simultaneously as long as their randomizations are orthogonal.
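
As a minimal sketch of how randomizations might be kept orthogonal, the Python below hashes each IP with an experiment-specific salt, so that a visitor's assignment in one experiment is (approximately) independent of their assignment in another. The salt strings, bucket count, and function names are illustrative assumptions, not an existing implementation.

  import hashlib

  def bucket(ip, salt, buckets=1000):
      """Map an IP to a bucket using an experiment-specific salted hash.

      A different salt per experiment makes the bucket assignments
      (approximately) independent, keeping the randomizations orthogonal.
      """
      digest = hashlib.sha256(("%s:%s" % (salt, ip)).encode()).hexdigest()
      return int(digest, 16) % buckets

  # Two unrelated experiments use different salts, so a visitor's bucket
  # in one experiment says nothing about their bucket in the other.
  ip = "203.0.113.42"
  anonnotice_bucket = bucket(ip, "anonnotice-eval")
  sitenotice_bucket = bucket(ip, "sitenotice-size-eval")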

Given the volume of Wikipedia readership (>10,000 hits per second), randomizations could heavily favor the current consensus view of best practices, modifying the pages served to only, say, 1% of visitors.

The overarching scientific terms characterizing this methodology are "statistical sampling" and "experimental design."

Proposal 1: Anonnotice

After the 2005/Q4 fundraiser, there was extensive discussion of the effect of Jimbo's personal appeal on donations. Later, a notice was added to the page template for all non-logged-in visitors to Wikipedia. Debate ensued about whether this "Anonnotice" should link to the main fundraising page or to Jimbo's appeal. This proposal would test the effect of different versions of the Anonnotice on contributions.

  1. May 28, 2006:
    Your continued donations keep Wikipedia running!    
  2. January 23, 2006:
    Please read Wikipedia founder Jimmy Wales's personal appeal.
  3. Future candidate A:
    Wikipedia needs your help: Please donate now!


Suppose we randomize by IP and there are two proposed Anonnotices besides the current one. All IPs would then be hashed by ((create a hash specific to this Anonnotice evaluation)). Every hashed IP congruent to 0 through 4 (mod 1000) would be assigned the first proposed new Anonnotice; every hashed IP congruent to 5 through 9 (mod 1000) would be assigned the second proposed new Anonnotice; and every other hashed IP, congruent to 10 through 999 (mod 1000) (i.e., 99% of IPs), would be assigned the current community consensus Anonnotice.
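
A minimal Python sketch of that assignment rule, assuming SHA-256 with a salt standing in for the evaluation-specific hash above; the salt and variant names are illustrative placeholders:

  import hashlib

  def assign_anonnotice(ip):
      """Assign an Anonnotice variant from a hashed IP (0.5%/0.5%/99% split)."""
      # Salt the hash so it is specific to this Anonnotice evaluation.
      digest = hashlib.sha256(("anonnotice-eval:%s" % ip).encode()).hexdigest()
      b = int(digest, 16) % 1000
      if b < 5:        # buckets 0-4: first proposed notice (0.5% of IPs)
          return "proposed-A"
      elif b < 10:     # buckets 5-9: second proposed notice (0.5% of IPs)
          return "proposed-B"
      else:            # buckets 10-999: current consensus notice (99% of IPs)
          return "current-consensus"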

If we vary the Anonnotice by the timestamp of the page request, we could change it for five-minute intervals spaced arbitrarily. Alternating back and forth regularly (i.e., at evenly spaced intervals) would not be as useful: because we have to estimate the time lag between viewing the message and contributing, in addition to the effect of the message itself, a regular alternation would confound the lag with the treatment schedule.
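
A sketch of timestamp-based assignment using irregularly spaced five-minute treatment windows; the specific window start times and variant names below are hypothetical:

  from datetime import datetime, timezone

  # Hypothetical, irregularly spaced five-minute treatment windows (UTC),
  # fixed in advance rather than alternating at even intervals.
  TREATMENT_WINDOWS = [
      (datetime(2006, 6, 1, 3, 17, tzinfo=timezone.utc), "proposed-A"),
      (datetime(2006, 6, 1, 11, 42, tzinfo=timezone.utc), "proposed-B"),
      (datetime(2006, 6, 1, 20, 5, tzinfo=timezone.utc), "proposed-A"),
  ]
  WINDOW_SECONDS = 5 * 60

  def notice_for(request_time):
      """Return the Anonnotice variant to serve at a given request time."""
      for start, variant in TREATMENT_WINDOWS:
          if 0 <= (request_time - start).total_seconds() < WINDOW_SECONDS:
              return variant
      return "current-consensus"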


Additional Proposal Ideas

There have been many good suggestions about ways to improve donation drives and fundraisers for Wikipedia; a specific idea follows. Please add other ideas in their own sections below.

What motivates readers to donate? How do changes to link placement and text affect donations?

  • During fund drives
  • During the rest of the year?
  • Stratified by frequency of reading and contributing

What statistics could be gathered to help answer these questions? Both general and specific statistics are welcome.

  • Queries on an entire history-laden database
  • Queries on randomized subsets of pages/histories
  • Queries on randomized subsets of users
  • The # of readers per page, and their referrers; by date and time (anonymized)
  • The most popular search queries / terms; and the actions of those who entered them (anonymized)
  • ...

What variations could be tried out to study the influences on contribution and donor relationships over time?

  • Changing the anonymous sitenotice
  • Changing the reactions of a body of complicit editors to the target visitor
  • Changing the default site skin
  • ...

Notes

Figure out the questions that we need to analyze so as to improve future fundraisers: What information do we want? Where should we focus our drive?

Last, what are the kinds of things we could *learn* from a fundraiser that will be useful during future fundraisers? What are the kinds of appeals, etc., that *cause* increases in the probability that people will donate and in the amounts of their donations?

In the context of discussing data, the best way to answer questions of that form is to run randomized evaluations of the appeals, etc., that are or will be under consideration; randomized evaluations, in turn, entail particular data requirements.

Randomization can be done by varying the page (or, more simply, the sitenotice) that loads for certain IPs or during certain seconds of the day, or by randomizing which previous donors (if any) receive email solicitations. Randomizations could vary the wording of the appeal; which people are promised matching donations (evidence from previous fundraising studies indicates that men are more responsive to matching donations than women); or who is offered swag as thanks for a donation of a certain size. The information that potential donors see about progress during the drive, as well as testimonials (the "comment" field from previous donors), could also be randomized. There are surely other and better possibilities to test.
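
As an illustration, here is a minimal Python sketch of randomizing previous donors into email-solicitation arms; the arm names and the seed are assumptions made for the example, not a proposed design:

  import random

  def assign_email_arms(donor_ids,
                        arms=("no-email", "plain-appeal", "matching-offer"),
                        seed=2006):
      """Randomly assign each previous donor to a solicitation arm.

      A fixed seed keeps the assignment reproducible for later analysis.
      """
      rng = random.Random(seed)
      return {donor: rng.choice(arms) for donor in donor_ids}

  # Example: assign_email_arms(["donor-1", "donor-2", "donor-3"])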

Running most of these randomizations would require coordination with developers, both to implement the randomization and to save clickthrough/donation data, and the randomizations may require considerable community awareness so that multiple approaches (i.e., the treatments of the randomization) can temporarily coexist.

The first priority is definitely operationalizing the existing data, but the potential future gains from running randomized evaluations are large, and we would need to lay some groundwork if we want to pull them off.