Carolyn Elefant posted an interesting comment on Part 2 of our Crowdsourcing dialogue, noting that “managing a crowd sourcing project can be difficult.” That observation brings us to explore our methodology and the art of crowdsourcing.

In a nutshell, what we’ve learned is that managing crowdsourced staff is unique. In crowdsourcing you pay for very discrete tasks performed by anonymous workers. We envision the emergence of crowdsourcing staffing companies to meet this challenge. Their role will be packaging and structuring projects in the most effective and efficient ways. In this new environment, ‘employers’ will have to be quick on their feet to adjust to new worker behaviors and keep projects profitable.

Adjusting our Methodology

Ron Friedmann, in his comment to Part 1, suggests “recognition” as a motivator for our crowdsourced staff. Unfortunately, we don’t have much opportunity for that with an anonymous staff (although we’re open to creative ideas). However, we can make adjustments to our compensation method. In requesting researched information, we received three main categories of responses: 1) Obviously right, 2) Possibly right (or wrong), and 3) Obviously wrong. Our new approach will allow us to differentiate between these and pay the right amount for each type.

Instead of a blanket ‘double-blind’ approach where we pay for two responses to every request, we will allow only one response per request. Category 1 responses will be accepted and paid (almost half of the responses in our first experiment). Category 2 responses can be accepted, paid, and re-posted for verification. Category 3 responses can be rejected and re-posted as many times as needed. This revised approach will give us better returns on our investment (such as it is).
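
To make the revised rules concrete, here is a minimal sketch of the decision logic in Python. The category constants, the pay/repost flags, and the function name are hypothetical illustrations for this post, not part of any actual posting system we use.

```python
from dataclasses import dataclass

# Hypothetical labels for the three response categories described above:
# 1 = obviously right, 2 = possibly right (or wrong), 3 = obviously wrong
OBVIOUSLY_RIGHT, POSSIBLY_RIGHT, OBVIOUSLY_WRONG = 1, 2, 3

@dataclass
class Decision:
    pay: bool      # pay the worker for this response?
    repost: bool   # re-post the request (for verification or a fresh answer)?

def handle_response(category: int) -> Decision:
    """Apply the revised one-response-per-request compensation rules."""
    if category == OBVIOUSLY_RIGHT:
        return Decision(pay=True, repost=False)   # accept and pay
    if category == POSSIBLY_RIGHT:
        return Decision(pay=True, repost=True)    # pay, then re-post for verification
    return Decision(pay=False, repost=True)       # reject and re-post as needed

# Example: a "possibly right" answer is paid and queued for a verification pass
print(handle_response(POSSIBLY_RIGHT))
```
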

Under Consideration

We think this adjustment to our compensation model will produce better results at lower cost. The next layer of modifications to our method will come through how we design our requests (tasks). One thought is to take unanswered requests and reformulate the question for re-posting. Perhaps campaigns of follow-on, modified requests will bring us the desired results. In our current experiment, the request for a contact with the title GC could be modified to include broader terms or even outsourced options. We might have to increase the payment amount in this situation (reflecting greater effort or knowledge), but that would make sense if the response information carried enough value.
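
For illustration only, the sketch below shows one way a follow-on campaign could reformulate an unanswered request with broader title terms and a bumped payment. The field names, example titles, and amounts are assumptions made for the example, not our actual request format.

```python
# Hypothetical follow-on "campaign" step: reformulate an unanswered request
# with broader terms and a higher payment to reflect the extra effort.

def reformulate(request: dict, broader_terms: list[str], bump: float = 0.25) -> dict:
    """Return a modified copy of an unanswered request, ready for re-posting."""
    follow_on = dict(request)
    follow_on["title_terms"] = request["title_terms"] + broader_terms
    follow_on["payment"] = round(request["payment"] * (1 + bump), 2)
    return follow_on

# Example: broaden a "GC" contact request with related titles (illustrative values)
original = {"title_terms": ["General Counsel"], "payment": 1.00}
print(reformulate(original, ["Chief Legal Officer", "Head of Legal"]))
```
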

Whichever direction this experiment takes, it is proving quite intriguing. We feel we’re on to something and will keep exploring the crowdsourcing idea to see what we learn.