After turning the MTurk crowdsourcing world loose on our project for about 30 hours, we decided to stop, take a look at the results of the experiment, and see what we'd learned so far. We'll start with the statistics, then follow up this post with an overview of our methodology and the things we learned from this initial test.

Recap

We took a list of 100 companies and asked our MTurk workers to find the Last Name, First Name, Suffix, and a link to a bio for each company's General Counsel. We asked that workers look only on the company's website for the information; we would not accept answers from external resources such as ZoomInfo. Each question was handled in a "double-blind" fashion, meaning it was given to two different people to answer. Here's what the actual MTurk questionnaire looked like:

[screenshot of the MTurk questionnaire]
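For those who want to try something like this themselves, here's a minimal sketch of how a comparable HIT could be posted today with boto3's MTurk client. To be clear, this is not our actual setup: the reward, time limits, and form field names below are illustrative assumptions pulled from the description above.

```python
import boto3

# Production endpoint by default; point endpoint_url at the sandbox for testing.
mturk = boto3.client("mturk", region_name="us-east-1")

def question_xml(company: str, website: str) -> str:
    """Build a minimal HTMLQuestion asking for the GC's name, suffix, and bio link."""
    # Simplified: a real form also needs a bit of JavaScript to copy the worker's
    # assignmentId from the URL query string into the hidden field below.
    return f"""
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <p>Find the General Counsel of {company}. Use only {website} (no outside sources).</p>
      <form method="post" action="https://www.mturk.com/mturk/externalSubmit">
        <input type="hidden" name="assignmentId" value="" />
        <p>Last Name: <input name="last_name" /></p>
        <p>First Name: <input name="first_name" /></p>
        <p>Suffix: <input name="suffix" /></p>
        <p>Link to bio: <input name="bio_url" /></p>
        <input type="submit" value="Submit" />
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

def post_hit(company: str, website: str) -> str:
    """Post one HIT per company; MaxAssignments=2 gives the two-answer setup."""
    response = mturk.create_hit(
        Title=f"Find the General Counsel of {company}",
        Description="Find the company's General Counsel using only the company's own website.",
        Keywords="research, lookup, general counsel",
        Reward="0.10",                        # ten cents per answer, as in the post
        MaxAssignments=2,                     # two independent workers per company
        AssignmentDurationInSeconds=10 * 60,  # assumed time limit per assignment
        LifetimeInSeconds=3 * 24 * 60 * 60,   # assumed HIT lifetime
        Question=question_xml(company, website),
    )
    return response["HIT"]["HITId"]
```

The MaxAssignments=2 setting is what produces the two-answers-per-company pairing; MTurk won't give both assignments of the same HIT to a single worker.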
- Actual hours used to answer questions – 6.3 hours. This works out to roughly 2½ minutes per answer. Some of the questions were handled within a few seconds, while others took five minutes or more.
- Average Pay Per Hour – $1.51 (plus a 15¢ per hour surcharge paid to MTurk). When I looked at this, I immediately thought that I'd make a very good slum lord.
- Total amount paid – $9.50 to workers + 95¢ to MTurk ($10.45 total). Again, not a lot of money, but from my research on the topic, MTurk workers aren't looking to make a lot of money; they take on these projects in their spare time. So, although I'd make a great slum lord, I guess I shouldn't feel too guilty about it.
- Over 330 individuals accepted the tasks, but only 161 (48.8%) returned an answer. Since we were only paying 10¢ a question, it is apparent that some decided it wasn't worth the work, while others probably couldn't find the answer and gave up.
- Questions Answered – 161 of 200 (80.5%). (The 200 comes from 100 companies × 2 workers per company.) Initially, I was floored by the fact that over 80% of the questions were answered. Once I started diving into the answers, it became apparent that not all of them were what we were looking for.
- Total number of answers that were "acceptable" – 95 of 161 (59%). Of the 161 answers we received, only 95 were judged to have correctly identified the General Counsel of the company.
- Companies with at least one answer – 86 of 100 (86%). Although there was an 86% return rate, not all of these companies had correct answers (more below).
- Companies that received a double-blind entry – 72 of 86 (83.7%). These were companies for which two different individuals returned answers.
- Companies that received only one answer – 14 of 86 (16.3%). However, of those 14, only about two were correct. The rest either pointed to the Chief Executive Officer as the GC or were just plain guesses at the answer.
- Double-blind answers where both answers matched – 46 of 72 (63.9%). These were answers where both individuals found the same name and URL. Generally this is a good indicator that the information is correct, but we found out that this isn't always true (more below).
- Companies where the General Counsel information was found – 52 of 86 (60.5%). We looked at the answers individually and discovered that for the other 34 companies (39.5%) where we received some type of data back from the MTurk workers, the answers identified the wrong person as the General Counsel. Most of the incorrect answers named the Chief Executive Officer as the General Counsel. In one case, the worker answered "none," which was technically correct but wasn't included in the stats above. (For the technically inclined, a rough sketch of how this double-blind tally could be scripted follows this list.)
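For the curious, here's a minimal sketch of how a tally like the one above could be scripted. The file name and columns (an answers.csv with company, first_name, last_name, and bio_url fields) are assumptions for illustration, not the actual MTurk results export, and the matching is a simple exact-string comparison.

```python
import csv
from collections import defaultdict

def normalize(row):
    """Reduce one worker's answer to a comparable (name, url) key."""
    name = f"{row['first_name']} {row['last_name']}".strip().lower()
    return (name, row["bio_url"].strip().lower())

# company -> list of normalized answers (0, 1, or 2 per company)
answers = defaultdict(list)
with open("answers.csv", newline="") as f:
    for row in csv.DictReader(f):
        answers[row["company"]].append(normalize(row))

answered = [c for c, a in answers.items() if a]
double_blind = [c for c, a in answers.items() if len(a) >= 2]
matched = [c for c in double_blind if len(set(answers[c])) == 1]

print(f"Companies with at least one answer:  {len(answered)}")
print(f"Companies with a double-blind entry: {len(double_blind)}")
print(f"Double-blind answers that matched:   {len(matched)}")
```

Even when both workers agree, the answer still deserves a human look; as noted above, a matched pair isn't always a correct pair.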
When the test was over, we ended up with solid answers for 52 of the 100 companies, and a good guess that 20 to 25 companies probably didn't have a GC on staff. That is a great result for something we spent $10.45 to get. We'll discuss more of the methodology and some of the surprises we found while conducting this test in later posts.