Continuing our Crowdsourcing post from yesterday, here are the answers we got back for the final six:
Firm Name: Jackson Walker
Article Title: OIG gets enhanced funding for increased enforcement in health care
  • The Economic Stimulus bill gave the Office of the Inspector General nearly $30 billion simply for increasing oversight of health service and care providers.
  • The stimulus plan also gives Inspectors General the right to review any contracts or grants funded through stimulus money.
  • The Inspector General Reform Act of 2008 is another piece of legislation aimed at increasing the independent authority of the inspector general and improving the efficiency of the OIG's oversight of healthcare organizations.
  • OIG data shows that for every $1 the government provides in funding, it recovers $17. The amount recovered in 2009 is already greater than the amount recovered in 2008. This means that we should expect increased OIG funding and oversight.
Firm Name: Kilpatrick Stockton LLP
Article Title: What must your healthcare organization do (if anything) to protect against patient identity theft?
  • Healthcare organizations must be in compliance with the Red Flag Rules of the Federal Trade Commission and are to create an Identity Theft Protection Program
  • Mandatory compliance with the Red Flag Rules is due in part to healthcare organizations being both creditors and holders of covered accounts, the two requirements under the FTC for falling under the Red Flag Rules.
  • By August 1, 2009, healthcare organizations must have an Identity Theft Program in place in order to deal with issues such as payment transactions and consumer report security, as well as institutional procedures to identify and reduce the chances of identity theft.
  • This Identity Theft Program will require the approval and involvement of the board of directors, training for workers in the organization on aspects of identity theft, and transparent relations with outside service providers.
Firm Name: Dorsey & Whitney
Article Title: Take precautions now to prepare for Influenza A type H1N1 (formerly 'swine flu')
  • With the threat of a disease such as Influenza A type H1N1, businesses should have a plan in place in case it affects their employees.
  • A specific emergency plan should be put in place in case the disease strikes, focusing on such issues as chain-of-command.
  • It is important to have open communication with employees via non-traditional means to track any cases of the disease that are reported.
  • Attendance policies and sick-day procedures should also be reviewed so that employees who might be sick have options.
  • Perhaps the most important way to prevent the spread of Influenza A type H1N1 is through education; everyone should be well informed.
Firm Name: Finnegan Henderson
Article Title: Federal Circuit affirms award of attorneys' fees for litigation misconduct
  • Case is a medical device patent-infringement suit (spikes) between two medical supply companies, ICU and Alaris
  • ICU has repeatedly and variously claimed infringement in the use of specific spikes by Alaris, each time being rejected
  • Court found that ICU failed to disclose and to distinguish between tubes and spikes in its cases
  • Court ruled against ICU and awarded attorneys' fees to Alaris for those portions of the case related to the spike claims
  • Under Supreme Court and Ninth Circuit precedent, the award held up on final appeal
  • [comment from MTurker] This was a very challenging one—I spent a good deal of time on it, and did my best. I hope it’s good enough!
Firm Name: Fulbright & Jaworski
Article Title: FTC Delays Enforcement of Red Flags Rule Until August 1, 2009
  • Cutting Medicare spending will take a lot of “new offices and positions.” I’ll bet his “Office of Spending Oversight” will need 500 new expensive “experts.”
  • Increasing a budget by $1.7 billion to find Medicare and Medicaid fraud and abuse is an abuse of the American public.
  • Allocating $311 billion to physicians over the next 10 years will not cut cost of services. Doctors are not going to make less money, so services will be cut.
  • Making subcontractors liable for fraud will not work. Once care is given, good or bad, it is almost impossible to track who is responsible for what.
  • Work plans and every other Medicaid fraud prevention plan will only add more expense to the already over-inflated budget.
Firm Name: King & Spalding
Article Title: Obama Budget Proposal Includes $309 Billion in Medicare Medicaid Spending Cuts; $1.7 Billion Increase for Fraud Control
  • The US 2010 fiscal budget will increase spending in health and human services by more than 7%, up to $879 billion.
  • Medicare and Medicaid programs will be cut by $309 billion in order to fund the $634 billion healthcare reserve fund requirement.
  • A large proportion of the money going into this fund will come from competitive Medicare bidding among hospitals and healthcare providers.
  • The government is also increasing the money it spends on identifying and preventing healthcare fraud, committing $1.7 billion over the next five years in order to save $2.5 billion in fraud losses.
It took us from Saturday morning until Tuesday to get 10 articles reviewed. We'll probably need to re-run the test at a later time to see if it was the "weekend" or the "price" that caused such a slow turnaround on the project. My initial feeling is that it was probably a combination of the two.
This sort of task requires the MTurker to put some thought into the process. I really didn't see a huge drop-off in quality between the results we got back at 25¢ and the 50¢ results. But there did seem to be an increase in the time it took to get the answers back. So, if you have more time, you can pay "less"; if you need something done in a hurry, you need to pay "more".
I have to admit that I was pretty impressed with the quality of the work. Regardless of whether we paid 25 or 50 cents, the work was very good. I'm also stunned by the seriousness with which the MTurkers seem to take the quality of their work. Take a look at the last bullet-point of the Finnegan Henderson article. An MTurker posted a comment saying that they had some difficulty with the article but hoped that their results were "good enough" for us. That really impressed me.
The more I test the MTurk idea, the more I see potential in crowdsourcing a number of projects that we'd love to do within the law firm setting but generally don't have the staffing to complete. We'll break down some of the other MTurk projects we tested over the past week and show you what we've found to be the pros and cons of crowdsourcing.

Toby and I had some leftover money in our MTurk Crowdsourcing account, so we thought we'd run a few more tests to see what kind of results we could get from the vast pool of potential workers. In the process of blowing the remaining $14.00 we had in the account, we learned a thing or two and thought we'd share our newfound knowledge with you.

We ran 3 tests where we asked the following:
Test 1: Bullet-Point Reviews
Review the following legal article and give a synopsis of the article in 5 bullet points of approximately 15 words each.
Test 2: Paragraph Reviews
Review the following legal article and give a synopsis of the article in approximately 100 words.
Test 3: Copy & Paste Article Title, Date and Intro Paragraph
Open the following URL and copy the Title, Date, and First Paragraph of the legal article found at that site.
For tests one and two, we took the same 10 legal articles and submitted them on Saturday, offering to pay 25¢ for each correct answer. The articles that remained on Monday afternoon were resubmitted, and we then offered 50¢ for each correct answer. For test number three, we submitted 30 URLs on Monday and offered 5¢ per answer.
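For anyone curious about the mechanics, posting a HIT like this can also be scripted against the MTurk API. Here's a minimal sketch using the modern boto3 client (we actually worked through the MTurk website itself, so the title, reward, and timing values below are purely illustrative):

```python
import boto3

# Sketch only: posting one of our bullet-point review HITs via the
# modern boto3 MTurk client. All values here are illustrative, not
# what we actually used through the MTurk website.
mturk = boto3.client("mturk", region_name="us-east-1")

question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>bullets</QuestionIdentifier>
    <QuestionContent><Text>Write 4-5 bullet-point reviews of the legal article at [URL Link].</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Write 4-5 bullet-point reviews of a legal article",
    Description="Summarize a health care law article in ~15-word bullets",
    Keywords="writing, summary, legal",
    Reward="0.25",                       # bumped to "0.50" on resubmission
    MaxAssignments=1,                    # one worker per article
    LifetimeInSeconds=3 * 24 * 3600,     # keep the HIT up for three days
    AssignmentDurationInSeconds=30 * 60, # worker gets 30 minutes
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```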
Lessons Learned:
  1. Don’t submit anything on a Saturday morning.
  2. Two-Bits can still buy some things
  3. The more “mechanical” the work, the better the results
  4. MTurkers are people… some of them pretty smart people!
    (we even had one comment on this blog about her experiences)
For this blog entry, I'm going to show all of the results of the "Bullet-Point" MTurk project and discuss what we went through and learned in the process.
First of all, we took 10 Health Care articles that were written by large law firms as either client alerts or general publications. We submitted the MTurk project asking the workers the following:

Write 4-5 Bullet-Point Reviews of the Following Law Articles

  • Each Bullet-Point should be approximately 15 words in length.
  • Do not repeat the title of the article or the author name(s) in your summary.
  • Summary must be in your own words, do not cut-and-paste sentences from the article.
  • Try to mention approximately 4-7 keywords relating to the article topic.

Law Firm: [Law Firm Name]
Article Title: [Name of Article]
Website: [URL Link]
Bullet-Point 1:
Bullet-Point 2:
Bullet-Point 3:
Bullet-Point 4:
Bullet-Point 5:
Please provide any comments you may have below, we appreciate your input!

The instructions were pretty specific, and we submitted this on Saturday and paid 25¢ for each answer. By Monday, we had 4 of the 10 answered and decided to “kill” the project at that amount and resubmit the final six at 50¢ per answer. Here are the four answers we received at the lower 25¢ rate:
Firm Name: Holland & Hart
Article Title: Internet Medicine Part IV: 'Health 2.0'
  • Health 2.0 represents a new paradigm in the relationship between doctors and their patients
  • Many websites are evolving into the role of information providers, acting as intermediaries between patients and their physicians and care providers
  • Websites are key information-providing sources for patients because U.S. physicians don't receive compensation for providing medical advice to patients
  • Electronic medical records offer promise for an advertising-funded model for online information dissemination to patients
  • Privacy issues are highly likely to continue to be an impediment to personal health records online because of confidentiality concerns
Firm Name: Baker & Daniels
Article Title: EEOC issues reminder on ADA compliance when dealing with the H1N1 flu virus
  • Employers are limited in medical examination and disability information requests for current employees
  • Employers may not make disability-related requests or demands of prospective employees
  • Employers may make disability-related inquiries of new employees, but these inquiries must be standard practice and may not single anyone out
  • Disability information acquired from employees must be carefully guarded and kept confidential
  • Recommends employers identify standard best practices related to medical outbreaks and emergencies, including options for telework, as long as these do not discriminate against those with disabilities
Firm Name: McDermott Will & Emery
Article Title: Senate Health Care Reform Policy Options: Medicare Advantage
  • The Senate has released the third of three anticipated health reform policy option papers, which was voted on April 29, 2009.
  • The paper highlights the four most important changes proposed for the Medicare Advantage program.
Firm Name: Dinsmore & Shohl
Article Title: Medicare DMEPOS competitive bidding program becomes effective April 18, 2009
  • Will affect DME suppliers in ten selected cities (Cincinnati, Cleveland, Pittsburgh) and surrounding areas
  • Replaces existing fee for service model with bids for competitively priced items in ten product categories
  • No effect on current Durable Medical Equipment (DMEPOS) suppliers
  • Medicare is interested in input from DMEPOS suppliers impacted by the program
  • More information in presentation by Mark A. McAndrews at Ohio Pharmacists Association Annual Conference
When we saw that only 4 of the 10 were answered by Monday morning, we began to question our logic of starting a project like this on a Saturday. So, we decided to stop this project on Monday afternoon and resubmit the remaining six articles at 50¢. It still took us another 24+ hours to get the remaining six answered, even after doubling the amount we were paying.
Tomorrow’s post will show the final six results and talk about whether more money + less time really equals better results.

Coca-Cola recently announced it was going to a value billing system with its professional services advertising agencies. The alternative fee topic is obviously spreading to many professional services industries, beyond legal and accounting. Doug Cornelius picked this story up from the Economist.

I attended a webinar on this topic that came to my attention via my connection with Ron Baker. His book was prominently displayed in the program and he was given credit (as is due) for being a thought-leader on alternative fees. The Coca-Cola fee program is straightforward, moving to fixed fees that allow for performance bonuses. Coca-Cola directly stated it wanted to reward value instead of activity.

Thought One:

The Coca-Cola speaker stated point-blank – This program is not about reducing costs. It’s about increasing value. I think this lesson would be well-taken by various in-house counsel looking to alternative fees as a magic bullet for controlling costs. As I have noted previously, clients and law firms will need to first understand the relationship between value and cost before they can truly impact cost. Otherwise they risk driving value down along with their costs. Coca-Cola obviously understands this.

Thought Two:

I have to chide Ron here a little bit (and know I will pay the price). It appears Coca-Cola (aka the client/buyer) is changing the pricing model. My past dialogues with Ron have touched on this issue. We have generally agreed that sellers influence pricing models and buyers influence price. Coca-Cola apparently doesn’t agree with us – defining their own pricing model. That being said, their marketing department is obviously very innovative. With some minor exceptions, I don’t see many in-house legal departments moving quite so boldly in this direction.

More movement on the alternative fee front is a good thing – even outside the legal arena. I’ll watch with interest for the experience and success of the Coca-Cola fee project.

I’m one of the uber-geeks who rushed home on Friday and opened my browser at 7:00 PM to watch (yes, watch via video feed) the launch of WolframAlpha. The launch wasn’t without its hiccups (it ran about 25 minutes late getting the video feed to work, then took another hour or so before the first bits of the website were ready for searches to begin). But that was to be expected.

Even the errors from having too many users hitting the database at one time were okay with me, and they were made kind of funny by the 2001-esque HAL-to-Dave error message.

I’m not going to go into a big review of WolframAlpha (if you’re interested, here’s a good one from SearchEngineLand). I will say this, though — It is a great “data compilation” resource. That means that it is not Google, nor any of the traditional search engines that you’ve used since 1994. WolframAlpha compiles data in a way that presents a result. So, if I wanted to know the average temperature of Houston, TX in July, then this is my resource.
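If you want to try that "data compilation" behavior for yourself, WolframAlpha also exposes a Full Results API. Here's a minimal sketch of querying it from Python (the APPID is a placeholder you'd request from Wolfram; my own testing was through the website itself):

```python
import urllib.parse
import urllib.request

# Sketch of a 'data compilation' query against WolframAlpha's Full
# Results API. APPID is a placeholder, not a real key.
APPID = "YOUR-APPID"
query = "average temperature in Houston, TX in July"

url = ("http://api.wolframalpha.com/v2/query?"
       + urllib.parse.urlencode({"appid": APPID, "input": query,
                                 "format": "plaintext"}))
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode()[:500])   # XML "pods" with the computed result
```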
For Legal Research, however, WolframAlpha just isn’t ready for that.
Here are some examples of “legal” questions that I asked WolframAlpha:
All of these came back with no answers. [NOTE: There are some additional examples of legal searches on Legal Informatics Blog as well.]
In fairness, WolframAlpha is in its infancy and isn’t claiming to be a legal research tool at all. No one should expect it to answer all of these questions right out of the box, but I’m hoping that it can develop and expand its data collection abilities to begin answering some of these types of questions. To me, the “patents” question seems like something that can be integrated into the WolframAlpha database without much difficulty.
I’m happy that Stephen Wolfram has found a way to take his mathematical concepts and genius and figure out a way to pull different types of data together and produce a rational result. It is my hope that he can extend the WolframAlpha idea into additional topics outside the hard sciences. I’m looking forward to the day that HAL comes back and says “Yes Greg, I can answer your patent question.”


Based on some past posts, Maciek Janowski reached out to show us the Calvis Blackberry (BB) App for InterAction. Greg and I enjoyed a brief and informative demo.

In terms of functionality, it is exactly what you would expect. Your lawyers can access the information and tools in InterAction directly from their BBs. As lawyers have become so mobile, it makes sense to bring the information and resources from InterAction straight to their mobile devices.

Technically, the app is also what you would expect. It has a small server footprint on your Blackberry Enterprise Server (BES) and then a BB app that downloads to your device. Pretty simple. Maciek gave a recent example where the set-up took two hours for a firm and required no additional hardware.

Money is always the next question. Law firms will pay an annual per-seat price. Calvis has volume pricing – you can contact Maciek for more information on that.

Overall – InterAction on the BB makes sense. It opens up another layer of contact information for lawyers. Of course, it will be a cost/benefit decision for each firm or lawyer.

One follow-up thought Greg and I had was on the relative lack of BB apps compared to iPhone apps. With BES, a lawyer has access inside the firewall, which could mean access to a lot of enterprise information. Research actually turned up a number of BB apps like this (one as old as 2006). Our gut-level response is that BB is not leveraging its behind-the-firewall presence well. Perhaps it's the functionality of the various BB apps or the cost to build them. Whatever it is, RIM had better figure it out before the iPhone finds its way into the enterprise.

One of my favorite lines from the Roast of Larry the Cable Guy came from Greg Giraldo when he asked “How the #*^% are you so popular???” There are a lot of folks out there asking the same thing about Twitter. Especially after, as one blogger put it, “Twitter’s Spectacularly Awful 24 Hours.”

You can read about all the screw-ups that Twitter’s executives made yesterday, and their apology for the whole mess, in any of the thousands of other blogs talking about it today. I wanted to touch on two things:
1) Why people care so much about the “TWawful TWednesday” and
2) Why 99.44% of the users of the service will forgive and forget.

Why Do We Care??
Let’s face it… Twitter is run like it’s a couple of kids in the basement of their parents’ house with a computer and a programming manual. But, in a way, that is one of the things that people love about it. It isn’t flashy, it isn’t complicated, and in fact, it may be one of the simplest forms of communication in the history of modern communication tools. When I started using Twitter, I said to a group of librarians in Dallas that “Twitter is the dumbest thing I’ve ever become totally dependent upon.” And that is still true today. Everyone is Tweeting about how Twitter is run so poorly, and how terrible the service is… just look at the #TwitterFail tweets. I can only imagine that there were some people just frustrated yesterday that they had to wait until Twitter’s mid-day service shutdown was over before they could tweet about it.
Why Will We Forgive?
Twitter has built up some “Political Capital” with its users, and it is going to spend some of it over the next couple of days to make up for yesterday. Evan and Biz have done a great job of embedding their personalities into Twitter. And, both have made hundreds of thousands of virtual friends. And, we’re more willing to forgive our friends their faults than we would be a “business.” So, just like Larry the Cable Guy movies (Delta Farce, Witless Protection), they make some pretty awful decisions (e.g., a mid-day service shutdown, removal of @replies without telling anyone), yet the fan base still loves ’em.
What Can We Learn From Evan & Biz??
First of all, just because you are really really good at what you do, whether it be creating Internet communication tools, or practicing law… doesn’t mean you are automatically good at running the business. It’s okay to put your personal brand on your product (in fact, it is encouraged), but when it comes to handling complex business issues you need to find some experts in that area that can work behind the scenes. If Evan and Biz can’t do that, then it is really time to start thinking of selling off Twitter and start working on creating their next great invention.
“Political Capital” is a good thing to have, and a terrible thing to waste on bad decisions. Twitter has built a loyal customer base (excluding the post-Oprah crowd that is now disappearing), and this base is pretty forgiving. But try not to be careless or take for granted that this base will always forgive. Screw up enough to tarnish your image, and that’s when some competitor swoops in, takes advantage of your mistakes, and starts bleeding your customers away.
The Key Takeaway — “Political Capital” takes a long time to build up, and a short time to destroy. So, don’t waste it!!

Carolyn Elefant posted an interesting comment to Part 2 of our Crowdsourcing dialogue. She noted that “managing a crowd sourcing project can be difficult.” This brings us to explore our methodology and the art of crowdsourcing.

In a nutshell, what we’ve learned is that managing crowdsourcing staff is unique. In crowdsourcing you pay for very discrete tasks performed by anonymous workers. We envision the emergence of crowdsourcing staffing companies to meet this challenge. Their role will be packaging and structuring projects in the most effective and efficient ways. In this new environment, ‘employers’ will have to be quick on their feet to adjust to new worker behaviors and keep projects profitable.

Adjusting our Methodology

Ron Friedmann, in his comment to Part 1, suggests “recognition” as a motivator for our crowdsourced staff. Unfortunately, we don’t have much opportunity for that with an anonymous staff (although we’re open to creative ideas). However, we can make adjustments to our compensation method. In requesting researched information, we had three main categories of responses: 1) Obviously right, 2) Possibly right (or wrong), and 3) Obviously wrong. Our new approach will allow us to differentiate between these and pay the right amount for each type.

Instead of just a blanket ‘double-blind’ approach where we pay for two responses to all requests, we will only allow one response per request. Category 1 responses will be accepted and paid (almost half from our 1st experiment). Category 2 responses can be accepted, paid and re-posted for verification. Category 3 responses can be rejected and re-posted as many times as needed. This revised approach will give us better returns on our investment (such as it is).
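Scripted against MTurk's API, that category-based workflow might look something like the sketch below (using the modern boto3 client; the grading function and re-posting helper are hypothetical stand-ins for what is really a human judgment call):

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")
hit_id = "EXAMPLE_HIT_ID"  # placeholder for one of our posted requests

def grade(answer_xml):
    """Stand-in for the human judgment call: return 1 (obviously right),
    2 (possibly right or wrong), or 3 (obviously wrong)."""
    return 2  # illustrative only

def repost(note):
    """Hypothetical helper that would create a fresh HIT for the same
    request (e.g., via create_hit, as in the earlier sketch)."""
    print("re-posting:", note)

resp = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=["Submitted"])

for a in resp["Assignments"]:
    category = grade(a["Answer"])
    if category == 1:                       # obviously right: pay
        mturk.approve_assignment(AssignmentId=a["AssignmentId"])
    elif category == 2:                     # possibly right: pay, then verify
        mturk.approve_assignment(AssignmentId=a["AssignmentId"])
        repost("verification pass")
    else:                                   # obviously wrong: reject, retry
        mturk.reject_assignment(
            AssignmentId=a["AssignmentId"],
            RequesterFeedback="Answer did not match the request.")
        repost("retry")
```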

Under Consideration

We think this adjustment to our compensation model will produce better results at lower costs. The next layer of modifications to our method will come through how we design our requests (tasks). One thought is to take unanswered requests and reformulate the question for re-posting. Perhaps campaigns of follow-on, modified requests will bring us our desired results. In our current experiment, the request for a contact with the title GC could be modified to include broader titles or even outside sources. We might have to increase the payment amount in this situation (reflecting the greater effort or knowledge required), but that would make sense if the response information carried enough value.

Whichever direction this experiment takes, it is proving quite intriguing. We feel we’re on to something and will keep exploring the crowdsourcing idea to see what we learn.

After turning the MTurk crowdsourcing world loose on our project for about 30 hours, we decided to stop and take a look at the results of the experiment to see what we’ve learned so far. We’ll start out with the statistics, then follow up this post with an overview of our methodology and the things we learned in this initial test.

Recap

We took a list of 100 companies and asked if our MTurk workers could find the Last Name, First Name, Suffix, and a link to a bio of each company’s General Counsel. We asked that the user look only on the company’s website to find the information; we would not accept answers from external resources like ZoomInfo, etc. The questions were answered in a “double-blind” method where each question was given to two different people to answer. Here’s what the actual MTurk questionnaire looked like:

The Statistics

  • Actual hours used to answer questions – 6.3 hours. This meant that each answer took about 2½ minutes. Some of the questions were handled within a few seconds, while others took five minutes or more.
  • Average Pay Per Hour – $1.51 (+ 15¢ surcharge for MTurk per hour). When I looked at this, I immediately thought that I’d make a very good slum lord.
  • Total amount paid – $9.50 to workers + 95¢ to MTurk ($10.45 total). Again, not a lot of money, but from my research on the topic, the MTurk workers aren’t looking to make a lot of money; they take on these projects in their spare time. So, although I’d make a great slum lord, I guess I shouldn’t feel too guilty about it.
  • Over 330 individuals accepted the tasks, but only 161 (48.8%) returned an answer. Since we were only paying 10¢ a question, it is apparent that some decided it wasn’t worth the work, while others probably couldn’t find the answer and gave up.
  • Questions Answered – 161 of 200 (80.5%). Initially, I was floored by the fact that over 80% of the questions were answered. Once I started diving into the answers, it became apparent that not all of the answers were what we were looking for.
  • Total number of answers that were “acceptable” – 95 of 161 (59%). Of the 161 answers we received, only 95 were deemed to have correctly identified the General Counsel of the company.
  • Companies with at least one answer – 86 of 100 (86%). Although there was an 86% return rate, not all of these companies had correct answers (more below).
  • Companies that received a double-blind entry – 72 of 86 (83.7%). These were companies for which two different individuals returned answers.
  • Companies that received only one answer – 14 of 86 (16.3%). However, of those 14, only about two were correct. The rest either pointed to the Chief Executive Officer as the GC or were just plain guesses at the answer.
  • Double-blind answers where both answers matched – 46 of 72 (63.9%). These were answers where both individuals found the same name and URL (a toy version of this tally appears in the sketch below). Generally this is a good indicator that the information is correct, but we found out that this isn’t always true (more below).
  • Companies where the General Counsel information was found – 52 of 86 (60.5%). We looked at the answers individually and discovered that for 34 (39.5%) of the companies where we received some type of data back, the MTurk worker had identified the wrong person as the General Counsel. Most of the incorrect answers identified the Chief Executive Officer as the General Counsel. In one case, the person answered “none” (which was technically correct, but wasn’t included in the stats above).
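For the curious, here's roughly how that double-blind tally works, as a toy Python sketch (the data structure and sample rows are made up; our real answers came back as spreadsheet downloads from MTurk):

```python
from collections import defaultdict

# Toy recreation of the double-blind tally: group answers by company
# and count the cases where both workers returned the same GC name
# and bio URL. Sample rows below are illustrative only.
responses = [
    {"company": "Acme Corp", "gc_name": "Jane Smith", "bio_url": "https://acme.example/bio"},
    {"company": "Acme Corp", "gc_name": "Jane Smith", "bio_url": "https://acme.example/bio"},
    {"company": "Globex",    "gc_name": "John Doe",   "bio_url": "https://globex.example/a"},
]

by_company = defaultdict(list)
for r in responses:
    by_company[r["company"]].append((r["gc_name"].lower(), r["bio_url"]))

double_blind = {c: a for c, a in by_company.items() if len(a) == 2}
matches = sum(1 for a in double_blind.values() if a[0] == a[1])

print(f"{len(by_company)} companies answered, "
      f"{len(double_blind)} double-blind, {matches} matched")
# With our real data: 86 answered, 72 double-blind, 46 matched (63.9%)
```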

When the test was over, we ended up with solid answers for 52 of the 100 companies, and a good guess that 20-25 companies probably didn’t have a GC on staff. That is a great result for something we’ve spent $10.45 to get. We’ll discuss more of the methodology and some of the surprises we found while conducting this test in later posts.

After Greg’s post on crowdsourcing, he and I met to explore how we might push this envelope a little further. We decided to run an experiment to see how well crowdsourcing might work for a firm. We wanted to show immediate value and cost savings and to demonstrate which types of tasks might be handled this way.

The Setup

Tool: We (mostly Greg) decided to use MTurk from Amazon as the tool. After some research on price ranges, we chose to pay our workers 10 cents per task. That appears to be an average price based on what we saw and it’s cheap … like Greg.

Project: We know law firms love to have information about General Counsels (GCs), so we selected that as our task. Greg was able to pull a list of companies from two different markets: 50 from the Midwest and 50 from the SF Bay area. These companies are listed on the site for the project. Then we asked for the following from each company – GC first name, last name, and a link to their online bio. The bio had to be from the company site. We let them know the GC may have another title, like Chief Legal Officer or Corporate Secretary.

The Method: To test for and ensure quality, we did two things. First, Greg ran a report from another system, so we already knew the answers to our question. Second, we allowed two people to respond for each company GC. This ‘double-blind’ approach would serve as a quality check of the information we obtained. Our budget for the experiment – $22.00: $20.00 for the task payments and $2.00 for the MTurk fee.
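The budget math is simple enough to sanity-check in a few lines (the 10% commission rate is inferred from the $2.00 fee on $20.00 of payments, which matches the 95¢ fee on $9.50 we reported earlier, not quoted from Amazon's fee schedule):

```python
# Budget sanity check for the GC experiment.
companies = 100
responses_per_company = 2     # the 'double-blind' quality check
reward = 0.10                 # dollars per task

worker_cost = companies * responses_per_company * reward   # $20.00
mturk_fee = worker_cost * 0.10                             # $2.00 (inferred 10%)
print(f"Total budget: ${worker_cost + mturk_fee:.2f}")     # -> $22.00
```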

Initial Response: Within the first hour we had 20% of our responses in at a cost of $4.24 per hour. The quality seems to be high, but a full analysis will come once we close the project.

Obviously the crowdsourcing approach will have limitations as to the types of tasks and information we collect. But our initial assessment is that this idea has merit. It appears to have hit Greg’s trifecta: Cheap, Easy and Fast.

More to come …

In the age of “doing more with less” there have been numerous things that we’ve just had to stop doing because the costs outweighed the benefits. For example, staff members may have tagged news articles by certain internal taxonomies in order to build daily newsletters that get sent to the attorneys in your firm. Or, you may have had data stewards that reviewed information that went into your CRM databases. Unfortunately, when the staff was cut, or ratios were reduced, these tasks could no longer be supported.
Almost all law firm administrative departments, from KM and the Library to IT and even Secretarial Services, have had to cut projects and tasks because there simply wasn’t enough time, people, or money to complete them.
Perhaps you’d like to outsource these projects, but even outsourcing can become too costly for simple tasks, and it is nearly impossible for ad hoc tasks that may take only a few hours to complete when you cannot take your staff off current projects to work on these smaller ones.
There are businesses that have been using crowdsourcing techniques to help them proofread documents, identify the best photographs for a specific topic, suggest improvements to an existing product, and even help create new templates for their sneakers. But can a law firm leverage crowdsourcing? Are there specific information databases out there that we’d like to have at our disposal but do not have the staff to compile, or for which we do not want to spend thousands of dollars to buy the information from one of the big legal vendors?
There are opportunities for law firms in crowdsourcing, whether it is tagging documents with legal topics, data steward work, or data compilation. I’m currently looking at some of the outsourcing and crowdsourcing projects that are out there (to be blogged about at a later date), but I thought I’d pose some quick questions for the reader:
  • Are there tasks/projects that law firms can ethically crowdsource?
  • Assuming that the price was right, would law firms even consider crowdsourcing?
I think law firms can take advantage of crowdsourcing... now, whether they actually will or not remains to be seen.