Let me start off by saying that I support the ideas, principles and declarations made by the 33 individuals who have placed their signatures on that statement. Putting the laws that we citizens live under in a format that is freely available, in a format that is not tied to a specific vendor or physical format, and having the institutions that write these laws responsible for making sure they are immediately available is a great idea. However, when I scan down the list of signatures of those supporting these ideals, I see something missing… and that is a signature from any member of the Judicial, Legislative or Executive branch, state or federal, who would have the power to move these ideals forward. In fact, other than Judy Meadows, the State Law Librarian of Montana, all the other signatories are members of top-tier law schools, publishers, or individuals with no direct link to the courts or government agencies that they are asking to adopt these ideals.

It reminds me of when I was with the Oklahoma Supreme Court and we adopted the Universal Citation System for all of our cases based on the AALL Citation Format Committee’s guidelines. We started the process back in 1996 and by 2001 had all 60,000+ Oklahoma Appellate Court decisions online, with the vendor-neutral citations, and freely available to everyone. When I went up to a committee member in 2001 and said that we need to get the word out about Oklahoma’s success and start talking to other State Supreme Court Justices, Bar Associations and Court Administrators to get them to follow suit, you know what I got?? Nothing. The reason? The committee was no longer focusing on cases, and was now interested in finishing the guidelines for Administrative Rulings. The “process” of establishing additional guidelines trumped the “action” of getting courts to move forward on actually adopting and implementing the already established guidelines.

I really hope that the Law.Gov folks are not following in the footsteps of the AALL citation committee.

There is a lot of clout in having the signatures of Deans, Professors, and Law Librarians from distinguished schools like Harvard, Yale, Cornell, Princeton, Stanford, Berkeley and others, and that should not be dismissed, because it is important. However, until you start getting some signatures from State Court Administrators, State and Federal Judges, Governors, State and Federal Legislators, and State and Federal Agency Directors, it is still an exercise in academia… and will never make it past this phase.

Getting these types of signatures won’t be easy (if it were, they’d already be on there). But there are states out there that are already doing very well at a number of the things that the Law.Gov principles and declarations are asking them to do. Why not go out and recruit these Judges and state officials to sign, in order to show their peers that it can be done and what benefits they have reaped from making their laws and regulations more transparent?? A State Supreme Court Justice is going to take notice of a peer from another state before he or she will listen to a law professor from an Ivy League school. Getting one of those signatures should be Law.Gov’s current mission.

[Note: See Ed Walters’ follow-up to this post – “Why .Gov is at the *End* of Law.Gov”]

I was at a party the other night and this guy was showing me his Android.

I really would like to have one but I was telling him that the touch screen gets in the way of my long nails. It is bad enough using the Blackberry keyboard with these nails.

Well, this guy showed me an app he uses: Swype.

With the touch of a finger, you can swipe out 40 words per minute. It is a QWERTY keyboard that is also smart enough to automatically identify spaces, capitalization and misspellings. It also learns new words, adds them to its 65,000-word dictionary, and supports a number of foreign languages.

It’s been a while since we’ve mentioned anything about Wolfram Alpha’s “computational knowledge engine”, but they have released a neat little tool to create your own widgets. The widget builder makes it very easy to build a widget where you define your input and allow that input to be manipulated.

For example, I set up a simple widget to pull the nutritional value of something:

I still haven’t found a lot of use for Wolfram Alpha when it comes to “legal research”, but it is still an outstanding resource for more technical types of searching. Here are some good examples of other widgets that Wolfram Alpha featured on its blog today:

Very cool stuff!!  If you find a way to make this relevant to legal research, please share that with us!!

Two things jumped out at me in yesterday’s discussion on the PLL Listserv regarding paid online services:

  1. Firms are in the midst of a major debate on Cost Recovery
  2. Librarians are worried about the subscription services of the future

For #1, I have made my position on cost recovery clear in both a blog post here and an article on the subject in Spectrum [pdf]. These lay out what I feel is a way to recover costs that is fair to both the client and the firm. These costs are so onerous that firms have to be able to recover them to stay in business. Whether it gets built into rates or fees, or remains a separate line item, doesn’t make a difference. The important thing is offsetting these huge expenses as much as possible to remain profitable.

As for #2, the future User Interface (UI for short) needs to be able to:

1) Search across all of my subscription resources

Our firms purchase content from multiple vendors, and it is becoming more and more challenging for our attorneys to find the materials they need. They would be able to work better if they could access online materials the same way they do materials on the shelf. On the shelf, West materials sit in the same section as Lexis and so on. We have built a UI (Full Disclosure: our UI is hosted by LexisNexis) that allows attorneys to find materials based on how they work, not who publishes them. A tab is set aside for each practice, and links to online materials are grouped in ways that mirror their workflow. For example, under the Litigation Tab you would have items grouped by Civil Pretrial/Discovery, Civil Trials & Civil Appeals, not by West or Matthew Bender.

This is where I think WestlawNext tripped up. I have been involved in contracts for Lexis and Westlaw for over 20 years and every one of them was content-based. Over that period, we have seen some major evolution in both of these products, not the least of which was the shift from a software- to a web-based search platform. Not once during this time have I been charged extra for the changed UI. Now, if this allowed me to search across multiple platforms I might consider it worth the premium. But I can’t see justifying the extra expense to search just the same resources that I have been able to search for years without trouble.

2) Allow me to set the per search charge to meet my unique requirements (See the Spectrum Article referenced above)

3) Make research more efficient

There has been a lot of fuss made this year over WestlawNext, CCH IntelliConnect, FastCase and now Lexis Advance. At AALL in Denver, a distinguished panel of executives from West, FastCase & Lexis discussed how they developed their respective UIs. While the conversation was interesting (despite being sidetracked at one point into a debate over treatises vs. primary resources), what I found fascinating was that none of the panelists addressed how they were designing their UIs to make attorneys work more efficiently. They seem to have missed an important factor in crafting their products: what is important to law firms is not how the service finds the answer for the attorney/researcher, but what allows him/her to do so in the least amount of time at the least cost. “Googlizing” legal research is not the answer. Having to go through reams of hits actually makes the user less efficient.

I think that vendors are not seeing the direction that Law Firm legal research UIs are headed. We want to organize our content our way, using a single point of access. And we want the systems to make the user a better researcher.

It sounds like Wylie is pouncing on the opportunity presented by Amazon’s announcement of its new Kindle to put a finer point on his battle over e-book digital rights.

Coupled with Amazon CEO Jeff Bezos’ prediction that e-books will outpace paperbacks by 2012, Wylie is now threatening to expand his negotiations from a 20-book deal with Amazon to a 2,000-book deal if publishers don’t get a handle on digital royalties.

This goes back to a post I wrote two days ago. Last week Wylie set up a publishing house called Odyssey Editions, and negotiated e-rights with Amazon for a two-year period on 20 classics.

Wylie says he is “only trying to make a point” so that the two revenue streams will be addressed together in all publishing deals.

Prior to the mid-’90s, publishing deals did not address digital rights, so authors of books written before that time (and the agents representing them) retain the digital rights.

Now Wylie’s got a stable full of some of the best modern literature on the planet.

Sounds more to me like he is trying to force Random House’s hand. And make a lot of money.

A recent article from electronista notes that “23% of IT managers may already use iPads.” First off – I am happy to report that my CIO is among this group.
Why is this stat important?
Although we tend to think of IT people as forward-thinking geeks, a good number of them have, over the years, become more like Mordac – The Preventer of Information Services from Dilbert. This is the case for very good reasons. I recall a frustrating conversation with the Director of IT at a former firm. He was adamant about the stability of the network taking absolute priority over any technology innovations. When pressed on this, he noted that partners didn’t call and yell at him for not upgrading the office suite in a more timely fashion. Much more frequently, he received calls when things didn’t work right. What I learned from this experience is that IT gradually became a force for the status quo over innovation. It’s not that geeks don’t like cool new IT stuff. They just don’t want to support it for a bunch of impatient lawyers.
So I was pleasantly surprised to see IT managers embracing the iPad as a new technology. This suggests a shift in The Force. IT managers are seeing that, more and more, their customers (lawyers) are calling to ask why they can’t have an iPhone. Lawyers expect stability, but are now also trying to keep pace with technology at their own level. And they are bringing this pain to the IT department.
Even more encouraging in the article was the comment that an additional 18% of IT managers plan to buy an iPad in the next year. I’m not suggesting the iPad is necessarily the next wave of tech for law firms. But these stats do represent a new trend for how IT is approaching technology in the enterprise.
My CIO is truly embracing this new paradigm for IT: “I can say from experience, that I was not particularly interested in the iPad. I had a mild curiosity, but would not lay down the ducats to play with one. However, having spent the better part of two months with one, I would not want to go without. The iPad truly is a transformative technology.”

Baker Botts announced yesterday that they are moving into the next phase of their “Associate Attributes Model [pdf]”, established in 2008, through the implementation of the “Talent Management Program”. The idea is to move Associates off of the lock-step model that has been prevalent in law firms, and toward more of a merit-based system that evaluates the talent and performance of the Associate to determine his or her “level” within the Associate ranks. So, old Army guys like myself can think of it as a pseudo-military officer ranking:

  • “Junior Associate” = 2nd Lieutenant
  • “Mid Associate” = 1st Lieutenant
  • “Senior Associate” = Captain

Just like in the military, the people you graduated with at the Academy (these are top firms… so no ROTC’ies) wouldn’t automatically get promoted at the same time. Three factors are used to determine the skills of the Associates, including “Core Attributes”, “Professional Skills” and “Interpersonal Skills”, and those are assessed through a rigorous five-step process by individual supervising lawyers and evaluation committees. Here’s a list of the five steps:

Step 1: Associates Are Requested to Identify Supervising Lawyers
Step 2: Departments Form Evaluation Committees
Step 3: Evaluation Committee Interviews Supervising Lawyers
Step 4: Data Compilation and Evaluation
Step 5: Associates’ Formal Evaluation Meetings

The overall purpose of this evaluation and the three levels of Associates is to respond to clients’ requests “for a clear demonstration that lawyers’ experience and capabilities correlate to their billing rates.” Of course, Baker Botts adds that they will still be paying their entry-level associates $160,000 a year.

It’s this last part (the salary) that makes me wonder why Baker Botts is going to all this trouble to look like they are breaking away from the old “business as usual”, only to fall right back into the trappings of BigLaw version 2007 all over again. I won’t even get into the logistical nightmare of trying to get partners to sit in and effectively work on “evaluation committees.”

I could be off base here, but it seems right now that it is a buyer’s market when it comes to hiring talent out of law schools, and it doesn’t seem like any firms are taking advantage of this. Why is a Texas-based firm that is hovering in the mid-40s of the AmLaw 100 paying the same salaries for Associates as a top-5 firm? The talent pool isn’t shrinking, but the amount of talent that is being pulled out of that pool is. So, there is a lot more talent available for the picking… and many of them are not going to wind up in a big firm. Add to that the huge number of talented 3rd, 4th and 5th year associates that are still looking for work, and you’d think that firms would be sitting pretty, getting talent at salaries closer to 1997 levels than 2007 levels. My co-blogger Toby is the one with the Master’s in Economics, but even I seem to understand a little about supply and demand (which seems to have eluded some firms out there).

It just seems that adding levels and evaluating the talent of Associates is a good start, but if you’re just going to continue the same recruiting and hiring techniques used in the past, coupled with the “pay whatever the top guys are paying and we’ll get equal talent” idea, then you’re just not dealing with the reality of how to manage talent, salaries, billing rates, and client expectations. Perhaps these steps are coming in the next phase of the Associate Attributes Model?


As a literature lover, I spied this story in the Financial Times: according to the latest news on the e-book front, HarperCollins UK is joining Random House in its protest over the deal between literary agent Andrew Wylie and Amazon.com to create electronic versions of 20 classics.

Wylie’s classic collection includes Lolita, Invisible Man, Fear and Loathing in Las Vegas, Updike’s Rabbit novels, and my personal favorite Jorge Luis Borges’ Ficciones.

According to Random House v. Rosetta Books, publishing contracts negotiated prior to Random House left the e-rights with the author. After Random House, all standard publishing contracts now give these e-rights to publishing companies.

So Wylie, living up to his name, broke his relationship with Random House to form his own Odyssey Editions e-publishing house for his stable of classics and negotiate his own deal with Amazon.

Note: Random House still holds the print rights to these classics.

The outspoken Authors Guild, which is watching the matter closely, suggests that Wylie may be engaged in self-dealing as he is acting as both buyer and seller. The Guild also raises concerns about the exclusivity of the deal with Amazon, hinting at possible antitrust issues.

If Wylie’s deal goes through, it could substantially change the amount authors receive upon digital sale, moving their rate from 25% to 60% of the retail price, and cutting out the middle–err–publishing companies.
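To make the stakes concrete, here is a quick back-of-the-envelope sketch of what that royalty shift would mean per copy. The $9.99 retail price is a hypothetical figure chosen for illustration, not one from the actual deal; only the 25% and 60% rates come from the reporting above.

```python
# Per-copy author royalty under the old and proposed rates.
# Retail price is a hypothetical; rates are from the reported deal terms.
def author_take(retail_price: float, royalty_rate: float) -> float:
    """Author's share of a single e-book sale."""
    return retail_price * royalty_rate

old = author_take(9.99, 0.25)  # traditional publisher e-book rate
new = author_take(9.99, 0.60)  # rate reported for the Odyssey/Amazon deal
print(f"old: ${old:.2f}, new: ${new:.2f}")
```

Roughly $2.50 versus $5.99 per copy on that hypothetical price, which explains why agents holding deep backlists are paying attention.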

Viva la authors (and their estates)!

But I would like to raise another point: due to Digital Rights Management (DRM) issues, these e-books could be as nebulous as the clouds on which they reside. If DRM is initiated by the publisher, the classic is neither a classic, nor is it timeless. DRM creates a limited download that may not be preserved on your Kindle.

There are known issues surrounding the Kindle’s download capabilities and transfer limitations. If you buy it on the Kindle 1, you lose it on the Kindle 2. So, in essence, the consumer is merely auditing a copy that can be revoked by Amazon at any time.

So my question to Wylie and the Authors Guild is: if Wylie succeeds, will he invoke DRM to limit the accessibility of these classics?

And, to add another layer, how does Wylie’s agreement interact with Google’s Book Settlement Agreement?

As I have indicated in an earlier blog post, I am no fan of the Kindle. Let’s just hope Wylie has some love for the art and its artists. But I really don’t have a good feeling about this.


Technologists often get approached with opportunities to solve problems using technology.  We sit down and talk about the user’s desires.  This happened to me last week – Billy called asking for help solving a problem, but once we sat down, Billy said “I read this article about this software product that firm x is using and we should have the software too”.  The challenge here is to get the focus back to needs rather than solutions.  Without refocusing the discussion, you are going to end up with more shelfware (software that is owned but not used).
Once refocused and with a clear description of the problem on paper, we started to carve out the solution requirements (the spec).  I asked Billy to describe how he would imagine using this tool.  I encouraged him to approach it like the proverbial kid in a candy store.  If you can have anything you want, what would that be?  This is an important step, it allows me to understand what language Billy is comfortable using.  I can pick up on key terms and reuse them when speaking with Billy.  It gives me some insight into Billy’s understanding of technology and how important he feels this solution really is. 
This is where the dialog generally goes in one of two directions:
  • the dialog is very productive, with a lot of back-and-forth and enthusiasm; or
  • Billy simply grows quiet and makes a statement like, “just give me something to play with”.  Billy has disconnected and doesn’t feel I have the ability to understand his needs.

Here are a few suggestions to help prevent the disconnect:

  • Speak a Common Language – Oftentimes technologists speak one language while the user speaks another.  This leads to communication breakdown, and the user tends to disengage, feeling that the technologist just doesn’t get it.  Technologists need to learn the business and speak the business language.  This will serve the technologist well, giving him or her more credibility with the user.
  • Lose Assumptions – Just because I understand what you mean by x, y and z, you should not assume I also understand a, b and c.  We need to spend the time to really understand the entire process.  Without such understanding, we are going to have a partial solution at best, and oftentimes what gets delivered is far from what the user imagined.
  • Set Expectations – Most people have never worked on providing a technology solution; they have no idea how long it takes to build, test, fix, re-test, fix and deploy.  It is important for the technologist to provide a good ‘guestimate’ of the time needed.  This is essential for setting expectations.  Without this, the user is going to expect the solution in an unrealistic period of time.  I cannot tell you how many times I have heard “Heck, my son can build a website in an afternoon”.

Solving technology problems is much like purchasing a house.  Is there a solution on the market that meets your needs?  This approach is like buying a house that is already built.  Depending on budget, you can get a house that meets your needs, but you will not have the flexibility to make the house exactly as you imagined.  For many this is great; the builder has already answered all those pesky questions.  If there is no solution currently available, then you need to build the house.  You will start with an architect, and he or she will ask a bunch of questions.  How many bathrooms, how many bedrooms, three-car garage, one story or two?  You get the point.  This process takes much longer than buying something already built, but you will get, budget willing, exactly what you want.  During those initial meetings the architect will not ask you what types of countertops you want in the kitchen and bathrooms; those types of questions come later.  The first thing is to agree on the basics of the house (the framework).  Likewise, with technology you need to start with the basic framework.  All too often the user is more interested in the placement of buttons or the colors of the background than in discussing the problem.  That is like selecting countertops before discussing floor plans.

Providing technology solutions is an iterative process.  You start by framing the issue or opportunity and putting general ideas on paper.  Once both sides are comfortable with the framework, the technologists need to work through the build or buy process.  Is there already a technology solution available that addresses these needs?  If there is nothing on the market that meets the needs, you will need to proceed with building the solution.  The process must include the providers of the solution and the consumers of the solution, not just the original architect and the requester.  It must be an iterative process with everybody helping to create the solution.

Somewhere between reading Jason Wilson’s post on “Exploded Data, the Legal Web and What We’re Missing” and Toby’s post on “KM 3.0 = Analysis“, my brain started to smolder from all the ‘future of data’ discussion. Jason takes a 33-word sentence as an example and shows how 66 individual pieces of “exploded data” (which look a lot like XML structured data) can be extrapolated from it, and the ‘explosion of data’ could probably have easily continued for at least another 66 categories. Toby talked about predictions that the amount of information stored in the world today has surpassed the zettabyte threshold and is continuing to grow as we get “better-faster-cheaper search and retrieval systems.” Wilson ends his post with these two sentences:

The question is whether we will step up to organize this sea of data, or wait until a program can do it for us. If the latter, what does it say about the future of legal research and the practice of law?

I don’t think that the problem is a “man vs. machine” issue, but rather one of how man can make machine better, and vice-versa. Regardless of the amount of crowdsourcing you throw at legal information, there is just no possible way for humans, alone, to explode the information. Add to this the fact that there is no way for a computer algorithm (at least at this point in history) to do this either. Right now, it has to be a combined effort of humans working to ask the right questions, set the right conditions, and bring in the right experiences in order to help the computer algorithms or data structures establish methods of creating “smart information.” The overall process is for the human editors (curators) to get the information more quickly from “dumb” to “smart” through the use of technology, then work on tweaking the technology from time to time in order to make the smart information even smarter. Just as in many industries, the core functions of legal editing will become more and more commoditized, and the work will focus on establishing the next piece of smart information rather than on uncovering the hidden data of existing information. Toby told me today at lunch that currently “humans ask the better questions when it comes to how to handle information.” He went on to say, however, that there will probably come a day when machines may actually start being better at asking the questions. What does that have to say about the future of information?
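As a rough illustration of what “exploding” a sentence into XML-like structured data might look like, here is a minimal sketch. The tag names, attributes, and example sentence are all invented for illustration; Wilson’s actual markup scheme is certainly richer and may differ entirely. The point is only to show the human-plus-machine division of labor: a person supplies the annotations (the “right questions”), and the code mechanically turns them into structured, queryable data.

```python
# A minimal, hypothetical sketch of "exploding" a sentence of legal text
# into structured XML. All tag names and attributes are invented here.
import xml.etree.ElementTree as ET

def explode(sentence, annotations):
    """Wrap a sentence in an XML element and attach each human-annotated
    span as a child element carrying its metadata."""
    root = ET.Element("sentence")
    root.text = sentence
    for span, tag, attrs in annotations:
        child = ET.SubElement(root, tag, attrs)
        child.text = span
    return ET.tostring(root, encoding="unicode")

# A human curator supplies the spans and categories; the machine
# does the tedious work of structuring and serializing them.
doc = explode(
    "The court granted summary judgment on March 3, 2010.",
    [
        ("court", "entity", {"type": "tribunal"}),
        ("summary judgment", "action", {"type": "disposition"}),
        ("March 3, 2010", "date", {"format": "iso", "value": "2010-03-03"}),
    ],
)
print(doc)
```

Even this toy version shows why the effort has to be combined: the code cannot decide that “summary judgment” is a disposition, and no human wants to hand-type thousands of XML elements.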