Image [cc] Xtreme Xhibits

Whenever I try to explain to my friends and family what my job is as a pricing guy, they usually give me a blank stare. My kids have even commented that they think I might actually work for the CIA, since I can’t seem to explain it well. The reason is that pricing jobs are unique in the legal profession and seem to change on a daily basis.

Thankfully, Lisa Gianakos has come to the rescue, writing an article in the just-released edition of Practice Innovations. She interviews three pricing professionals (Matt Laws, Kristina Lambright and Bart Gabler) and gives a nice look into what those of us in these roles do on a day-to-day basis.

So if you have ever wondered what we do, or if you are one of us and struggle to explain it to your friends, you can now send them here.

Thanks to Lisa and Practice Innovations.

Oh yeah, there are a number of other excellent articles in the issue as well. I suggest you take a look.

With all this talk and blogging about AI, Big Data, metrics and analytics, pricing protocols, KM, Six Sigma, Lean and Agile, I wonder if I am working in a manufacturing shop or a law firm. In the world of manufacturing, widget A can be compared to widget B; the two widgets can be taken apart, reverse engineered, put under stress tests and compared against each other down to their composite parts. But if you’ve ever done what I call the website practice description test, you will know that law firms all use eerily similar language, nuance and style to describe what they do and for whom they do it. And yet each law firm is unique; there is something that makes one firm embrace AI or LMP while another will shy away from anything other than the billable hour. What then is the *real* differentiating factor for law firms? Culture.

Culture is hard to define. Dictionary.com says this of culture: “the quality in a person or society that arises from a concern for what is regarded as excellent in arts, letters, manners, scholarly pursuits, etc.” Firm culture, then, in my opinion, is what a firm or firm leadership regards as excellence in the practice and business of law. For some firms, culture is about billing, billing, billing: the relentless pursuit of commercial success and heaps upon heaps of billable hours. Though arguably an outdated, machine-like model, this type of firm culture does still exist. It is the kind of caricature you would imagine law firms would take on in a Tim Burton movie. Other law firm cultures are built on a solid foundation of hierarchy, ruthless behavior and one-upmanship. This is the kind of culture that the Anonymous Lawyer blog and book were predicated on, and to some extent it does still exist. Many of us are familiar with these firms either by experience, by anecdote or by TV portrayals. Other firms are not firms at all but a loose collective of lawyers with a firm culture that resembles a start-up or a technology company – think foosball tables, MacBook docking stations and white open-concept spaces – rather than mahogany, privacy partitions, and gleaming reception areas.

The success or failure of firms to be commercially successful, embrace or reject technology, and encourage new management roles and processes is, in my mind, all tied to culture. One culture does not necessarily suggest success and another failure, but the ability of a firm to pursue its quality of excellence – however it defines and measures it – rests solely on its ability to maintain its cultural balance in every interaction. Little gestures, such as ending emails with “Smiles” or “no response required” or offering clients the use of a firm’s meeting spaces, or larger firm discussions around collaboration, sharing of financial data within the firm or making use of the Cloud in technology initiatives, each point to the culture of the firm and reinforce for partners, clients, staff and business partners what a firm ultimately privileges. I have often wondered how it could be that laterals who were floundering at one firm move to another and are suddenly rainmakers or lauded as the best of the best in the business, or how one firm can implement a new software tool at a significant cost while others wouldn’t touch that same tool even if it was free. The answer is, of course, “fit” or culture. Unlike in manufacturing, where the goods produced undergo strict quality assurance testing, it is the people who work in firms each and every day that offer up the de facto QA testing, turning ISO (certification) into IMHO.

As bloggers, it is our job to bring you the latest and greatest (or not) in law firm trends, technology, professional development opportunities and just plain intellectual sparring. But I hasten to remind readers that each and every firm or in-house legal department has a unique culture or signature that will determine what may or may not work in your specific firm or within the context of your role. The glint of the shiny new toy is always appealing, but it may not perform well for the work you and your firm are trying to do. Over time, cultures change. Big Data, AI and LMP may run the legal world one day, but in the meantime remember that sometimes it is the simplest methods, tried and true, that are the best fit for your firm’s culture right now.

The third law of prediction from the late, great Arthur C. Clarke is that “any sufficiently advanced technology is indistinguishable from magic.”

If Sir Arthur were writing today, I think he might have replaced ‘magic’ with ‘artificial intelligence’.

AI has become our modern sorcery. It’s both our savior and our bogeyman. It will most likely show us how to be better human beings right before it destroys our worthless, pathetic lives, because it’s a vindictive demon unleashed by godless computer scientists.

We define AI, like magic, as anything we don’t completely understand.

Is Google artificially intelligent? If not, why not?  You ask it questions, or just type in a few words, and it goes out and returns information from all over the world related to the question you asked or the words you entered.

“Well,” you say, “it’s just a complex algorithm running on really powerful servers that takes into account the words I searched for and their prevalence on certain web pages, and then it returns those pages in order of decreasing popularity. That’s not really intelligent.”

Most of us do not consider Google ‘artificially intelligent’ because we understand it.  Or more accurately, because we have a general heuristic to explain how it does what it does.
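To make the “just a complex algorithm” intuition concrete, here is a deliberately crude sketch in Python. It is nothing like Google’s actual ranking system, which weighs links, popularity and hundreds of other signals; it simply scores a couple of made-up pages by how often the query terms appear, which is roughly the mental model most of us carry around:

from collections import Counter

def toy_search(query, pages):
    """Rank pages by a crude term-frequency score for the words in the query."""
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = Counter(text.lower().split())
        scores[url] = sum(words[t] for t in terms)
    # Highest-scoring pages come back first, like a results list.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical pages, purely for illustration.
pages = {
    "example.com/pricing": "law firm pricing metrics and analytics for law firm leaders",
    "example.com/recipes": "how to cook a hot pocket in the microwave",
}
print(toy_search("law firm pricing", pages))
# -> ['example.com/pricing', 'example.com/recipes']

Laid out like that, nobody is tempted to call it intelligent, which is exactly the point.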

Watson winning Jeopardy? OMG, it’s AI! Oh, well, actually, the computer was fed the clues in an electronic text format while the other contestants read/listened to them. Then it searched its massive data banks for relevant answers and gave a response. Kind of like the “I’m Feeling Lucky” button on Google’s home page, if it provided answers in the form of a question. (What are ‘Pictures of Pamela Anderson?’) Incredible, amazing, but if you’re like me, somehow less impressive when you know it wasn’t listening to or optically reading the questions. Why? It doesn’t really diminish the accomplishment, but it also feels somehow less “intelligent.” I think the problem is that, like the Wizard of Oz, we’re a little less impressed once we glimpse the man behind the curtain.

As an alternative to Clarke’s law, let me submit: “the more you understand how a technology works, the less likely you are to think it is magic/AI.”

So when Altman Weil asked this question about AI, it’s no wonder they got the responses they did.

  • In 2011, a MAGIC computing system called Watson defeated two former Jeopardy! game show champions, demonstrating the power of MAGIC. Since then, Watson’s performance has improved by 2.4 BA-JILLION percent and the IBM MAGIC Group is reportedly working with a number of legal organizations on a variety of MAGICAL applications for the profession. Can you envision a law-focused MAGIC ‘Watson’ replacing any of the following timekeepers in your firm in the next 5 to 10 years?  

OK.  So, I may have changed a few of the words in the original question, but I guarantee that’s much closer to the way most respondents actually read the question.
Here’s the thing: the question is flawed on many levels, but primarily because none of the answers are correct. It’s just as wrong to believe that any of these jobs will be specifically replaced by computers as it is to believe that they will never be replaced by computers. 
The correct answer is: AI will enhance, change, and restructure what it means to work in a law firm.  It will change the nature of the work that lawyers and staff do.  It may reduce the workload so that fewer individuals are needed, or it may make it possible for more individuals to do that much more work, but it is quite unlikely that people working in law firms in 5 or 10 years will be doing exactly what they do now.
Is that the same as being replaced by AI?  Is there no need for car mechanics, now that cars are all computerized?  Is Memorial Sloan Kettering Cancer Center firing their doctors now that they’ve rolled out a version of Watson to do cancer diagnosis? Does IBM have plans to use their super-intelligence to phase out the engineers that built their super-intelligence?
It’s time to cut the hysteria surrounding artificial intelligence in law. It’s not the all-knowing super-intelligence that you think it is, but it is here now in many applications and platforms. It’s not going away, and it’s only going to get better over time. The only way that AI is going to replace your job is if you choose to think of it as magic that cannot possibly be understood, and consequently, you remain ignorant of its current capabilities and limitations.  
The best way to defend your job from the machines is to learn how they work and how to use them to do your job better.  And that is true whether you’re a paralegal, first year associate, partner, secretary, or technologist.

I wanted to put out a pitch for the AALL Leadership Academy and suggest that if you, or someone who works for you, are looking to hone your leadership skills and network with experienced leaders and peers within AALL, then you need to take a look at this program.
The deadline for applications is coming up on November 4th, and the Academy itself is on April 1-2, 2016, in Chicago (actually, Oak Brook, at the McDonald’s University campus). Below are some details about the Academy, and I’m sure that Celeste Smith, AALL Director of Education, will gladly answer any additional questions you have.

Lead Effectively: AALL Leadership Academy

April 1-2, 2016
Hyatt Lodge
2815 Jorie Blvd
Oak Brook, Illinois 60523

The 2016 AALL Leadership Academy will equip emerging leaders with essential leadership skills and the tools to be effective leaders. The Academy has a solid reputation for building strong leaders who are prepared to meet the challenges facing today’s law libraries. Investing in emerging leaders builds a talent pipeline, supports the long-term mission of your institution, and supports the profession. The program will include discussions to explore key leadership concepts and current trends, assessments to identify strengths and preferences, small and large group collaboration, and focused development activities.

About the Academy:
  • Open to: AALL members with up to 10 years of experience
  • Cost: $575
  • Apply by: Wednesday, November 4, 2015, 5:00 p.m. (CST)

Criteria for consideration:

  • Library/law degrees held
  • Leadership and/or service record 
  • Statement describing the personal/professional benefits of attending
  • One professional recommendation

Why you should encourage members of your staff to apply:

  • The academy utilizes current leadership best practices and research-based techniques.
  • It is designed to give participants practical tools and strategies that will help them emerge as effective and confident leaders.
  • Effective and confident leaders inspire other staff, which creates a thriving organization.
  • Trained leaders take more initiative and support the organization’s strategic goals.

What participants will learn:

  • Leadership models, concepts, and myths
  • Effective and assertive communication techniques
  • Leadership styles and approaches 
  • Strategies for difficult conversations
  • Core leadership values
  • Motivational leadership
  • Influence and workplace ethics 

[Ed Note: Please welcome guest blogger, Susan Kostal. Susan is a longtime legal affairs journalist who also offers marketing advice and media coaching. Follow her on Twitter at @skostal.]

LMA Tech’s annual in-house counsel panel is always one of the biggest draws of the conference, and this year was no different. Last week’s discussion in San Francisco, moderated as always by Nat Slavin of Wicker Park Group, could be called the “one-size-fits-one” lesson. However, I’ve dubbed it the “come-to-Jesus” panel. Repent of your sins, outside counsel, and sin no more.

The panel consisted of tech-company general counsel: Michael R. Haven, Senior Corporate Counsel, Legal Operations and Litigation, at NetApp; Olga Mack, Head of Legal at ClearSlide; Sharon Segev, VP of Corporate Development & General Counsel at Elo Touch Solutions; and Alexandra Sepulveda, Deputy General Counsel at Udemy.

Beyond the panelists’ tips and advice came a fair portion devoted to horror stories involving various errors by outside counsel. As Sepulveda commented, “How is it we are still having the same conversation about how to get personalized pragmatic advice?”

One running theme for these tech-savvy in-house counsel is that outside firms must use the billing and tech management software their clients use. It is the firm that should adapt and adopt, not the client.

Each in-house counsel said that excellence and understanding a client’s business will only get a firm in the door. In other words, it is expected, not exceptional. NetApp’s Michael Haven said the differentiators are:

  • new fee models, 
  • new technology to manage projects and 
  • proactive efforts to improve the attorney-client relationship. 

Haven drove the point home by commenting that “we want e-billing, so we can derive metrics. We would love to see firms giving us metrics about their own business, and showing us how they are working more efficiently for us.”

Each panelist said they want hours and bills updated daily, or within three days at the latest. Haven likes ViewaBill, which allows him to see daily expenses. Serengeti also received high marks. “I can’t endure the expense of waiting for a bill and then trying to mitigate the damage.” In other words, avoid surprises at all costs.

Haven says about 150 firms have adopted ViewaBill. “We see only our matters in real time. I can see what my firm billed me yesterday on a matter,” he said. Such software avoids nightmares that could cost a firm its slot on the outside counsel panel.

Olga Mack told of one patent matter that came in $500,000 over budget. As she went back through the bills, she discovered the law firm had increased its rates 15% on Jan. 1, in the middle of the project, and never informed her. The firm lost its place on her panel almost immediately.

In addition to billing issues, there are technology habits within law firms that drive tech company GCs crazy. An enormous pet peeve is attorneys and firms that won’t use electronic signatures. Nearly all tech firms strive for paperless offices, and demanding inked hard copies is SOOO 20th century. Emails with “FYI” in a subject line are equally hated. In-house counsel want timely information they can scan immediately, with email subheadings such as “EVENT,” “IMPORTANCE,” and “ACTION TO BE TAKEN.”

Tech GCs want quick-and-dirty answers in hours. They will wait longer for more complicated questions, but they want to know their attorney is on it, even if that just means acknowledging the email. A reply within 24 hours is ideal; three days is the outer limit. If attorneys can comply, in-house counsel promise they won’t stage “fire drills” for information they don’t need immediately.

Mack also said she immediately judges an attorney on the quality, both the writing and the content, of the attorney’s client alerts, as well as what’s on social media, such as LinkedIn and elsewhere. “I make an immediate judgment as to do I want to meet this person, and do I find this useful. I admit it is a bit like judging a book by its cover.” See her recent LinkedIn post on “The Art and Science of Being Useful to In-House Counsel.”

Additionally, more in-house counsel are now on Facebook and Twitter, not just LinkedIn.
Equally important are attorney bios, where in-house counsel typically start their research. 78% of general counsel use bios when choosing outside counsel.

Internally, in-house counsel share information about various attorneys and firms. A lot. Haven, who manages a global team of 80, says NetApp has “our own little version of Facebook that we use internally. We collaborate on how we are dealing with outside partners, and discuss the pros and cons of certain partners.”

Regarding maintaining a strong relationship with a client, these panelists said they are amazed more firms don’t ask for 360 reviews of how the firm is doing.

In summary, here are the takeaway themes:

Selection/Competition

It has to be spot-on experience to get hired. It used to be OK to have general experience and good service, but now that’s just a gating issue.

Communication

Unless someone is on vacation, responding within 24 hours is the low bar. GCs expect outside counsel to at least acknowledge that they got the email, even if they can’t do anything right now.

Get in Our Shoes and Stay There

If you are going to be a very effective advocate for the client, you need to understand what they do, how they make money, and what risks they face. Then your approach is more tailored to their goal.

Image [cc] Lucy Kimbell

I was watching Tim Corcoran’s video on “Useful Metrics & Benchmarking” and it made me think about how some of the metrics and benchmarking strategies apply to information professionals within a law firm environment. Are we, as managers of these professionals, giving them the right type of feedback that contributes to the overall strategy of the group, or are we asking them to hit benchmarks which do not mesh with the strategy? Tim makes a statement that “You can’t manage well what you can’t measure well.” Of course, Tim is talking about law department metrics, but I’m pretty sure the same concepts are applicable to law firm information professionals.

Here’s Tim Corcoran’s video. I have borrowed heavily from the 7:36 through 10:46 portion of the video (with Tim’s permission), and applied it to measuring the efforts of the Information Professional.


Metrics and Benchmarks for Law Firm Information Professionals
So what are Law Firm Information Professional metrics today? Let’s borrow and edit the list that Tim uses for law departments:

  • Costs (rates, hours, total cost)
  • Ratios (attorneys:InfoPros)
  • Write offs/downs
  • Repeat Work
  • Additional Costs / Adherence to Budget
  • Service (responsiveness, experience/specialty)

Let’s think about each of these and how we benchmark them and score our InfoPros against these metrics. I want to talk about these at a high level, and not get into the minutiae of what does and doesn’t apply on an individual level. (A rough numeric sketch of how a few of these might be computed follows the descriptions below.)

Costs – a pretty straightforward metric of finding out how much the firm is actually paying for this person.
Ratios – this could be attorney to InfoPro, or it could be Practice Group or Office to InfoPro. However you want to measure it, just make sure it is consistent across the staff so you don’t start getting apples to oranges comparisons.
Write offs/downs – Again, a pretty straightforward measurement of what they billed versus what we billed the clients… and what the clients actually paid.
Repeat Work – Are attorneys or groups returning and asking for more work from the individual? I know that many of us, my group included, set up a pool of InfoPros to handle work as it comes in, but I think most of us face the reality that attorneys become comfortable with certain people who do good work for them, or who are conveniently located near them.
Additional Costs / Adherence to Budget – Are the InfoPros aware of budget restraints for certain clients, and following those rules? If clients do not pay for online legal research, are they going in and using these tools anyway, or are they finding alternative methods to get information that will not cause additional write offs on the client invoice?
Service – How responsive are they? Do individual InfoPros have specialty knowledge of certain practice areas, or are they subject specialists with unique access to individual resources (usually allocated to them because of licensing issues or the overall cost of these specialty resources)?
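As promised above, here is a minimal sketch, in Python and with entirely hypothetical figures and field names, of how a few of these metrics might be computed for a single InfoPro over a review period:

# Hypothetical review-period data for one InfoPro (illustration only).
infopro = {
    "name": "Jane Smith",
    "annual_cost": 85000,        # salary plus benefits the firm actually pays
    "hours_billed": 1100,        # hours recorded to client matters
    "hours_collected": 950,      # billed hours the clients actually paid for
    "attorneys_supported": 120,  # for the attorneys:InfoPros ratio
    "research_spend": 18000,     # online research charges incurred
    "research_budget": 20000,    # research budget agreed with those clients
}

# Costs, ratios, write-offs/downs, and adherence to budget from the list above.
write_off_rate = 1 - infopro["hours_collected"] / infopro["hours_billed"]
cost_per_billed_hour = infopro["annual_cost"] / infopro["hours_billed"]
budget_used = infopro["research_spend"] / infopro["research_budget"]

print("Attorneys per InfoPro: {}:1".format(infopro["attorneys_supported"]))
print("Write-off/down rate:   {:.1%}".format(write_off_rate))
print("Cost per billed hour:  ${:,.2f}".format(cost_per_billed_hour))
print("Research budget used:  {:.0%}".format(budget_used))

None of these figures are real; the point is only that each metric on the list above reduces to a simple, repeatable calculation once the underlying data is captured.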

Mission and Strategy vs. Evaluation and Goals
Do the metrics we measure match our strategy? I’m going to guess that most of us have a mission statement (whether implied or official) that says something like this:

Our department serves the broad needs of our internal and external clients with a highly knowledgeable staff providing exemplary research results in an effective and efficient manner.

I’m sure there are a thousand different ways to say it, but effectively we have a mission of having high-quality people with great knowledge and research/analytical skills who get great results back to the client in a quick and cost-effective manner. If that is our mission and strategy, are we measuring our people on those items and encouraging them to fulfill the strategy, or are we rating them and giving bonuses/pay increases or promotions on other metrics? Are bonuses and raises tied only to length of time at the firm, or are there measurable items used to determine how these are allocated? Are there non-monetary incentives available to reward those who score well on certain benchmarks?

Scorecards
Can we take Tim’s idea of using scorecards to let the InfoPros know where they stand within the firm, and perhaps even against their peers? If the only time that your staff knows how well they are doing against the benchmarks by which they are being measured is at review time, then that’s bad management on your part and unfair to everyone.

In Tim’s example of scorecards, he lists a number of measurable subjects that can easily apply to Information Professionals, but in reality, the list needs to be modified to fit the benchmarks you are asking them to hit. The scorecard should be presented both as a constantly updated piece of information and as a snapshot of how well the person is doing in a set period of time (monthly, quarterly, etc.). A rough sketch of what such a snapshot might look like follows the list below.

Again, borrowing from Tim’s list, let’s think about what InfoPros would see on their scorecards, and where they stand in relationship to those benchmarks.

  • Practice Summary – What have they worked on? Bullet points of projects assigned and completed. Perhaps this piece of the scorecard is kept by the InfoPros themselves, or is a joint effort. 
  • Top Billed Matters/Clients – What matters or clients are InfoPros constantly asked to cover? Perhaps expand that to practice areas if they are subject experts.
  • Spend by Practice Group – What are the costs associated to the InfoPro when they are asked to perform a task? Look at time spent, resources used and charged to clients. Don’t forget to measure things like training or client development tasks that may not show up on the client invoice.
  • Tasks or Hours Worked vs. Peers – I know that some of us don’t like this type of competition within the group, but if we are going to allocate bonuses or future pay based on how well they compare to others within the group, it may be fair to expose that information throughout the year so that they know where they stand.
  • Client satisfaction – what type of feedback are they given from those requesting their assistance? Can you get measurable feedback (1 to 5 scale) from internal clients? Perhaps your reference 
  • Adherence to Budget – are they following guidelines, or are there additional expenses incurred when using certain InfoPros?
  • Time Entry Turnaround – Is there a lag between time worked for a client, and the time entered for that client?

Metrics vs. Gut Feeling
Tim doesn’t get into this area directly, but I think it is a logical step in this conversation, as most of us may be very uncomfortable using these types of metrics to judge people who report to us on a daily basis. It would be much easier to simply give feedback and measure performance based on what we experience with the Information Professional on a personal level. I’m not saying that personal interaction and experiences should not be used in evaluations, but they should not be the only measurement. There simply have to be benchmarks that have clearly defined measurements and that are transparent to those being measured as to where they stand within the benchmark.

Where the gut feelings and individual experience come into play is in putting a narrative to the metrics. Metrics give us data, but may not tell a complete story. One prime example would be that some individuals may have high write-offs simply because they do a lot of work for attorneys who won’t pass along their costs to the client. This is where the periodic reviews of the scorecard come in handy. By setting the metrics and monitoring them, you begin to understand the story behind the data at an earlier point in time. This will allow you and the InfoPro to take corrective actions, either by correcting internal behavior or client behavior.

I’ll bring back Tim’s comment that “You can’t manage well what you can’t measure well.” Establishing metrics and benchmarks based on the overall strategy of the department and the firm will help the InfoPros work toward that strategy, and will allow you to measure how well they are doing in achieving that strategy, as well as how well your overall department is working toward promoting the firm’s overall strategy.

Benchmarking the Department
I’m going to borrow one last thing from Tim, from around the 10:30 mark in his video, where he talks about the law department leveraging the scorecards as an integral part of the overall business needs of the company, and apply it to the efforts and mission of the Information Professional department. In establishing clear benchmarks that drive the overall strategy of the firm, you can leverage this process and become so closely linked to the law firm’s strategy that your group becomes a competitive advantage for the firm. Firms are always looking for a cost advantage, or a unique skill set for their marketing to clients. How about leveraging your department to help the firm get to clients faster, overcome obstacles more quickly, and manage information and knowledge more effectively so that the firm operates more effectively in its markets? Tim concludes this section on establishing metrics that show your impact on the overall business with a thoughtful statement: “Think about that. A metric that helps us understand the impact we’re having on the business throughput can be pretty powerful.”

Thanks again, Tim, for letting me morph your legal department concepts for metrics and benchmarking onto the world of legal information professionals.

As of today, Jones McClure Publishing is simply O’Connor’s. For those of us in Texas, this won’t be a huge surprise or much of a change in the way we think of the company’s brand. Most of us already call everything they publish after Judge Michol O’Connor, who started the company nearly twenty-five years ago and is still involved in it, although her son, Baird Craft, is now the company’s President. The biggest deal will probably be sending a note to the Accounting Department letting them know to change the vendor name in the accounts payable system, along with some changes to the website address (now oconnors.com) and having to update my email contacts.

Now, I could have misremembered the story of how Judge O’Connor came up with the “Jones McClure” name for the company, but it was definitely a made-up name, since there were no Joneses or McClures involved in the creation of the company. At one of the AALL annual conference dinners, Judge O’Connor got up and told the story, and I seem to remember that the company was named after a couple of the delegates from the Second Consultation and Convention where Texas sought independence from Mexico. Delegates Oliver Jones and Bartlett D. McClure were present during the April 1833 convention, and thus the name Jones McClure was created. I see that Charles Baird was also a delegate, so I assume not only was the company named after some of the delegates, but also after the current President of the company. (Baird, forgive me if I am jumping to too much of a conclusion here, as there may have been a couple of glasses of wine consumed during the story.)

One of my favorite people, Jason Wilson, is the Vice-President of O’Connor’s, and I asked him for a story to tell about the name change. Here is what he had to say:

For years, whenever I would meet someone cold—say an attorney or librarian—and told them I was with Jones McClure Publishing, I would always get a polite nod. But when I followed it up with, “We publish O’Connor’s,” the reactions were always the same: “Oh, I love you guys! I have all your books.” This is an experience each of us at the company has had. So, when we moved into the digital space with O’Connor’s Online and then started planning for the new web store, we decided to retire Jones McClure Publishing in favor of the brand we’ve worked so hard to create and something our customers recognize immediately as quality products and services. So now we are just O’Connor’s.

Those of us who rely upon the twenty-five titles published by O’Connor’s, or its new online resources, and the attorneys we support who enjoy the detailed information provided within those titles in an easily understood writing style, applaud the official name change to better recognize Judge O’Connor’s vision of creating legal books, minus the legalese.

[Ed. Note: Updated at 2:02 to include Wilson quote. – GL]

When I graduated from high school, I knew three languages. I was fluent in two and had a fairly good working knowledge of the third. Today, my second and third languages are a bit rusty, but I can get by when spoken to or when making an inquiry. Yet, I feel compelled to learn another bunch of languages. I fear that if I don’t, I won’t be able to talk to my children, let alone my future grandchildren; I won’t be able to maintain my job or advance in my career; and I most definitely will lose any trace of being a worldly individual if I can’t “speak” Ruby, Python, SQL, Java or any of the other languages Mashable tells me I need to learn right now. I’m not kidding: I have looked at Women Who Code, Lynda.com and other sites, unsure of where to start and how to deal with the overwhelming sense of coding disability. I feel pressure to be something I am not in response to the changing market. Apparently, even long after high school, peer pressure doesn’t go away; everyone is coding and I need to too!

For me, coding is symptomatic of a bigger fear. A fear that, until recently, I thought I was shielded from in some respects, being a law firm marketing, business development, and CI person. My real fear is Big Data. The stuff collected passively in the background, sometimes for a reason, other times just because the technology is there to collect it. Law firms are jumping on the Big Data/BI train, a train travelling so fast that Dennis Hopper would be proud. True, I did collaborate on a book some three years ago with the Ark Group on business intelligence for law firms, and in that publication I did write about data and how firms can increase efficiency and provide better value to clients using scraped data and robust analytic techniques. All of which still stands. I do believe that there is power in the data we collect, and even more power to be harnessed if we can find meaningful ways to make sense of that data.
I have no illusions of becoming a data scientist, nor do I suggest that all law firm marketers need to be comfortable dabbling in data, but data, like “information” circa 2005, has become ubiquitous, and the legal industry is far from immune. In August, DataFloq, the self-described “One Stop Shop for Big Data,” published a post on how a variety of law firms are tapping into the data revolution. You can read it here: How Big Data Can Improve the Practice of Law. Companies and thought leaders blogged about on 3 Geeks in the past, such as Lex Machina or David Perla of Bloomberg Law, openly discuss the impact data has, will have and should have on both the practice and business of law. Some firms, like Littler Mendelson, have even created roles for national directors of data analytics in an effort to woo general counsel.
Big Data, like the coding languages and algorithms needed to support extracting insights from the data, is real and not going away. We live, now and forever more, in a data-driven world where we have the capacity, with relative ease, to know, for example, that a clause we are putting into an agreement is used 5% of the time in an industry, 76.6% of the time in a type of contract, and .9% of the time in a particular jurisdiction. The same intensity of analytics can be applied to the kind of coffee we drink, the clothes we buy, or the litigation our firms undertake and its probable success rate. Statistical probability will soon run the world. That’s what scares me. I’m all for data-driven hypotheses and fact-based learnings. But I think there is something to be said for the human element – the syntax, the colour, the qualitative nature and value of being graciously flawed and human. In law firms there are vast amounts of unstructured data; the art and craft of lawyering is often about interpretation and word nuance. 
There have been cases won and lost on the placement of commas in an agreement, beautiful examples of the dynamic nature of language that no machine learning, no data as I see it, will ever be able to understand. We need data, and the ever-increasing pace at which we are expected to respond to client needs demands that we use the data available to us in new ways, requiring us to speak new languages. I am not naïve to this. I am scared, though, that as we wade through the data, we will drown in a pool of objective facts and figures. I liken it to knowing the score at a sporting event without knowing what happened: player and team stats will go up and down, predictions about winners and losers can be made, but the great plays, the epic bat flips and the amazing free shots would be lost. Imagine sports coverage as score keeping without the colourful narrative. This is what I fear will happen if we spend too much time letting Big Data and analytics aid and even replace our subjective discussions and decision making. Not everything can or should be reduced to a series of numbers and equations. There is value in the narrative, nuggets of gold in the telling of the story. 
The idea of Big Data is hard to resist, like the shiny glint of a smoking silver bullet in a haystack (metaphors mixed for emphasis). We like Big Data because it’s clean and seemingly perfect. It lets human imperfection off the hook. But we know from experience and the history of humanity that we are not perfect, there is rarely a silver bullet, and statistics, though based in numbers, can often be skewed or misinterpreted. So humour my fears as we propel ourselves forward at a rapid pace on the Big Data train, in law firms and in life, and let’s add some colour to our use of Big Data. 

Tips on coding lessons welcome and accepted. 

90% of people don’t know how to use CTRL+F to find a word in a document or web page. Instead, they search the old-fashioned way, manually skimming the text.

This preponderance of ignorance is stupefying to me. But I want to be very clear that I am using the word “ignorance” in its most neutral form–i.e., lack of information or knowledge–rather than to convey any judgment or pejorative connotation. Ignorance is unavoidable. The only settled part of the debate as to who was the last person to know everything is that the person is long dead.

The curse of ignorance is that you don’t know what you don’t know. Previous posts have touched on this obstacle of metacognition, and our ignorance of our own ignorance. But there is another side of the coin: the curse of knowledge. The curse of knowledge is that once we know something, it is really hard to imagine not knowing it. This incapacity undermines communication and, especially, instruction because of the lack of shared information and assumptions. If I, for example, were going to put together some tips on internet research, I doubt that, absent the article cited above, I would have thought to include CTRL+F. I would have assumed that most everyone already knew it. I would have been wrong.

Indeed, I am a posterchild for both curses. I’ve told the story many times that my inflection point in using technology involved a client discovering that I printed and scanned to create PDFs. But how was I supposed to know what I didn’t know–there’s an app for that–without already knowing it? Yet, several years later, I delegated a task where one of the steps involved converting a large volume of documents into PDF. I was shocked (shocked!!!) to find that the person was spending hours printing and scanning. I assumed that because I knew how to convert a file to PDF, they knew it, too, despite the fact that I had been Exhibit A that this was not knowledge everyone possessed.

Thus, whether we know something or not, we too often assume that others know it. The tech-averse frequently fall into the trap of thinking the tech-comfortable know everything there is to know about tech. And those who know tech sometimes assume that others do, too. Both curses are reasons that competence-based assessments are such excellent training tools. Figuring out what people do and do not know is superior to speculation. But assessments alone are not enough. The primary objective of identifying gaps is to tailor the training to fill them. In this regard, I have been an abject failure in speaking to law school classes.

I speak to law school classes for free. I provide them a copy of my Legal Technology Assessment (“LTA”) for free. I then provide a copy of the LTA Training Edition (which pairs the competence-based assessment with synchronous, active learning) for free. Finally, they can retake the LTA (for free). Not only do they have the opportunity to address identified deficiencies in their skills, but a qualifying score is also something they can add to the bottom of their resume to replace the meaningless “proficient in MS Office.” After speaking to hundreds upon hundreds of students, I’ve had exactly zero take me up on my full offer.

The class I wrote about last week is representative. Twelve students took the LTA because it was a class assignment. The results (below) were bad, as usual. I spoke to them for 40 minutes and offered the Training Edition to anyone who wanted it. Only two of the twelve emailed to ask for the Training Edition. And, if history is any guide, neither of them will return to take and pass the LTA.

In approaching these classes, my idea is that taking the LTA beforehand will puncture delusions of adequacy. We won’t get bogged down in an abstract conversation about how fluent they are with technology. 32% correct on some fairly simple Word tasks leaves little room for debate:

Pretty bad but not unexpected. As I try to communicate to them, it is not their fault. Everyone just assumes that they know things that they had no way of knowing absent training. They are not stupid, lazy, or untalented. They are smart, hard working, and full of promise. They simply lack training in one particular area that has the potential to make their lives better.

On the issue of their immediate future, I point out that their most recent predecessors are miserable human beings. In fact, the students are auditioning for the unhappiest job in America.

I then try to persuade them that technology plays a role in this dissatisfaction. Before technology takes our jobs, it can make them easier. At least, in theory. The technology has to actually be good, and we have to use it correctly. Otherwise, it is a source of frustration rather than leverage. Technology initially substitutes for labor at the most severe pain points. Machines can reduce the hours spent reviewing, proofing, conforming, collating, updating, and otherwise fiddling around the edges of the substantive legal work. Using technology well can improve both speed and accuracy, as I try to convey in the video below, and thereby alleviate a fair amount of the agony associated with being a young lawyer:

My contention is that having the right technology and learning to use it correctly will permit legal professionals to reduce the amount of their finite time and attention that is directed towards misery-inducing busywork. I’ve added to my spiel some recent confirmation of this theory from the cover story of last month’s American Lawyer. AmLaw’s annual associate satisfaction survey found that technology, including technology training, has a material effect on satisfaction:

One unsung key to retention could be technology. We found that overall satisfaction of midlevel associates, as measured on our survey, was strongly statistically correlated to their law firm’s scores on four questions involving technology. (The questions ask respondents to rate their firms’ technology generally, as well as technology training, support and use of technology in meeting client needs.) 

….In fact, eight of the top 11 firms in the national satisfaction rankings also were at the top on the technology questions. Conversely, many of the firms that occupy the bottom of the national satisfaction rankings also place low in the technology survey.

The AmLaw conclusions comport with an earlier study I cite from the National Conference of Bar Examiners that surveyed recent law graduates about the most important skills for young lawyers. Out of 30 skills, using basic office technology ranked 6th:

Seeing basic office technology ahead of legal reasoning is a bit jarring, even for me. But the incongruence is heightened by the fact that, unlike the rest of the skills listed above, using technology is not taught in most law schools (or, generally, in most colleges or high schools). 
Then again, the idea that law school is not geared towards turning out practice-ready lawyers is well-worn territory. As discussed in Mark’s previous post, a LexisNexis survey found that “95% of hiring partners and associates believe that recently graduated law students lack key practical skills.” The dissatisfaction of associates is mirrored (and maybe partially driven) by the dissatisfaction with associates. This is not just abstract griping. Anecdotally, partners report writing off massive amounts of associate time for perceived inefficiency. These claims appear to be borne out by the Georgetown Law and Peer Monitor realization data (which I dug into here):
 

So that’s my story. You’re great. You just haven’t gotten the training you need in technology. This training will benefit you directly in the form of improved satisfaction and performance. Here it is, for free. Followed by crickets.

I’m not quite sure how to interpret my utter inability to make any progress with these students (thankfully, the people who actually pay me are considerably more engaged). Am I, yet again, suffering from the curse of knowledge? Is there some assumption that I am making about these students that is impeding communication? As I try to put myself in their shoes, I increasingly come to the conclusion that there isn’t anything I can say.

In general, it is challenging to get anyone to use their precious spare time to buckle down and really learn something new, even if they are persuaded that they should. The last time I decided to tackle a new area of study, I felt compelled to pay for online courses that included tests and graded assignments. I needed real stakes and real structure to have the discipline to systematically engage with the material (all of which I could have found for free on the internet). Here, the students took the LTA as a diagnostic because it was an assignment, and I have no doubt that they would have trained for and passed the LTA if that were assigned. As a law student, I suspect I would have behaved much the same way (I know my scores would have been just as bad).

Stakes and structure matter. These students have had both all their life. From speaking to them, I get the sense that they believe this will continue. They believe that law school is designed to prepare them for law practice. They believe that whatever they do not know upon leaving law school, their firms will teach them. And, more than anything, they believe that they do not need to worry about this tech stuff because they will have secretaries to do it for them. More on that last point in my next post.

For me, the primary myth of the digital native is that, by virtue of their age, they already know what they need to know with respect to using technology. The corollary myth is that whatever they do not already know is not worth learning. But there exists a softer formulation that hits much closer to the truth. Rather than automatically knowing what they need to know with respect to technology, we (and they) tend to believe that people who grew up with technology have the capacity to learn it and will do so when the situation requires. It’s that last part, however, where there continues to be a disconnect.

The older generations seem to think that the situation will somehow mandate the acquisition of new skills. In this, they are not totally wrong. Most people, including the older generations themselves (with their fancy new iPhones and Surface Pro 4’s), learn what they need to learn to get by with technology. Some people learn more. But most satisfy the bare threshold of survival. This results in massive underutilization of extant technology. And study after study has shown that younger generations are the same as their predecessors in this regard–i.e., learn the minimum to get by.

The younger generations, on the other hand, think that they will quite literally be required to learn it. Someone in a position of authority is going to lay out a curriculum, objectives, and a timeline. At that point, they will do what they’ve always done: work hard to meet the expectations set for them. A few will fall short. Some will excel. But most will quite effectively do what they are asked to do. I, for one, think we ought to oblige them.

At some point, I will dig deep into my data. But, on average, people (lawyers and staff) in practice outperform the kids in school on the LTA. In part, this reflects a general raising of the baseline as the skill set required for bare survival expands upon entering the professional workforce. But there is still significant interorganizational and intraorganizational variance.

The variance between organizations appears to be entirely attributable to mandatory training. Different organizations have different attitudes towards training (is it available, is it mandatory, does it include competence-based assessments) that, unsurprisingly, have an appreciable impact on how well trained their employees are. The variance within organizations stems from outside training. Frequently, I learn that the person who outpaced her colleagues on a diagnostic assessment had some previous career that demanded a more robust technology skill set. Sometimes, I meet people who, like me, had some sort of rude awakening and decided they did not like being embarrassed. Every now and then, I encounter a true tech geek (meant with love and affection) who happens to also work in law. My own data reinforces previous empirical findings that, rather than age, facility with technology is a product of “breadth of use, experience, self-efficacy and education.”

Technology training is important for everyone, including the digital natives. I just wish I could convince them of that.

++++++++++++++++++++++++++++++++++++

Casey Flaherty is the founder of Procertas. He is a lawyer, consultant, writer, and speaker focused on achieving the right legal outcomes with the right people doing the right work the right way at the right price. Casey created the Service Delivery Review (f.k.a., the Legal Tech Audit), a strategic-sourcing tool that drives deeper supplier relationships by facilitating structured dialogue between law firms and clients. There is more than enough slack in the legal market for clients to get higher quality work at lower cost while law firms increase profits via improved realizations.
The premise of the Service Delivery Review is that with people and pricing in place, rigorous collaboration on process offers the real levers to drive continuous improvement. Proper collaboration means involving nontraditional stakeholders. A prime example is addressing the need for more training on existing technology. One obstacle is that traditional technology training methods are terrible. Competence-based assessments paired with synchronous, active learning offer a better path forward. Following these principles, Casey created the Legal Technology Assessment platform to reduce total training time, enhance training effectiveness, and deliver benchmarked results.

Connect with Casey on LinkedIn or follow him on Twitter (@DCaseyF).

I am disappointed every time I guest lecture a law school class.

Because anecdote is often more compelling than data, I’ll start with an example from two weeks ago. An adjunct professor who teaches one of those great law school classes with cool titles like Tomorrow’s Lawyer had his students take the Word module of my Legal Tech Assessment. They performed exactly as you (well, I) would expect of an untrained group: poorly. Their anonymized scores are below:

I’ll explain the scoring in a subsequent post. But, for now, just focus on the accuracy. On average, students were able to correctly complete less than a third of the following tasks in a live Word document:

  • Accept/Turn-off changes and comments
  • Cut & Paste
  • Replace text
  • Format text (font, margin)
  • Footers
  • Insert hyperlink
  • Apply/Modify style
  • Insert/Update cross-references
  • Insert page break
  • Insert non-breaking space
  • Clean document properties
  • Create comparison document

To put it in anecdotal context (though these numbers are representative of the larger data set), around the same time the class was taking the assessment, a similarly-sized pilot group at a large law firm achieved an average accuracy of 68% while a group of lawyers and staff who had just been through LTA-specific training achieved an average accuracy of 95%. Training matters.

Training matters even for so-called digital natives like the law school students I’ve tested. Acquiring a Twitter account in utero does not engender natural facility with technological tools because most technological tools are not intuitive.

My wife thinks my sons are geniuses. One piece of evidence she submits in their favor is how well they use an iPad. I agree with her that the fact that my 1.5-year-old can use an iPad is a testament to genius. But not his. For me, it is a testament to the genius of the designers at Apple, who created a device so intuitive that a 1.5-year-old can use it. The kid touches a picture, it moves. Congratulations to him!

While we can discuss design principles that would move work software in the direction of our consumer experience, I don’t actually believe that individual apps are the best basis of comparison for complex business software. Rather than thinking of Word as an app, we should think of it as a bundle of apps. Each of the icons on the ribbon is a solution to a particular problem. The challenge is that there are so many icons (just as there are so many apps). While design principles can bridge some of the gap, there remains a tradeoff between depth and intuitiveness. Most of us therefore only become comfortable with a few functions while ignoring the rest (just as most of us use a limited number of mostly single-purpose apps):

In this, the younger generation is no different than their predecessors. Survival is the threshold they achieve with most of their technology. That their milieu demands facility with a few more social apps sooner does not change the fundamental fact that using technology properly is a collection of acquired skills, not some innate talent that Lamarckian evolution bestowed on those under the age of 30. Expecting them to automatically know how to use complicated technology because of their familiarity with basic technology is like expecting them to automatically know how to prepare a gourmet meal because they know how to cook a Hot Pocket in the microwave.

Digital native first entered the popular lexicon in 2001. In an article entitled Digital Natives, Digital Immigrants, education consultant Marc Prensky explained that “Our students today are all ‘native speakers’ of the digital language of computers, video games and the Internet.” In many respects, the article looks terrible 14 years later. Prensky, for example, speaks glowingly of Digital Natives’ ability to multi-task, despite the fact that contemporary research (subsequently extended and validated) had already demonstrated the opposite. But just because it suffers from some poor assumptions and hyperbole (Prensky claims we’ve already arrived at a singularity) does not mean the thesis needs to be totally rejected.

You can reconcile (i) a belief that, on average, younger generations are more accustomed to technology (and the rapid evolution thereof) than their predecessors with (ii) the recognition that this comfort does not automatically translate into proficiency. Indeed, in his seminal article, Prensky talks about the need to train a bunch of Digital Natives on a CAD program that “contained hundreds of new buttons, options and approaches.” Prensky takes pride in efforts to gamify the training and “create a series of graded tasks into which the skills to be learned were embedded.” For Prensky, this approach translated training into the language of the Digital Native. He reports that the main impediment was the reluctance of the Digital Immigrant professors to adjust their pedagogical approach. But even Prensky realized that the Digital Natives still needed training (competence-based assessments paired with synchronous, active learning).

Prensky’s article was anecdotal, not empirical. It did not address the fact that most of what Digital Natives did with technology was related to consumption, not application. It also ignored the inconvenient fact that it was Digital Immigrants who had invented the technologies on which the Digital Natives relied. And it introduced a term that conflates general familiarity with specific facility. A decade later, the London School of Economics would publish a paper entitled Digital natives: where is the evidence? The paper concluded that there was no real evidence of fundamental differences between generations. What differences existed were best explained by “breadth of use, experience, self-efficacy and education.” Or as another academic paper would find:

Young people’s engagements with digital technologies are varied and often unspectacular – in stark contrast to popular portrayals of the digital native. As such, the paper highlights a misplaced technological and biological determinism that underpins current portrayals of children, young people and digital technology.

Subsequently, the Organisation for Economic Co-operation and Development (OECD) ran an international study. Instead of asking people about their general comfort with technology, the study asked them to actually solve basic problems using technology (again, a competence-based assessment). Millennials did not fare well. In the words of The Washington Post, “U.S. millennials performed horribly.” Or, as Fortune summarized it:

We hear about the superior tech savvy of people born after 1980 so often that we tend to assume it must be true. But is it?

…. It turns out, says a new report, that Millennials in the U.S. fall short when it comes to the skills employers want most: literacy (including the ability to follow simple instructions), practical math, and — hold on to your hat — a category called “problem-solving in technology-rich environments.”

The advocacy group Change the Equation put out a related report about the High Cost of Low Technology Skills, which included the following graphics:

Digital natives are not at fault for the fact that comfort does not automatically translate into skill. The myths surrounding the digital native, however, have done them a disservice. The belief that they already know everything about technology has convinced us and them that they do not need training in technology. As discussed in the last post, the notion that they are already tech savvy introduces barriers associated with metacognition and mindset.

Metacognition is thinking about thinking. The related concept of metaignorance is ignorance about our own ignorance. We don’t know what we don’t know. Not recognizing how incompetent we are results in unfounded confidence in our own skill level. For this reason, the people most in need of training are the least likely to recognize it. Moreover, confidence begets ego. Those who have a high opinion of themselves are the least inclined to admit facts that undermine their self image.

The problems of ego are compounded by mindset. People with a fixed mindset believe that cognitive traits are stable. You are either smart or you aren’t, in the same way that you are either tall or you aren’t. You are born with it, or you are not. To the extent tech acumen is treated as a product of age, it will be approached with a fixed mindset, which means that people will try to hide their deficiencies rather than recognize and remedy them.

Because we take a fixed mindset approach, we tend to act as if neither older professionals nor younger professionals will benefit from training. The former because they lack the capacity. The latter because they lack the need. We are wrong on both counts.

I am not trying to pick on anyone. I understand why both older and younger generations buy into the myths surrounding digital natives. But they are myths with pernicious consequences. Many of the decisions about training (or lack thereof) made at law schools and legal employers rest on an illusory foundation. It isn’t always what we don’t know that gives us trouble, it’s often what we know that ain’t so. 

Part 2 (originally posted separately)

90% of people don’t know how to use CTRL+F to find a word in a document or web page. Instead, they search the old-fashioned way, manually skimming the text.

This preponderance of ignorance is stupefying to me. But I want to be very clear that I am using the word “ignorance” in its most neutral form–i.e., lack of information or knowledge–rather than to convey any judgment or pejorative connotation. Ignorance is unavoidable. The only settled part of the debate as to who was the last person to know everything is that the person is long dead.

The curse of ignorance is that you don’t know what you don’t know. Previous posts have touched on this obstacle of metacognition, and our ignorance of our own ignorance. But there is another side of the coin: the curse of knowledge. The curse of knowledge is that once we know something, it is really hard to imagine not knowing it. This incapacity undermines communication and, especially, instruction because of the lack of shared information and assumptions. If I, for example, were going to put together some tips on internet research, I doubt that, absent the article cited above, I would have thought to include CTRL+F. I would have assumed that most everyone already knew it. I would have been wrong.

Indeed, I am a posterchild for both curses. I’ve told the story many times that my inflection point in using technology involved a client discovering that I printed and scanned to create PDFs. But how was I supposed to know what I didn’t know–there’s an app for that–without already knowing it? Yet, several years later, I delegated a task where one of the steps involved converting a large volume of documents into PDF. I was shocked (shocked!!!) to find that the person was spending hours scanning and printing. I assumed that because I knew how to convert a file to PDF, they knew it, too, despite the fact that I had been Exhibit A that this was not knowledge everyone possessed.
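To make the “there’s an app for that” point concrete, here is a minimal, purely illustrative sketch (not from the original anecdote) of how a large batch of Word documents could be converted to PDF in one pass rather than printed and scanned. It assumes LibreOffice is installed and its soffice command is on the PATH; Acrobat, Word itself, or a document management system would each offer their own equivalent, and the folder names below are hypothetical.

```python
# Purely illustrative sketch: batch-convert Word files to PDF using
# LibreOffice's headless converter. Assumes the "soffice" binary is
# installed and on the PATH; folder names are hypothetical.
import subprocess
from pathlib import Path

def convert_folder_to_pdf(src_dir: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for doc in sorted(Path(src_dir).glob("*.docx")):
        # Writes a PDF with the same base name into out_dir for each document.
        subprocess.run(
            ["soffice", "--headless", "--convert-to", "pdf",
             "--outdir", str(out), str(doc)],
            check=True,
        )

if __name__ == "__main__":
    convert_folder_to_pdf("documents_to_convert", "converted_pdfs")
```

However the conversion is done, the point stands: a task that consumed hours of printing and scanning becomes, with the right tool and a little training, a matter of minutes.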

Thus, whether we know something or not, we too often assume that others know it. The tech-averse frequently fall into the trap of thinking the tech-comfortable know everything there is to know about tech (i.e., they can’t tell the difference between someone who knows slightly more than they do and someone who knows infinitely more). And those who know tech sometimes assume that their tech knowledge is widely shared.

Both curses are reasons why competence-based assessments are such excellent training tools. Figuring out what people do and do not know is superior to speculation. But assessments alone are not enough. The primary objective of identifying gaps is to tailor the training to fill them. In this regard, I have been an abject failure in speaking to law school classes.

I speak to law school classes for free. I provide them a copy of my Legal Technology Assessment (“LTA”) for free. I then provide a copy of the LTA Training Edition (which pairs the competence-based assessment with synchronous, active learning) for free. Finally, they can retake the LTA (for free). Not only do they have the opportunity to address identified deficiencies in their skill set, but a qualifying score is also something they can add to the bottom of their resume to replace the meaningless “proficient in MS Office.” After speaking to hundreds upon hundreds of students, I’ve had exactly zero take me up on my full offer.

The class I wrote about last week is representative. Twelve students took the LTA because it was a class assignment. The results (below) were bad, as usual. I spoke to them for 40 minutes and offered the Training Edition to anyone who wanted it. Only two of the twelve emailed to ask for the Training Edition. And, if history is any guide, neither of them will return to take and pass the LTA.

In approaching these classes, my idea is that taking the LTA beforehand will puncture delusions of adequacy. We won’t get bogged down in an abstract conversation about how fluent they are with technology. 32% correct on some fairly simple Word tasks leaves little room for debate:

Pretty bad, but not unexpected. As I try to communicate to them, it is not their fault. Everyone just assumes that they know things they had no way of knowing absent training. They are not stupid, lazy, or untalented. They are smart, hardworking, and full of promise. They simply lack training in one particular area that has the potential to make their lives better.

On the issue of their immediate future, I point out that their most recent predecessors are miserable human beings. In fact, the students are auditioning for the unhappiest job in America.

I then try to persuade them that technology plays a role in this dissatisfaction. Before technology takes our jobs, it can make them easier. At least, in theory. The technology has to actually be good, and we have to use it correctly. Otherwise, it is a source of frustration rather than leverage. Technology initially substitutes for labor at the most severe pain points. Machines can reduce the hours spent reviewing, proofing, conforming, collating, updating, and otherwise fiddling around the edges of the substantive legal work. Using technology well can improve both speed and accuracy, as I try to convey in the video below, and thereby alleviate a fair amount of the agony associated with being a young lawyer:

My contention is that having the right technology and learning to use it correctly will permit legal professionals to reduce the amount of their finite time and attention that is directed towards misery-inducing busywork. I’ve added to my spiel some recent confirmation of this theory from the cover story of last month’s American Lawyer. AmLaw’s annual associate satisfaction survey found that technology, including technology training, has a material effect on satisfaction:

One unsung key to retention could be technology. We found that overall satisfaction of midlevel associates, as measured on our survey, was strongly statistically correlated to their law firm’s scores on four questions involving technology. (The questions ask respondents to rate their firms’ technology generally, as well as technology training, support and use of technology in meeting client needs.) 

….In fact, eight of the top 11 firms in the national satisfaction rankings also were at the top on the technology questions. Conversely, many of the firms that occupy the bottom of the national satisfaction rankings also place low in the technology survey.

The AmLaw conclusions comport with an earlier study I cite from the National Conference of Bar Examiners, which surveyed recent law graduates about the most important skills for young lawyers. Out of 30 skills, using basic office technology ranked 6th:

Seeing basic office technology ahead of legal reasoning is a bit jarring, even for me. But the incongruence is heightened by the fact that, unlike the rest of the skills listed above, using technology is not taught in most law schools (or, generally, in most colleges or high schools). 
Then again, the idea that law school is not geared towards turning out practice-ready lawyers is well-worn territory. As discussed in Mark’s previous post, a LexisNexis survey found that “95% of hiring partners and associates believe that recently graduated law students lack key practical skills.” The dissatisfaction of associates is mirrored (and perhaps partially driven) by dissatisfaction with associates. This is not just abstract griping. Anecdotally, partners report writing off massive amounts of associate time for perceived inefficiency. These claims appear to be borne out by the Georgetown Law and Peer Monitor realization data (which I dug into here):

So that’s my story. You’re great. You just haven’t gotten the training you need in technology. This training will benefit you directly in the form of improved satisfaction and performance. Here it is, for free. Followed by crickets.

I’m not quite sure how to interpret my utter inability to make any progress with these students (thankfully, the people who actually pay me are considerably more engaged). Am I, yet again, suffering from the curse of knowledge? Is there some assumption I am making about these students that is impeding communication? As I try to put myself in their shoes, I increasingly come to the conclusion that there isn’t anything I can say.

In general, it is challenging to get anyone to use their precious spare time to buckle down and really learn something new, even if they are persuaded that they should. The last time I decided to tackle a new area of study, I felt compelled to pay for online courses that included tests and graded assignments. I needed real stakes and real structure to have the discipline to systematically engage with the material (all of which I could have found for free on the internet). Here, the students took the LTA as a diagnostic because it was an assignment, and I have no doubt that they would have trained for and passed the LTA if that had also been assigned. As a law student, I suspect I would have behaved much the same way (I know my scores would have been just as bad).

Stakes and structure matter. These students have had both all their life. From speaking to them, I get the sense that they believe this will continue. They believe that law school is designed to prepare them for law practice. They believe that whatever they do not know upon leaving law school, their firms will teach them. And, more than anything, they believe that they do not need to worry about this tech stuff because they will have secretaries to do it for them. More on that last point in my next post.

For me, the primary myth of the digital native is that, by virtue of their age, they already know what they need to know with respect to using technology. The corollary myth is that whatever they do not already know is not worth learning. But there exists a softer formulation that hits much closer to the truth. Rather than believing that digital natives automatically know what they need to know about technology, we (and they) tend to believe that people who grew up with technology have the capacity to learn it and will do so when the situation requires.

The older generations seem to think that the situation will somehow mandate the acquisition of new skills. In this, they are not totally wrong. Most people, including the older generations themselves (with their fancy new iPhones and Surface Pro 4s), learn what they need to learn to get by with technology. Some people learn more. But most satisfy the bare threshold of survival. This results in massive underutilization of extant technology. And study after study has shown that younger generations are the same as their predecessors in this regard–i.e., they learn the minimum to get by.

The younger generations, on the other hand, think that they will quite literally be required to learn it. Someone in a position of authority is going to lay out a curriculum, objectives, and a timeline. At that point, they will do what they’ve always done: work hard to meet the expectations set for them. A few will fall short. Some will excel. But most will quite effectively do what they are asked to do. I, for one, think we ought to oblige them.

At some point, I will dig deep into my data. But, on average, people (lawyers and staff) in practice outperform the kids in school on the LTA. In part, this reflects a general raising of the baseline as the skill set required for bare survival expands upon entering the professional workforce. But there is still significant interorganizational and intraorganizational variance.

The variance between organizations appears to be entirely attributable to mandatory training. Different organizations have different attitudes towards training (is it available, is it mandatory, does it include competence-based assessments) that, unsurprisingly, have an appreciable impact on how well trained their employees are.

The variance within organizations stems from outside training. Frequently, I learn that the person who well outpaced her colleagues on a diagnostic assessment had some previous career that demanded a more robust technology skill set. Sometimes, I meet people who, like me, had some sort of rude awakening and decided they did not like being embarrassed. Every now and then, I encounter a true tech geek (meant with love and affection) who happens to also work in law.

My own data reinforces previous empirical findings that facility with technology is a product not of age but of “breadth of use, experience, self-efficacy and education.” Technology training is important for everyone, including the digital natives. I just wish I could convince them of that.

++++++++++++++++++++++++++++++++++++

Casey Flaherty is the founder of Procertas. He is a lawyer, consultant, writer, and speaker focused on achieving the right legal outcomes with the right people doing the right work the right way at the right price. Casey created the Service Delivery Review (f.k.a., the Legal Tech Audit), a strategic-sourcing tool that drives deeper supplier relationships by facilitating structured dialogue between law firms and clients. There is more than enough slack in the legal market for clients to get higher quality work at lower cost while law firms increase profits via improved realizations.
The premise of the Service Delivery Review is that, with people and pricing in place, rigorous collaboration on process offers the real levers to drive continuous improvement. Proper collaboration means involving nontraditional stakeholders. A prime example is addressing the need for more training on existing technology. One obstacle is that traditional technology training methods are terrible. Competence-based assessments paired with synchronous, active learning offer a better path forward. Following these principles, Casey created the Legal Technology Assessment platform to reduce total training time, enhance training effectiveness, and deliver benchmarked results.

Connect with Casey on LinkedIn or follow him on Twitter (@DCaseyF).