I had a chance to talk with Ravel Law’s CEO, Daniel Lewis, last week about Ravel’s new analytical tool for US Courts. I’ve been doing fewer and fewer product reviews lately because Bob Ambrogi and Jean O’Grady do a much better job at reviewing new products than I do, but Daniel and Ravel Law have been such proponents of law librarians and legal information professionals that I thought I’d dust off my reviewing skills and have him walk me through the new tool.

In my opinion, content is still king when it comes to legal research, but analytical tools make deciphering all that content so much easier, and they help researchers find relationships between issues that might otherwise go unnoticed. Ravel is introducing a new analytics platform today that identifies patterns in millions of court decisions to assess possible outcomes, and helps the litigation researcher deduce the best arguments or actions to take in his or her individual case, based upon the way specific judges or courts previously ruled on similar issues. In simpler terms, it allows you to better know your Judge or Court.
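
To make the “know your judge” idea concrete, here is a minimal sketch of the kind of tabulation an analytics tool like this performs, assuming nothing more than a list of motion outcomes tagged by judge. The records, names, and numbers below are hypothetical, and this is not Ravel’s implementation; it simply illustrates comparing a judge’s grant rate for a motion type against a district baseline.

```python
# Toy illustration only -- not Ravel's actual code or data.
from collections import defaultdict

# Hypothetical, made-up records: (judge, motion_type, granted)
rulings = [
    ("Judge A", "motion to dismiss", True),
    ("Judge A", "motion to dismiss", False),
    ("Judge A", "motion to dismiss", True),
    ("Judge B", "motion to dismiss", False),
    ("Judge B", "motion to dismiss", False),
    ("Judge B", "motion to dismiss", True),
]

def grant_rates(rulings, motion_type):
    """Return per-judge grant rates and the overall rate for one motion type."""
    per_judge = defaultdict(lambda: [0, 0])  # judge -> [granted, total]
    for judge, mtype, granted in rulings:
        if mtype != motion_type:
            continue
        per_judge[judge][1] += 1
        if granted:
            per_judge[judge][0] += 1
    overall = sum(g for g, _ in per_judge.values()) / sum(t for _, t in per_judge.values())
    return {j: g / t for j, (g, t) in per_judge.items()}, overall

rates, baseline = grant_rates(rulings, "motion to dismiss")
for judge, rate in rates.items():
    print(f"{judge}: {rate:.0%} granted ({rate - baseline:+.0%} vs. district average)")
```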

Daniel Lewis walked me through examples of the tool, ranging from specific issues in front of individual courts and judges to much more complicated, academic research into how broader issues are handled differently over time or across regions. From what I have seen, there is a lot of potential in the Court Analytics tool for practicing attorneys and research academics alike.

The image below shows the layout of the Court Analytics platform. There’s a lot more to see from the tool, and Jean O’Grady will present a webinar later today (1 PM ET) to demonstrate it.

The press release is listed below with more information.

Ravel Law Launches New Analytics for US Court System
First Platform to Offer Analytics for All Federal and State Courts
SAN FRANCISCO, CA – DECEMBER 5, 2016 – Ravel Law, a legal research and analytics platform, today announced the launch of Court Analytics, a first-of-its-kind offering that provides an unprecedented view into the caselaw and decisions of state and federal courts.
Ravel’s Court Analytics answers critical questions that litigators face in developing legal strategy. By analyzing millions of court opinions to identify patterns in language and case outcomes, Ravel empowers litigators to make data-driven decisions when comparing forums, assessing possible outcomes, and crafting briefs using the most important cases and rules. Complex projects that used to take hours or days of research can now be done in minutes, with answers that deliver richer intelligence and detail.
“Court Analytics offers law firms a truer understanding of how courts behave and how cases are tried. Attorneys can inform their strategy with objective insights about the cases, judges, rules and language that make each jurisdiction unique. The future of litigation will be different, and we’re already seeing changes – with top attorneys combining their art of lawyering with our science, to advance their arguments in the most effective way possible,” said Daniel Lewis, co-founder and chief executive officer of Ravel Law.
Court Analytics applies data science, natural language processing, and machine learning to evaluate millions of court decisions spanning hundreds of years from over 400 federal and state courts. Its features include:
· Search and filter caselaw by court, 90+ motion types, keywords, and topics.
· Predict possible outcomes by identifying how courts and judges have ruled on similar cases or motion types in the past.
· Uncover the key cases, standards, and language that make each court unique.
With Court Analytics, lawyers can take advantage of never-before-seen insights, such as:
· Judge Susan Illston in the Northern District of California grants 60% of motions to dismiss, which makes her 14% more likely to grant than other judges in the district.
· The Second Circuit is most likely to turn to the 9th Circuit for persuasive caselaw, and then to the 5th and 7th Circuits.
· Measured by citations, Judge Richard Posner truly is the most influential judge on the 7th Circuit. One of Posner’s most widely cited decisions is Bjornson v. Astrue, an appeal from a district court decision affirming the denial of social security disability benefits by an administrative law judge. The most important passage of that decision is on page 644, as it deals with the Administrative Law Judge’s “opaque boilerplate.”
· The California Court of Appeals has ruled on more than 1,000 cases that deal with the right of privacy. The two most important precedential decisions the courts rely on in such cases are California Supreme Court cases: White v. Davis and Hill v. National Collegiate Athletic Association.
Court Analytics adds to Ravel’s analytical research suite, alongside the award-winning Judge Analytics tool, which identifies the rules, cases, and specific language that a judge commonly cites, and Case Analytics, which finds key passages within cases and shows how they are interpreted. Using the Ravel platform, attorneys can gain insights customized to the unique circumstances of their case at every step of their research process. All three features are available today (www.ravellaw.com) via paid subscription (for individuals and organizations).
Ravel’s subscription-based services are enhanced by the “Caselaw Access Project,” the company’s ongoing collaboration with Harvard Law School to digitize the school’s entire collection of U.S. caselaw, one of the largest collections of legal materials in the world. Through this project, millions of court decisions are being digitized and added to the Ravel platform. This database of American law serves as an underlying data set that people can search and view for free in Ravel, in addition to using Ravel’s paid technologies to derive insights.
Ravel will be hosting a launch webcast to share more details on Court Analytics on Monday, December 5, 2016, 11:00 AM PST (2:00 PM EST). Register here to learn more: https://attendee.gotowebinar.com/register/7457269768975514627?source=CMS
###
About Ravel Law

Ravel is a legal search, visualization, and analytics platform. Ravel empowers lawyers to do data-driven research, with analytics and interfaces that help them sift through vast amounts of legal information to find what matters. Established by lawyers in 2012, Ravel spun out of interdisciplinary work between Stanford University’s law school, computer science department, and d.school. Ravel is based in San Francisco, and is funded by New Enterprise Associates, North Bridge Venture Partners, Ulu Ventures, Experiment Fund, and Work-Bench.

One of the best features that Lex Machina provides for Intellectual Property attorneys is its increased accuracy of information pulled from PACER. The improvements that Lex Machina has made on Cause-of-Action (CoA) and Nature-of-Suit (NoS) codes entered into PACER make it an invaluable resource for clearly identifying relevant matters and weeding out irrelevant cases. By improving the data, Lex Machina reduces the “garbage in – garbage out” effect that exists in the PACER database.

Now Lex Machina has turned its focus to cleaning up another annoyance found in PACER data, one that carries over into many of the other platforms that pull data from PACER. The Attorney Data Engine analyzes the PACER information and identifies the attorneys who are actually associated with a case, even if those attorneys do not show up on PACER’s attorney list.

I talked with Karl Harris, Vice President of Products at Lex Machina, a couple of weeks ago, and he gave me some insights on the new Attorney Data Engine and how they are increasing the accuracy of identifying the attorneys and law firms actually working on the cases filed through PACER. Karl mentioned that in New Jersey and Delaware, two very important states when it comes to Intellectual Property cases, only about 54% of the attorneys who work on the cases actually show up in the PACER information. That means that nearly half of the attorneys are missing from the metadata produced by PACER. When accuracy is important, missing nearly half of the attorney names can cause quite a problem.

Those of us who have ever demoed docket information for an attorney know that one of the first questions the attorney asks is “can you find ‘X’ case, in which I represented ‘Y’ client?” If you cannot find that information, the demo may as well end right there. Attorneys are issue spotters. If you cannot get accurate information, they will not trust that the product actually works.

With the new Lex Machina Attorney Data Engine, you should be able to find the attorney information, even if PACER missed it.

Here is an overview of the three components of the Attorney Data Engine:

  1. The PACER metadata itself: Every time Lex Machina crawls PACER data, they keep a historical record and can identify when attorneys are added to or removed from a case over time. This alone makes the PACER data better.
  2. Pro Hac Vice Extractor: Docket entries mention when attorneys are admitted pro hac vice to a case. Lex Machina also keeps a record, over time, of which attorneys are associated with which law firms.
  3. Signature Block Analyzer: Lex Machina analyzes the documents attached to docket entries and identifies the signature blocks for each attorney. Even if an attorney’s name doesn’t show up in the docket entry, a signature block on a filed document associates that attorney with the case (a rough sketch of this idea follows the list).
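
Here is a minimal sketch of the signature-block idea, assuming filings arrive as plain text and that electronic signatures follow the common “/s/ Name” convention. The pattern, function name, and sample filing are my own hypothetical illustration, not Lex Machina’s actual analyzer, which presumably handles far messier real-world documents.

```python
# Illustrative sketch only -- not Lex Machina's actual Signature Block Analyzer.
import re

# Hypothetical pattern: an electronic-signature line such as "/s/ Jane Q. Smith".
SIGNATURE_LINE = re.compile(r"/s/\s*(.+)")

def attorneys_from_filing(document_text):
    """Return attorney names found on signature lines of one filed document."""
    return {name.strip() for name in SIGNATURE_LINE.findall(document_text)}

sample_filing = """
Respectfully submitted,

/s/ Jane Q. Smith
Jane Q. Smith
Example & Partners LLP
Attorneys for Defendant
"""

print(attorneys_from_filing(sample_filing))  # {'Jane Q. Smith'}
```
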
Karl Harris states that the Attorney Data Engine makes Lex Machina “the best source for reliably figuring out which attorneys are involved in which cases.” 
It will be interesting to watch Lex Machina grow over the next couple of years, and to see how its new parent company, Lexis, assists in advancing its progress through access to additional data points. It is not a far jump to see how the Attorney Data Engine processes can be turned into a Company Data Engine using Lexis’ company information databases. Lexis has the content, and Lex Machina has the analytical resources to make that content better. It should make for some interesting results as the two companies learn how to adapt processes to the different products. 

There are very few legal research and analytics platforms that are truly unique, ground-breaking, advantage-giving resources. One of the newer products out there that does seem to fit this category is Lex Machina. Although I’m still not sure of the proper pronunciation of “Machina” (is it “Mah-CHEE-Na” or “Mak-IN-ah” or “Mah-KEEN-ah”??), it is one of the few products where I’ve heard lawyers from multiple firms say it is a “must have” as part of their IP litigation arsenal. In fact, when I was talking with an IP lawyer at a conference once, that lawyer said, “I was tired of getting my ___ kicked by a firm that was using it, so we had to bring it into our firm to level the playing field.”

[Note: the folks at Lex Machina set me straight, it’s pronounced “Mah-Kee-Nah”]

Products like Lex Machina and Neota Logic are really just scratching the surface of what data analytics and logic-based processing can do to help better position attorneys in understanding, planning, and overall strategy when handling legal matters.

Lex Machina is launching a new tool and is presenting a webinar today at 2:00 PM Eastern that describes the Legal Analytics platform and how law firm and in-house counsel can use it in their strategy of protecting and defending their IP resources. If you haven’t had a chance to look at Lex Machina before, this webinar might be a great place to start. I’ve put the webinar information and press release below.

Whether you call something like this big data analytics, or computer-based mining and logic, or adaptive learning resources, products like Lex Machina are the wave of the future. Leveraging huge amounts of data, computer processing, analytics, advanced algorithms, and human interaction is slowly creeping into the legal research market. What is really interesting is that these advancements are coming from the smaller companies, not the big ones. Whether they get acquired by the big players remains to be seen, but it is probably inevitable. Not to worry, though. I’m sure there are others at Stanford, MIT, and in basements and garages who are working on the next big advancement in legal research and analytics. I, for one, look forward to seeing what’s next.

Lex Machina Launches Custom Insights: Personalized Analytics for Unprecedented Insights into Cases, Motions, and Trends
New capabilities enable lawyers to design their own approach to crafting winning IP strategy

Menlo Park, November 12, 2014 – Lex Machina, creator of Legal Analytics®, today raises the bar for legal technology by introducing Custom Insights with the new release of its Legal Analytics platform. With traditional legal research tools, it’s difficult not only to find relevant cases, but also to glean key strategic insights, unless attorneys are willing to drill into each and every case. This is where Lex Machina gets started. Custom Insights helps attorneys surface strategic information from only those cases or motions they care about, quickly and easily. 

To mark the launch of these exciting new capabilities, Lex Machina will be hosting a webcast on November 13 to demonstrate how in-house and law firm counsel can leverage Custom Insights in their workflow.

“Since launching our platform in October last year, our engineers have worked closely with Lex Machina’s customers to take Legal Analytics to the next level,” said Josh Becker, CEO, Lex Machina.  “More than predefined charts and graphs, our customers wanted the flexibility to apply analytics to the cases and motions that matter to them. With Custom Insights we are delivering a groundbreaking capability that changes the business and practice of law.”

Lex Machina is introducing these new capabilities that provide attorneys with Custom Insights:

Case List Analyzer

The new Case List Analyzer puts lawyers in the driver’s seat by enabling them to select cases based on specific criteria and filter the results by case type, date range, court, judge, patent findings, and more. Available on every case list page, Case List Analyzer helps lawyers uncover strategic information and visualize trends – from how to approach a case, to how to litigate, or how to defend against legal action. With one click they can see trends and gain actionable insights across their case selection.

“Case List Analyzer allows me to quickly compare judges, law firms, parties and patents, using the criteria I care about. I can find a judge’s tendency to award damages of a specific type,” said Scott Hauser, Deputy GC, Ruckus Wireless. “Custom Insights enables me to craft winning IP strategy.”

Motion Metrics

This new feature identifies the docket events and documents connected to a specific motion, and offers Custom Insights into all activity that led to a court’s grant or denial of that motion. With Motion Metrics, attorneys can get Custom Insights for each motion chain within a case to analyze the performance of judges, or opposing counsel, and also compare motions across districts, judges, parties, and law firms. Attorneys are able to compare motion outcomes and select the strategy that has the highest probability of producing the desired results.  

“Motion Metrics may reveal that a judge almost never grants a motion for summary judgment,” said Miriam Rivera, former Deputy GC at Google. “The ability to see this information in an instant not only saves a tremendous amount of time, but also helps me for the first time to quantify my ROI.”

About Lex Machina

Lex Machina is defining Legal Analytics, a new category of legal technology that revolutionizes how companies and law firms compete in the business and practice of law. Delivered as Software-as-a-Service, Lex Machina creates structured data sets covering districts, judges, law firms, lawyers, parties, and patents out of millions of pages of legal information. Legal Analytics allows law firms and companies, for the first time ever, to predict the behaviors and outcomes that different legal strategies will produce, enabling them to win cases and close business.

Lex Machina is used by companies such as Microsoft, Google, and eBay, and law firms like Wilson Sonsini, Fish & Richardson, and Fenwick & West. The company was created by experts at Stanford’s Computer Science Department and Law School. In 2014, Lex Machina was named one of the “Best New Legal Services” by readers of The Recorder, American Lawyer Media’s San Francisco newspaper.

Many of you may have seen Jeff Bezos’ interview this weekend on 60 Minutes, where he discussed how the next phase of Amazon’s delivery process looks to use drones to deliver products and remove third-party delivery services altogether. It is an interesting concept, and one that focuses again on streamlining the process of moving physical products from the manufacturing process to the consumer in the most efficient (cheap and fast) way. This will put a scare in the UPS and Postal Service delivery people. I guess the next progression in the Amazon process will be to sell consumers devices that simply manufacture the desired product directly in the consumer’s home. That will scare the Foxconn folks. Like it or not, the whole business model is to remove the human from the process as much as possible, and to link the customer with the product as quickly, often, and cheaply as possible.

Information Professionals like Librarians, Analysts, and Researchers are very familiar with this process because the information we deal with every day has already gone through the kind of transition Bezos is attempting with physical products. So, for us, what is the next step? What is the equivalent of Amazon’s drone that will be the next driver in information delivery? I think the answer lies in the advancements we see taking place in e-discovery products, specifically the predictive coding methods that are becoming commonplace in that market, and in using those tools along with research savvy to create products and results that are much more analytical.

As Information Professionals, we will need to start pivoting away from being ad hoc researchers and shift over to more predictive analysis processes. In fact, as I was drafting this post, I serendipitously received an invite to a training conference specifically on “Predictive Analytics & Business Insights.” Within the description of the training are specific concepts that we should begin assembling into our customer service models:

  • Predictive Analysis – harnessing predictive capabilities to optimize business operations and develop an analytics culture
  • Risk Analysis – leveraging the wealth of organizational data, in real time, to predict risk and respond accordingly
  • Marketing Analytics – how analytics impact and optimize marketing planning, operations and performance
  • Customer Analytics – customer-driven analytics that encourage innovation, enhance engagement capabilities, enhance retention and loyalty, and promote growth

The past twenty-five years have produced an information workflow that has removed much of the human from the process. Lawyers receive their information directly from the producers of that information. The Information Professional’s job has become more and more about vetting all of the different products out there and helping to make sense of them all. The value has shifted away from our ability to find the golden nugget of information within a mountain of data and toward getting our customers access to the best products, in the most efficient manner, all while containing costs. This will continue to be a valuable service, but we have a strong talent pool of researchers and analysts whom we need to transition away from the traditional research model and over to areas of predictive analytics to help drive new business to the firm.

Predictive Analysis may not be as cool or flashy as Amazon’s drone idea, but for those of us looking for the next big idea in the Information Professional market, this may be what keeps us relevant in a market that needs us to step up and fill this role.

Earlier this week, I had the honour (that is spelled correctly, check Greg’s posting on the CLawbies for more detail) of attending the SLA (Special Libraries Association – for the non-librarian 3 Geeks crowd) Annual Conference in Chicago. As one of the competitive intelligence guest Geeks, it should come as no surprise that I mostly spent my time at the CI Division sessions. Each CI session was better than the last. Topics ranged from Cross Cultural CI (how to get information, collaborate, and understand various cultures when doing CI) to the Evolving Role of CI in Law Firms (a session I would imagine would be of particular interest to 3 Geeks subscribers). Session information, including presentations, will be available here in the next few weeks.

In the meantime, I wanted to share a few highlights in no particular order:

·    Special librarianship is alive and well. We keep hearing about layoffs, decreasing budgets, shrinking floor space and such across libraries.  That all may be well and true, BUT the rallying power of librarians is astounding.  New titles are popping up, new roles are being created, and librarians are leading that change. The energy is amazing.

·    Big data is coming to a library/information centre/shop near you.  Get in front of it.  Start thinking about how you can help your clients manage and understand their data.  We need to think beyond taxonomy, even beyond knowledge management, to something tangible and actionable.  Internal and external data are the keys to AFAs, increased productivity, and competitive advantage.  Harnessing big data will be the differentiator for law firms in the next several years.  Librarians with their information knowhow need to be involved in this process in firms.

·    Get embedded.  We’ve heard it before and it is starting to stick.  Building relationships from the inside is central to garnering trust and being able to provide proactive information. Even if your firm doesn’t operate on an embedded basis, keep your pjs nearby, and invite yourself over to the slumber party when the opportunity presents itself.  Figuratively, of course.
·    Librarians can do CI and can do it VERY well.  Wednesday morning, the last day of the conference, at 8 a.m. no less (following the famous IT Division Dow Jones Sponsored Dance Party), Michel Bernaiche (of Aurora WDC and Chairman of the Board at SCIP) and Fred Wergeles (a SCIP Fellow) presented a workshop-style session on analytical tools that deliver value.  The librarians/attendees learned a couple of new tools for analysis, but the real learning came when the participants blew the panelists away with their ability to digest, synthesize and pull out the actionable tidbits of information – i.e. intelligence.  Don’t be afraid of doing CI; embrace it.
·    Social media is not a trend.  For several years now, there have been sessions at all manner of professional development conferences related to social media.  SLA 2012 was no different.  Social media, whether you personally engage or not, is not going away.  And if you don’t believe me, check out the Twitter stream #slachicago from the past week.  Or check LinkedIn updates or Facebook statuses.  Like it or not, if you don’t yet have a strategy to deal with it, get one.  There may even be an app for that.
Roles are changing, responsibilities may be increasing, and the data sources are becoming more layered and harder to control. But woe is not (or shouldn’t be!) you. It is an exciting time for law librarians/information professionals/law firm CI practitioners.

If you have a service that gets 20 million unique users a month, and you have personal information on those users, what could you do with it? For the staffers at music streaming site Grooveshark, the answer was to build a free tool that shows the demographics, culture and lifestyles, habits, and product preferences of the users who listen to a particular artist. This morning, Grooveshark launched a data analytics resource called Beluga. This resource allows you to type in the name of a musical artist and find out information about the fans who listen to their music. The press release that Grooveshark wrote this morning says that the information is designed to help artists better position themselves to their fanbase (it could also be related to the fact that all four major record labels are suing Grooveshark… but we’ll stick to what Beluga does for this post).
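
As a rough illustration of what an “over-represented” fan segment might mean in a tool like this, here is a minimal sketch assuming the results come from simple rollups of listener surveys. The rows, group labels, and lift formula are my own hypothetical example, not Grooveshark’s methodology.

```python
# Purely illustrative -- not Grooveshark's actual Beluga code or data.
# A demographic group is "over-represented" among an artist's listeners when its
# share of that artist's fans exceeds its share of all surveyed users.

# Hypothetical survey rows: (user_id, demographic_group, artist_listened_to)
surveys = [
    (1, "55+ female", "Eric Clapton"),
    (2, "55+ female", "Eric Clapton"),
    (3, "18-24 male", "Eric Clapton"),
    (4, "55+ female", "Other Artist"),
    (5, "18-24 male", "Other Artist"),
    (6, "18-24 male", "Other Artist"),
]

def over_representation(surveys, artist, group):
    """Lift of a demographic group among one artist's listeners vs. all users."""
    fans = [g for _, g, a in surveys if a == artist]
    share_of_fans = fans.count(group) / len(fans)
    all_groups = [g for _, g, _ in surveys]
    share_overall = all_groups.count(group) / len(all_groups)
    return share_of_fans / share_overall

lift = over_representation(surveys, "Eric Clapton", "55+ female")
print(f"55+ female lift for Eric Clapton: {lift:.2f}x")  # > 1.0 means over-represented
```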

Grooveshark’s co-founder, Josh Greenberg believes that by exposing these analytics, artists can “learn about their fans, route their tours, sell merchandise, work on building a following, and take their careers to the next level.” He thinks there is also value from the sales and marketing aspects of the record label as well since they can position themselves “to partner with artists who connect with their target audience, presenting endless opportunities.”

If you’ve used Grooveshark in the past, you know that from time to time they ask you to fill out a quick survey. It seems that we are seeing the first of the results of those surveys, and it makes for some interesting information when looking at musical groups. Here’s a sample of results I got when I searched on Eric Clapton:

  • 55+year old females are moderately over-represented (and yes, they look Wonderful Tonight)…
  • Fans of Clapton seemed to have retired in the past year (remembering Days of Old)…
  • Israelis seem to love themselves some slow-hand
  • Clapton users seem to show some Old Love and rent their movies from Blockbuster online…
  • Poor people don’t really like to listen to Clapton (no Hard Times for Clapton fans)…
  • Clapton listeners tend to bank at Citibank (cause, Nobody Knows You When You’re Down and Out)…
  • Apparently, Clapton sounds best when you’re driving and Traveling Alone.
What is the accuracy of these survey results?? I’m not sure. I guess the case could be made for not putting too much faith in the results you get from a voluntary web survey. However, I did look at the survey results for some smaller bands that I like, and the surveys seemed to fall about where I’d expect for those bands, although the geography preferences tended to trend outside the US (maybe because Americans are less likely to fill out the survey??).
Regardless of whether or not you trust the results, if you love to see examples of what can be done with large amounts of data, then you’ll find yourself having some fun looking at Beluga.

Some good friends of mine sent me a link to an interesting blog post about Competitive Intelligence (CI). The post, written by Eric Garland, is a fascinating interpretation not only of the process of CI but also of the way CI is received by decision makers. The author seems frustrated that intelligence and analysis are not always received openly and without prejudice. That frustration, however, overlooks the basic truth that every organization and decision maker has its own set of cultural biases that will form its basis of interpretation.
It may be a common occurrence for organizations and decision makers to prefer to hear intelligence that only reinforces their beliefs. This is the equivalent of (and just as useful as) a three-year-old covering their ears and shouting lalalalalala. But making a decision and then only listening to intelligence that supports it does not translate into success. I must say that I haven’t experienced that in almost 20 years of performing CI in both the corporate and law firm environments. Whether the intelligence was bad or good, it was something they knew they needed to hear. I like to think that their heightened instinct to survive outweighed their need to be right.
I feel that good CI just is. It is neither good nor bad. Good CI’s purpose is not to stroke egos or present a fantasy world where all is sweetness and light. Its purpose is to obtain and analyze data and then present it in an easily understood format so that decisions are made with context, not in a vacuum. A CI Analyst interprets that data using only the firm’s goals and culture, as well as the unique qualifications of the recipient, to paint a picture that says “Here’s where we are, there is where we want to be, and these are the challenges we need to account for in our planning.” Doing less by omitting important data or telling them (both organizations and decision makers) what they want to hear reduces the value of CI significantly and will only set up the organization for failure. Organizations thrive or wither based on their willingness to deal with the world as it is rather than how they would prefer it to be. Presenting the real world so that organizations will be prepared to succeed is what CI is all about. How or whether it is used should not affect the Analyst’s work in the least.

My recent series on Staying Relevant suggested bar associations as a potential source of innovation and change in the industry. On the heels of that suggestion, the bar association of clients (a.k.a. the ACC) is announcing its new Contract Advisor system, developed in partnership with KIIAC.
Most readers know the 3 Geeks are fans of KIIAC as a next-generation KM tool. Well, Kingsley has really outdone himself with this partnership. This new member benefit basically brings the full power of analytical KM right into the hands of ACC members. Starting today, members will have access to ten document templates, which are contracts and agreements that in-house counsel would use on a regular basis. Kingsley has taken volumes of sample documents for each document type, analyzed them, and made standardized versions available. Not only that, members can view alternative clauses within each document, enabling them to custom-build documents utilizing any clause components they choose. Finally, members can take documents they have received and compare them against the standard document types to see if any clauses are missing or highly variable from the standard.
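
To make the “compare against the standard” idea concrete, here is a minimal sketch assuming the standard for a document type can be boiled down to a list of expected clause headings. The headings, the sample contract, and the naive substring matching are all hypothetical illustrations, nothing like KIIAC’s actual clause analysis.

```python
# Illustrative sketch only -- not the ACC/KIIAC Contract Advisor implementation.

# Hypothetical standard clause headings for one document type (e.g., an NDA).
STANDARD_CLAUSES = [
    "Definition of Confidential Information",
    "Obligations of Receiving Party",
    "Term",
    "Governing Law",
    "Remedies",
]

def missing_clauses(document_text, standard_clauses):
    """Return the standard clause headings that never appear in the document."""
    text = document_text.lower()
    return [clause for clause in standard_clauses if clause.lower() not in text]

received_nda = """
1. Definition of Confidential Information ...
2. Obligations of Receiving Party ...
3. Governing Law ...
"""

print(missing_clauses(received_nda, STANDARD_CLAUSES))  # ['Term', 'Remedies']
```
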
As I understand it, new document types will be added over time, increasing the value of the service.
This is truly a next-generation, change-enabling member benefit. This is a quantum leap ahead of how things are currently being done. These are not forms being occasionally updated by someone you don’t know. This is dynamic content based on a wealth of knowledge that evolves as it is used.
Normally I would say check it out, but you need to belong to ACC for that to happen. So maybe I should say, join the ACC. Or maybe talk to your bar association and get them to follow suit.
Well done ACC and KIIAC.

Last night, I had the honour (that is spelled correctly, check Greg’s posting on the CLawbies for more detail) of participating in “Using KITs+1™ in Boosting Your Organization’s Analytical Fitness™,” presented by Dr. Craig S. Fleisher, Chief Learning Officer/Aurora WDC, to a joint audience of members from the Toronto Chapters of Strategic and Competitive Intelligence Professionals and the Special Libraries Association. This was the first time Dr. Fleisher presented this material, and I can promise you it won’t be, nor should it be, his last.

The talk was about the next generation of intelligence analysis – analysis 2.0, if you will – and our analytic fitness levels. I won’t recap the entire presentation because I would likely mash it up horribly, but here are five take-aways, musings and thoughts on competitive intelligence (CI) and analysis 2.0 that I am still pondering today. The ability to provide good CI and to really know what your clients (lawyers or otherwise) need is predicated on trust. Studies have shown that it takes 7.3 years to build the kind of trust necessary to be seen as the kind of “trusted business advisor” our clients expect. Um… I knew it took a year to get your “legs” in a firm, but 6.3 more to be trusted??

In the last several years, we have seen a significant increase in our data storage capabilities. I carry a 3 gig thumb drive on a key ring for example, but all this data we carry around and have access to has a very short half-life. Why are we storing it beyond its shelf life and/or not using it sooner? 

Analysis 2.0 is about dialogue and discussion – think crowd-sourced analysis. This necessarily means we, as CI practitioners, will have to recognize our own blind spots and prejudices, right?

The current CI cycle, no matter how many steps you include, always involves the definition of an issue or a Key Intelligence Topic (KIT), then data gathering, then analysis, and so on. In the next generation of analysis, we will need to provide real-time analytics. Can we do various steps concurrently? Perhaps the answer to real-time analysis is more in keeping with the scientific method of stating a hypothesis and then collecting the right data to prove the point – is that CI or just cheating?

CI practitioners need to develop or maintain a sense of humility about intelligence and recognize that sometimes you will be wrong, and that’s okay. This goes hand in hand with being a trusted advisor. Think of it like the weather man (or woman). Each morning we trust our clothing choices to the forecasted weather. Often the weather person is right, sometimes not, but we rarely lose complete faith, because the next morning there we are again, listening for the forecast and choosing outerwear based on what we hear. Like the weather people, we have to know that we are trusted even if we are sometimes wrong. And sometimes, as creators of intel, we have to realize that experience counts too. Sometimes you just have to step outside and feel the temperature yourself, right? Use your gut and be okay if you are wrong.

Dr. Fleisher is a dynamic and exciting speaker – he encourages discussion and forces people to think about CI in ways we haven’t yet. He is pushing the envelope and creating new paradigms. Are your analysis skills ready for the workout?

While having a good laugh at sarcastic signs (Brilliantly smart-ass responses to completely well-meaning signs) this morning with my fellow Bradys, it occurred to me that the problem, or the hilarious joke, depends on interpretation. If correctly interpreted, signs can indicate many things: where to walk, how to use a tool, or when there is danger. If misinterpreted, however, a sign becomes worse than useless – it’s a joke.

This is a common situation for many businesses, including law firms. There is plenty of data out there; some might say too much (see next week’s Elephant Post question). But it’s not enough to have signs, or data. You must apply a correct interpretation.
You collect data about matters, fees, and clients. You perform research on markets, macroeconomic conditions, and industries worldwide. But unless you can use that information to match Data 1 to Data 2 and correctly determine that Data 3 is the resulting value (and not Data 4 or Data Orange), none of your information can become intelligence.
You end up with “applaud the jellyfish”. Or, “push button receive bacon.” Which, although very funny, will not help you dry your hands.