01/13/2015

Analytics For Child Protection

The Chicago Tribune recently ran a series describing the cycle of violence in Illinois residential treatment centers for children. The irony of the series was that children were placed in these facilities to be protected from violence and then became victims of violence there.

Stories like these are filed under a few usual categories: improper oversight of bad-actor facilities, underfunding of mental health and child welfare, or simply the inevitable result of an intractable set of social problems. These are certainly reasonable explanations. But the budgets of the agencies responsible for these programs are huge, in the billions of dollars. There does appear to be improper oversight, yet the articles pointed out that tens of millions of dollars are already spent on oversight.

What if these situations are a symptom of a simple managerial failure to look at the right data at the right time? Or of not having the right operational model to attack the problem? What if everything needed to deal with this issue were readily available? Is this an oversimplification of a complex and systemic issue? Most likely.

But what if we knew which residents of an institution were likely to be violent, which children were at risk of becoming victims, and which institutions were dangerous?

What if we knew this information in real time? If we did, the problem would reduce to following a set of rules. First, place no child with high victim risk in an institution where there are children who have a history of victimizing. Second, place no child in a treatment center that has a recent history of violence. Third, make reimbursement dependent on violent-incident rates and offer monetary upside for centers that control violence. Finally, respond to every violent incident report in real time and identify and eliminate the cause of the safety breakdown. Was it too few staff? The wrong kind of staff? Improper monitoring? Too many violent kids? The wrong mix of kids in the center?
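
To make the first two rules concrete, here is a minimal sketch in Python of what an automated placement check might look like. This is an illustration, not our product's implementation; the field names and data shapes (victim_risk, history_of_victimizing, violent_incidents_last_90_days) are hypothetical stand-ins for whatever measures a real system would carry.

    # Sketch of the first two placement rules above. All field names
    # and thresholds are hypothetical illustrations.

    def placement_allowed(child, center):
        """Return (allowed, reason) for a proposed placement."""
        # Rule 1: no high-victim-risk child in a center where any
        # resident has a history of victimizing others.
        if child["victim_risk"] == "high" and any(
                r["history_of_victimizing"] for r in center["residents"]):
            return False, "high-risk child with known victimizers in residence"
        # Rule 2: no placement in a center with recent violence.
        if center["violent_incidents_last_90_days"] > 0:
            return False, "center has recent violent incidents"
        return True, "placement permitted"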

No problem like this is easily solved. There are fundamental issues here that depend on the larger foster care and mental health systems. But our guess is that the citizens of Illinois would feel a whole lot better about the kids who do end up in residential treatment if there were a proactive, data-driven system in place to protect them.

The tools, both in terms of measurement expertise and technology, are easily available for this model of protection.  We know because Objective Arts builds this kind of technology and has worked with governments and academic experts for years on models for serving and protecting kids.

We believe it's time to try something new. These stories pop up in the Tribune every few years, and the response is largely the same: bring in an expert academic or industry group, convene a panel, start inspecting centers, suspend a few of the particularly bad treatment centers, fire a few people, publish a report a year later ... and wait for the same headline and horrific stories in another few years.


09/02/2014

Efficiently Measuring Health Care Risk

Measuring health care risk is a cornerstone of the Accountable Care Organization (ACO) model and of modern health care generally. Citing a report by Chilmark Research, Clinical-Innovation.com recently wrote that "... to succeed in a value-based model, healthcare organizations require tools to leverage clinical, claims and demographic data to improve care delivery and manage population risk."

Predicting risk is often difficult for at least three reasons.  First, you need the data to build an overall prediction model and you need data from an individual patient to run that model.  Second, you need a model that works for the outcome(s) of concern.  Third, you need to be able to easily run that model in your business operations.

It follows that the best possible model combines the lowest cost of data acquisition, both for model building and model running; that the data, when run through the model, will provide a good prediction; and that the generated prediction will be usable in managing health care populations.

Objective Arts has partnered with a group of economists who have studied prediction with these factors front and center. They will soon publish a paper in Health Services Research outlining a model that satisfies the first two requirements above. Perhaps the biggest problem in predicting health care risk is assembling a very unwieldy mass of claims data to build models and then run them for individual patients. These authors have shown that you can eliminate this problem altogether and still do a relatively good job at prediction. It turns out that self-reports are surprisingly good at predicting future health care risk, as long as you know what to ask. The paper describes the statistics that went into deciding what to ask and determining the numerical impact each item has on the overall risk prediction.
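
In spirit, a self-report model of this kind reduces to a short list of questions, each carrying a weight on the overall risk score. The sketch below shows the general shape only; the items and weights are invented placeholders, not the coefficients from the forthcoming paper.

    # Sketch of a self-report risk score: a weighted sum over a few
    # survey answers. Items and weights are invented, not the
    # published model's coefficients.

    ITEM_WEIGHTS = {
        "self_rated_health_poor": 1.8,
        "er_visit_past_year": 1.2,
        "takes_5_plus_medications": 0.9,
        "difficulty_walking": 0.7,
    }

    def risk_score(answers):
        """answers maps item name -> True/False from the self-report."""
        return sum(w for item, w in ITEM_WEIGHTS.items() if answers.get(item))

    print(risk_score({"self_rated_health_poor": True,
                      "er_visit_past_year": True}))  # 3.0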

Objective Arts provides the necessary software automation for an organization to easily integrate predictions into business flows. We enable easy use of the model through Web Services or a lightweight user interface. We also give an organization the ability to fire alerts and send messages based on a calculated risk level. And we provide autogenerated reports and graphical widgets that give managers and practitioners easy insight.
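
Integration then looks roughly like the sketch below: post a patient's answers to a scoring service and act on the returned risk level. The endpoint URL, payload shape, response field, and threshold are all hypothetical placeholders, not our actual API.

    # Hypothetical integration sketch: score a patient via a web
    # service and alert above a threshold. The URL, payload, and
    # response shape are placeholders.
    import requests

    def notify_care_manager(patient_id, risk):
        """Stand-in for a real alerting hook (message, page, ticket)."""
        print(f"ALERT: patient {patient_id} risk {risk:.2f} exceeds threshold")

    def score_and_alert(patient_id, answers, threshold=0.8):
        resp = requests.post(
            "https://example.com/api/risk-score",  # placeholder endpoint
            json={"patient_id": patient_id, "answers": answers},
            timeout=10,
        )
        resp.raise_for_status()
        risk = resp.json()["risk"]  # assumed response field
        if risk >= threshold:
            notify_care_manager(patient_id, risk)
        return risk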

If you would like to try the model out, please contact us at info@objectivearts.com.  We would be happy to show you how it works in real time.

05/07/2014

New study finds access to EHR in acute care situations may influence care given to patient

"Unlike medical records kept in paper charts, electronic health records (EHR) provide numerous access points to clinicians to review a patient's medical history. A new study has found access to electronic health records in acute care situations may influence the care given to that patient, and in some cases, failure to review the EHR could have adversely affected the medical management. The findings are reported in the May 2014 edition http://content.healthaffairs.org/content/33/5/800.abstract of Health Affairs. "

Wouldn't review of the EHR be one of the more critical items on any modern medical care checklist? What's perhaps more interesting is that this counts as a finding at all. Does it reflect a skepticism about the technology so deep that researchers would test whether it has no influence, or even a negative one, on the primary decision use cases it was intended for? Unfortunately, these questions do exist: one does see studies that identify unintended consequences of an EHR, and perhaps negative patient effects, for some use cases.

04/28/2014

Operational Algorithms

Given the staggering quantities of data that confront us on a daily basis, the challenge becomes a matter of 1) choosing the best “algorithm” that will enable us to make the wisest decisions and 2) using the results of that algorithm to improve operations.

At Objective Arts we build software for deploying algorithms in business operations. But what is an algorithm? We feel that one critical part of describing our software lies in clarifying terms. The software industry is plagued with overloaded and vague terms, jargon, and buzzwords. As a result, it has become all too easy to misunderstand and misconstrue concepts. One example from recent media attention is the term "Big Data." Few people can clearly define it, and even among those who can, interpretations of the term vary widely.

"Algorithm" is also a buzzword of sorts.  Fortunately, the term does have some concrete definitions.  Wikipedia defines it as “a step-by-step list of directions that need to be followed to solve a problem”.  For computer scientists, this is an entire field of inquiry.  However, their focus is on efficiency of software code.  Another form of algorithm is the recipe a cook uses  Our emphasis is to employ numeric recipes to reduce uncertainty and risk  (see Douglas Hubbard’s insights in ‘How to Measure Anything’).  The fundamental problem Objective Arts is trying to solve is the lack of standardized data to make decisions.  Algorithms in our systems typically consist of numeric formulas or a series of “if this, then” statements.  These two entities can be combined and configured into very complex organizational advice recipes.  For your organization, the means to solving a “problem” becomes a matter of generating a series of algorithmic results that can reduce uncertainty at any level, be it for an individual practitioner, or for an entire organization.

At Objective Arts, we also focus on finding and employing those algorithms that have the greatest likelihood of adding value.  Given a choice between an untested algorithm devised by a person with a few years of experience, based on anecdotal data, and an evidence-based algorithm created by an academic researcher who has spent two decades researching and publishing an idea, we will nearly always employ the latter.  

But a key feature of our software is that the customer gets to select which algorithm(s) they would like to employ.  Our job is to provide the best mechanism to 1) Effectively and efficiently implement that algorithm through software, and perhaps more importantly to 2) Efficiently integrate that algorithm to solve business problems and make wise decisions.   Deriving “answers” or “advice,” though necessary, is not sufficient.   If software does not help a customer act efficiently and effectively on a large number of operational decisions, it is not doing its job.

 

01/14/2014

Value-Based Health Care

One of the important issues we think a lot about is the set of metrics and analytic tools our customers will need to practice "Value-Based Health Care." That is, health care paid for based on the quality of its outcomes as opposed to simply the volume of care.

A recent Harvard Business Review blog post discusses these issues and lays out some recommendations. These make a lot of sense, as does the basic idea of paying for quality and outcomes while trying to contain costs.

But the data story is complicated. First there is the issue of leveling the playing field in terms of outcomes. High quality could be the result of low case complexity, as opposed to purely the best care. High quality should also reflect real improvement, not random variation: positive change should have, at a minimum, some degree of statistical significance. In addition, the system that measures quality should accurately reflect what it is measuring. It needs to be properly calibrated and should not contain systematic error that overstates quality simply because of how a measurement is taken.

We think these are critical factors that both payers and providers need to take into account in their analytic strategies. First, are they adjusting for complexity when judging relative outcomes? Second, when they are measuring outcomes, are they seeing change that is, as best as can be determined, real and not the result of a random move? Finally, are the measurement tools in control? That is, when the tools detect a change from ill to better, or from score 1 to score 10, is that change an accurate reflection of what really happened?
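
As one concrete illustration of the second question, consider a provider whose success rate moves from 62% to 68% on modest samples. A two-proportion z-test, sketched below with made-up counts, asks whether that move clears even a minimal bar of statistical significance.

    # Sketch: is a change in an outcome rate real or a random move?
    # Two-proportion z-test with made-up counts; no dependencies.
    import math

    def two_proportion_p_value(success1, n1, success2, n2):
        p1, p2 = success1 / n1, success2 / n2
        pooled = (success1 + success2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p2 - p1) / se
        # Two-sided p-value from the standard normal CDF.
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    print(two_proportion_p_value(124, 200, 136, 200))  # ~0.21: could be noise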

In future posts we will talk about working with these factors in software and operations.

10/20/2013

Be Careful Who You Trust

We read a lot at OA. For a business person, reading is strength training. The right books will challenge your existing beliefs and make you refine or discard them. In the past few months a few authors have really challenged us.

In today's world there is a tendency for much of one's reading to consist of tweets, blog posts and web pages.  For the most part, this type of reading won't fundamentally change your way of thinking. A book with an original point of view, and in particular an irreverent point of view, will teach you something even if you can't get your arms around everything the author says.

What these books have in common is that their authors do not accept conventional wisdom or popular business icons.   We can't do justice to what these authors have said in detail so we advise you to read their work. One thing we can suggest right now is to discard the notion that a popular Twitter feed is where you can really learn important fundamental ideas.  Some things just don't change.  People who write books, big thoughtful books, still demand our attention whether or not they blog or tweet.  

These authors will fundamentally challenge whose wisdom you should trust.  The authors slaughter the sacred cows - McKinsey and Company, the Boston Consulting Group, the banking industry, the entire profession of academic economists, Tom Peters, Jim Collins, and anyone who thinks they can make reasonable predictions about the future.  

They give you a new appreciation for the hard thinking style of a philosopher, and the analytical skills of a careful statistician.  Understanding the fundamentals of probability and statistics is an incredibly important skill many of us don't apply very well.

We think these books can help with big decisions because they teach skepticism in the extreme. Accept nothing, and follow your instincts when things don't make sense. If something doesn't smell right, think hard about the possibility that there is something seriously wrong, no matter the reputation of the person or firm saying it.

 

 

09/11/2013

Data Driven = Safer Food

LINCOLN — Food safety regulations already mandate that companies in the food industry check daily for contaminants. But in the age of “big data,” a couple of University of Nebraska-Lincoln food safety researchers say merely logging the information on a spreadsheet isn't good enough.

via www.omaha.com

An excellent article on a data-driven approach to building a safer food supply. It recalls the assembly-line approach to quality control, except the defect is a dangerous microbe.
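
To make the analogy concrete: the same daily contaminant counts that sit idle in a spreadsheet can drive a basic control chart. The sketch below, with fabricated counts, flags any day that exceeds a three-sigma limit computed from an in-control baseline.

    # Sketch: three-sigma control limit on daily contaminant counts,
    # in the spirit of assembly-line quality control. Data fabricated.
    import statistics

    def control_limit(baseline_counts, sigmas=3):
        mean = statistics.mean(baseline_counts)
        sd = statistics.stdev(baseline_counts)
        return mean + sigmas * sd

    baseline = [2, 3, 1, 2, 4, 2, 3, 2, 3, 2]   # historical in-control days
    limit = control_limit(baseline)              # about 4.9
    todays_count = 9
    print(todays_count > limit)                  # True: investigate today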

09/03/2013

Amazon Continues Its March Into Enterprise Clouds ....

Amazon Continues Its March Into Enterprise Clouds And Wallets – ReadWrite

This article quotes the following assertions from a Gartner study:

  • "The road to increased cloud usage will be through tactical business solutions addressing specific problems, not through broad, strategic infrastructure replacements.
  • The business impact of cloud services increases as they continue to move up the cloud services value chain, from infrastructure services to business process services.
  • The introduction of cloud solutions will lead to a more diverse solution portfolio with widely varying implementation and migration timelines."

We could not have written a better description of the OA Cloud approach. It definitely makes sense for an organization to realize the cost and security advantages of AWS as a pure infrastructure strategy. But AWS enables another way in: through its APIs, AWS makes for a better business application and some unique paths to custom applications that solve business problems faster and at lower cost.

08/07/2013

Fraud and Measurement

A recent article on the CNN website, "Rehab racket: Frauds, felons and fakes," strikes us as an example of a failure to measure. But from reading the article one would conclude that it is about a failure to investigate. We think investigation is an inefficient model for validating the delivery of a service.

Why is measurement the right metaphor? A measurement regime forces a provider to justify a service event either honestly or fraudulently. Outright fraud can be very hard to stop given the creativity of those perpetrating it. But if you require the potential perpetrator to submit a measurement that justifies service delivery, you improve your odds of fraud detection. A good measurement protocol tells a story, and that story will contain subtle clues as to whether or not it is true. Computers are good at catching these situations efficiently and generating red flags.

Every measurement protocol has an implicit normal pattern across its measures. Some patterns are likely; some are highly unlikely. For example, some items in a protocol may be highly correlated, so scores that diverge widely on those items are a red flag. Someone who enters many items fraudulently enters them in a particular pattern: it may be too random, have far too little variation to be real, or look too much like another submission. Finally, in measurement systems where repeat measures are required, systematic inflation of time 1 to time 2 improvements also has a pattern that can be compared to a realistic or population-verified trajectory.
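
Two of those red flags are simple enough to sketch directly: divergence between a pair of normally correlated items, and a submission whose scores show too little variation to be real. The item names, thresholds, and data below are invented for illustration.

    # Sketch of two of the red flags described above. Item names,
    # thresholds, and data are invented.
    import statistics

    def flag_divergent_pair(submission, item_a, item_b, max_gap=3):
        """Flag when two normally correlated items diverge widely."""
        return abs(submission[item_a] - submission[item_b]) > max_gap

    def flag_too_uniform(submission, min_stdev=0.5):
        """Flag when scores show too little variation to be real."""
        return statistics.stdev(submission.values()) < min_stdev

    s = {"functioning": 1, "symptom_severity": 9, "risk": 8, "support": 7}
    print(flag_divergent_pair(s, "functioning", "symptom_severity"))  # True
    print(flag_too_uniform({"a": 5, "b": 5, "c": 5, "d": 5}))         # True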

But with just the submission of an invoice for services and some free text,  these patterns are much harder to catch.  Certainly it's possible,  but it is much more difficult and much less efficient.  The measurement model is certainly more efficient than sending investigators to hundreds of agencies.

07/17/2013

Big Skills and Small Data

A common theme for us is the relationship between what we call "Small Data" and extremely smart and accomplished people in their work environments. Small Data is a limited set of data that has been extensively tested and refined and then applied to a specific problem. Our intuition might tell us that small data-driven strategies would not make sense in the environment of the most skilled practitioners: to use David Halberstam's terminology, the best and the brightest.

A current description of this paradox is Atul Gawande's book The Checklist Manifesto. In his chapter "The Hero in the Age of Checklists," Gawande describes the concept of "expert audacity." We sometimes have the notion that heroic action produces the best outcomes on its own. Somehow the idea of a great surgeon using a 19-item checklist doesn't fit; it just wouldn't work on an episode of Grey's Anatomy. But in some fundamental way, as Malcolm Gladwell points out, it's about humility. Perhaps really, really great experts, e.g. Gawande, get small data and big skills together.

Gawande explains it much better than we ever could, but we see the same phenomenon all the time. In some conversations with prospective customers we feel compelled to ask, "You're the best and the brightest. Why would you need to do that?" Sometimes, as Gawande puts it, even in high-skill processes there is a significant need for the "regimentation" of Small Data.

But perhaps the most interesting example is in the prediction of violent behavior. The conclusion of the authors of Violent Offenders: Appraising and Managing Risk is that clinical judgment in predicting recidivism not only adds nothing when combined with an actuarial assessment, but actually makes for poorer decisions. Stated differently, the most highly trained psychologist or psychiatrist cannot hope to match the predictive power of a 12-item checklist backed by historical data. Assuming that the process used to create the checklist effectively covers the population it is applied to, actuarial instruments that collect a relatively small number of data elements, but have extensive data behind them, are surprisingly effective.
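
The actuarial logic is almost embarrassingly simple to state: sum a handful of weighted historical items and map the total to a risk band. The sketch below shows the shape of such an instrument; the items, weights, and cut points are invented stand-ins, not the instrument the book describes.

    # Sketch of an actuarial-style instrument: a few weighted
    # historical items mapped to a risk band. Items, weights, and
    # cut points are invented, not from any published instrument.

    ITEMS = {
        "prior_offenses_over_3": 2,
        "age_at_first_incident_under_16": 2,
        "failed_prior_supervision": 1,
        "substance_abuse_history": 1,
    }

    def risk_band(history):
        total = sum(w for item, w in ITEMS.items() if history.get(item))
        if total >= 4:
            return "high"
        if total >= 2:
            return "moderate"
        return "low"

    print(risk_band({"prior_offenses_over_3": True,
                     "age_at_first_incident_under_16": True}))  # high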

Perhaps there are additional examples of this phenomenon. Relatively simple statistics applied to baseball and basketball can make the difference in the outcome of a game. Industrial quality control likewise relies on simple applications of data to help complex production processes yield higher-quality products.

 

 
