
Fortnightly Healthtech Update #7

Current Health (formerly Snap40) partners to add axillary temp and spirometry to the parameters it measures.

Detecting antibiotic levels in real-time.

Biofourmis ups its game with FDA clearance for BioVitals to help manage chronic conditions.

MC10 partners with University of Rochester to collect real-world evidence for how people live with Parkinson’s. Reminds me a bit of this collaboration between IBM and Pfizer.

Interesting perspective that I hadn’t considered before: Dated US regulatory policies are endangering the US lead in healthcare technology.

Direct primary care (DPC) might help achieve half of the quadruple aim: clinician satisfaction and patient satisfaction. But lower healthcare costs and higher quality overall? I’m not so sure. It’s also typically positioned as an up-sell to people who already have health insurance. But just thinking out loud, maybe there’s another angle. If you can’t afford a traditional insurance plan, would a low-cost, primary-care-only option with a strong focus on preventative care be better than nothing…?

Sounds like a lawsuit waiting to happen, Rensselaer County launches online emergency room for Medicaid patients. I’m sure the intent is good, to prevent unnecessary ER visits. But, pitching an app called “ER Anywhere” to a patient population might just be asking for trouble.

I expect to see more of this from large employers tired of rising healthcare premiums: Walmart to use Embold Health’s analytics to herd its employees to high quality providers.

Can a simplified cardiac risk score also predict strokes?

In case anyone was still in doubt, new research published in JAMA suggests 25% of all healthcare spending could be waste (free registration required). And that doesn’t include administration. Candidly, a primary care doc once told me the rule of thumb in primary care is 3 admins for every medical practitioner. Their practice at the time had a 5 to 1 ratio! I’m inclined to think that admin overhead is a direct result of the complexity of the payer/reimbursement model. And they say that government managed healthcare would be inefficient…

Best practices for in-hospital rapid response teams (RRTs) include having a dedicated RRT, and the ability for anyone to trigger the RRT without fear of reprisal.

The only thing that surprises me about this is that patients still have to pay some of the cost: Devoted Health to use Apple Watch with Medicare Advantage patients. I would be shocked if most payers/providers aren’t paying for wearables like this in a few years – and financially encouraging patients to use them. If they’re not, it probably means the push for value-based care has failed.

The ECRI Institute releases its top 10 technology hazards for 2020. Not for the first time, the problem of over-alarming makes the list.

In Pennsylvania a patient tragically expired in the ER after being left unattended. A potential market opportunity for wearables perhaps. But unfortunately, hard to make a business case for it that is going to make a hospital CFO jump for joy.

Funding for digital health startups has cooled a tad, but still an estimated $1.3bn in Q3 according to Rock Health.

Fortnightly Healthtech Update #4

I love hearing about startups in emerging nations – I think there is so much potential for cross pollination. Here are five from India focused on blood supply, preeclampsia, kidney health, ECG, and chronic disease detection, among other applications.

The NHS in England shows good results with a pilot for the early detection of sepsis, plans a nationwide roll-out.

A fortnight ago I wrote about Deepmind’s progress in detecting kidney failure. Here’s a bit of pushback, citing some challenges for AI in healthcare in general. This phrase jumped out at me: “(clinicians) also rely heavily on human judgment to diagnose…a level of subjectivity that would be nearly impossible to program into every AI algorithm”. It was well argued almost 10 years ago by very respected clinicians that removing some subjectivity and standardizing care processes (free registration required) would help deliver higher quality care at lower cost.  In other industries, we grow productivity by selectively substituting technology for labor. And we do that in healthcare too. In pockets. Just not enough yet to make high quality, lower cost care repeatable at scale. More on that a little further down the page….

Deepmind, on the other hand, is a half-billion-dollar-deep hole for Alphabet, by the way…

Talking of pushback, Visibly has had to pull its mobile app eye test from the market. The American Optometric Association has apparently lobbied hard against Visibly. Taking a big step back, this might be something of a moral dilemma that we need to figure out over the next 10-20 years. The cost of healthcare is still rising. If that continues, an increasing number of people in the US are going to be priced out of conventional healthcare (i.e. consulting a doctor one-on-one). Telehealth could be an option. But pushing the argument further, what happens when people can’t afford to see a doctor at all, even virtually? Are we really going to deny them the opportunity to have their health evaluated solely by an algorithm, even if that offers a less comprehensive examination than a doctor would? If the only options you can afford are diagnosis by algorithm or no diagnosis at all, which would you choose…?

The American Heart Association is launching a pilot with LifePod Solutions to help care for people with chronic cardiac conditions at home. Also in the remote patient monitoring game, Biotricity reports strong quarter to quarter growth.

Rumors abound that all is not happy in the world of Apple Health. But that hasn’t stopped it expanding the use of Apple Health Records to embrace Allscripts. And in not unexpected news, AliveCor has dropped its KardiaBand accessory for the Apple Watch. AliveCor has been pushing into more specialized territory with its 6-lead ECG anyway. But does a 6-lead ECG take AliveCor away from the direct-to-consumer (D2C) model…? I think it might. Meanwhile, if you do see anyone with both hands on a device – while simultaneously pressing it to their left knee or ankle – you’ll know exactly what they’re doing.

Things seem to be quite peachy at not-for-profit hospitals, with Mayo growing profits by almost 300%.

Researchers are working on a biosensor that uses interstitial fluid and might be used in place of blood draws in the future.

Biobeat gets FDA clearance for cuff-less non-invasive blood pressure. This has potential. First of all, cuff-less non-invasive BP should mean it’s more comfortable for the patient. That could be especially great in the home for chronic conditions. Second, in my experience there aren’t many wireless BP devices cleared for use in the hospital. There are two unknowns for me that are crucial to success down the road. First, does it fit easily into the clinician’s workflow? If not, that’s going to make adoption in the hospital an uphill battle. And second, do the economics work? If it’s more expensive overall than existing approaches, it’s probably not going to fly, however much more comfortable it may be for patients.

In the land of mobile appointment bookings, docs choke on Zocdoc’s new pricing model.

I think Remedy Partners really needs this merger with Signify Health. Remedy Partners is arguably the most important player in Medicare bundled payment pilots. It “owns” more episodes of care than any other entity, and in some cases took on the financial risk instead of the providers. But, most of the cost variation in each bundle is driven by post-acute care (slide 16 onward). That is, what happens after the patient is discharged from the hospital. Without a good way to monitor patients post-discharge, that creates a huge financial risk for Remedy. This merger with Signify Health can help to fix that. 

Two Chairs aims to take the stress out of finding a therapist.

Strong preventative healthcare is woefully lacking in the US. As Amy Brown of Springbuk says, incentives for preventative care are not aligned. Honestly, I think there are limits to how much analytics can help with that. There are two massively fragmented, multi-trillion dollar industries here (payers and providers are two different industries, they just share a value chain). There’s a bigger piece about preventative care that needs to come together in my head; I’ll try to get it written in the next week or two.

And last but not least, Isansys pilot shows that heart rate variability could predict 90 day mortality in patients with cirrhosis of the liver.

Machine learning for IIoT: 4 tips to get you started

The Industrial Internet of Things (IIoT) will bring large volumes of fast-moving data, which presents both challenges and opportunities.  At the risk of stating the obvious, one challenge is making sense of large, complex data sets.  Machine learning approaches can help here, so I’ve got four tips for getting you started with machine learning:

1.  Forget some of what you know about analytics

If you plan to deal with IIoT data, you may need to refresh your thinking about analytics.  Historically, analytics have been a relatively simple and sedate affair.  For example, analysis was often performed on historical data at some point in time after it was generated.  In addition – for better or worse – analytics often mirrored the siloed nature of data.  That is, the integration of data was minimal.  Industrial IoT will bring more data, faster, from a greater variety of sources.  Managing this data complexity to be able to respond to events in a timely way will require a much more automated and frictionless approach to the analytics value chain.  Machine learning is one way to achieve that.  It can be especially powerful with complex data, where patterns are not obvious and it’s difficult – nay, impossible – for humans to formulate and code rules.  Unfortunately, the lack of transparent logic in machine learning can be an obstacle that some people must overcome.  Many engineers just aren’t comfortable with black-box solutions.  Tough, get over it.

2.  Explore machine learning as a technology

The cloud changes everything.  In this particular case, it demolishes the barriers to entry for machine learning.  A new generation of machine learning tools (from BigML, Microsoft, Amazon, and IBM, for example) is cloud-based.  Most offer a free trial, some for an indefinite time period.  They also offer a much more guided, tutorial-style development experience than previous generations of software.  So what’s the cost to learn more and experiment with it…?  It’s your time.  At this point, extensive investigation of machine learning tools prior to selection isn’t strictly necessary.

Here’s how the evaluation process can work:

  • Pick a cloud-based machine learning tool; any one, it doesn’t really matter.
  • Spend a day or two playing with it.
  • If you like it, play some more.
  • If you don’t like it, pick another tool and start over using the experience you’ve already gained.
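
To give a flavor of what that first play session looks like, here’s a minimal, purely illustrative sketch of the basic train-and-score loop using the open-source scikit-learn library and its built-in iris dataset – not any particular cloud tool, but the same workflow those guided tools walk you through.

```python
# A first "play session": train and evaluate a small classifier locally.
# Purely illustrative -- scikit-learn stands in for whichever tool you pick.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a tiny, built-in dataset (150 labeled flower measurements).
X, y = load_iris(return_X_y=True)

# Hold back 30% of the data to test how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Fit a model and score it on the held-out data.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```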

3.  Don’t be fooled – successful machine learning isn’t all data science

True enough, at a technical level, machine learning can appear enigmatic.  Seemingly without rules or logic, the details can be daunting to understand.  But that’s what IT professionals, analysts, and data scientists are for.  Like all successful IT projects, successful machine learning projects do not start and end with IT.  Business and domain expertise are crucial to success.  Consider the application of machine learning to maintenance.  Domain expertise is necessary to identify potential source data to feed the algorithms.  Further, domain expertise is required to interpret and provide context to the output of machine learning.  In short, machine learning applications require a collaborative, cross-functional team.

4.  Consider prescriptive maintenance applications

Many enterprises will be breaking new ground with IIoT applications.  It’s critical that the first wave of IIoT applications deliver a tangible and measurable return on investment.  Re-inventing the approach to asset maintenance provides a clear path to measurable benefits.  Research by ARC’s Ralph Rio shows that the most common approach to maintenance is still simple preventative maintenance.  And yet, as the same ARC research also shows, that is not the optimal approach for the majority of assets.  Maintenance applications that incorporate machine learning are a promising approach for capitalizing on Industrial IoT data.  The potential return on investment (ROI) in predictive maintenance is real, tangible, and relatively immediate – all good things you need in a beachhead project.
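
To make that a little more concrete, here’s a minimal sketch of one common building block for maintenance applications: unsupervised anomaly detection on sensor readings. The data below is synthetic and scikit-learn’s IsolationForest is just one of many possible algorithms – a real project would use historian data and domain-informed features (see tip 3).

```python
# Sketch: flag anomalous sensor readings as candidate maintenance alerts.
# Synthetic data and IsolationForest are illustrative choices only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Pretend historian data: vibration (mm/s) and bearing temperature (deg C)
# for a healthy machine, plus a handful of degraded readings.
healthy = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(500, 2))
degraded = rng.normal(loc=[4.5, 75.0], scale=[0.5, 3.0], size=(5, 2))
readings = np.vstack([healthy, degraded])

# Train on the mixed stream; contamination is a guess at the anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(readings)   # -1 = anomaly, 1 = normal

flagged = readings[labels == -1]
print(f"{len(flagged)} readings flagged for maintenance review")
```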

So those are my four tips – consider it my Christmas gift to you.  And no, you can’t take them back to the store for a refund if you don’t like them…

Shameless plug alert:  This and more in my exceedingly good value research report on machine learning for Industrial IoT.

(Originally published on industrial-iot.com, a blog by ARC Advisory Group analysts)

Two reasons machine learning is warming up for industrial companies

Machine learning isn’t new.  Expert systems were a strong research topic in the 1970s and 1980s and often embodied machine learning approaches.  Machine learning is a subset of predictive analytics – a subset that is highly automated, embedded, and self-modifying.  Enthusiasm for machine learning is currently seeing a strong resurgence, with two factors driving that renewed interest:

Plentiful data.  It’s a popular adage with machine learning experts:  In the long run, a weaker algorithm with lots of training data will outperform a stronger algorithm with less training data.  That’s because machine learning algorithms naturally adapt to produce better results based on the data they are fed, and the feedback they receive.  And clearly, industry is entering an era of plentiful data – data generated by the Industrial Internet of Things (IIoT) will ensure that.  On the personal/consumer side of things, however, that era has already arrived.  For example, in 2012 Google trained a machine learning algorithm to recognize cats by feeding it ten million images.  Today it’s relatively easy to find vast numbers of images, but in the 1980s who had access to such an image library…?  Beyond perhaps a few shady government organizations, nobody.  Similarly, eighteen months ago Facebook reported that users were uploading 350 million images every day.  (Yes, you read that correctly, over a third of a billion images every day.)  Consequently, the ability to find enough relevant training data for many applications is no longer a concern.  In fact, the concern may rapidly switch to how to find the right, or best, training data – but that’s another story…
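
As a rough, small-scale illustration of that adage (not a proof), here’s a purely illustrative sketch that trains the same simple model on progressively larger slices of scikit-learn’s built-in digits dataset; the dataset and model are arbitrary choices, but the trend in accuracy as the training set grows is usually clear.

```python
# Illustrative only: the same simple model trained on more and more data.
# Dataset (scikit-learn's built-in digits) and model choice are arbitrary.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for n in (50, 200, 800, len(X_train)):
    # Re-train from scratch on the first n examples only.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> held-out accuracy {acc:.2f}")
```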

Lower barriers to entry.  The landscape of commercial software and solutions has been changed permanently by two major factors in the last decade or so:  open source and the cloud.  Red Hat – twenty-two years old and counting – was the first company to provide enterprise software using an open source business model.  Other companies have followed Red Hat’s lead, although none have been as commercially successful.  Typically, the enterprise commercial open source business model revolves around a no-fee version of a core software product – the Linux operating system in the case of Red Hat.  This is fully functional software, not a time-limited trial, for example.  However, although the core product is free, revenue is generated from a number of optional services and potential product enhancements.  The key point of the open source model is this:  it makes evaluation and experimentation so much easier.  Literally anyone with an internet connection can download the product and start to use it.  This makes it easy to evaluate, distribute, and propagate the software throughout the organization as desired.

Use of the cloud also significantly lowers the barriers to entry for anyone looking to explore machine learning.  In a similar way to the open source model, cloud-based solutions are very easy for potential customers to explore.  Typically, this just involves registering for a free account on the provider’s website and then starting to develop and evaluate applications.  Usually, online training and educational materials are provided too.  The exact amount of “free” resources available varies by vendor.  Some may limit free evaluation to a certain period, such as thirty days.  Others may limit the number of machine learning models built, or how many times they can be executed, for free.  At the extreme, though, some providers offer a limited form of machine learning capacity free of charge, forever.

Like open source solutions, cloud-based solutions also make it easier – and less risky – for organizations to get started with machine learning applications.  Just show up at the vendor’s website, register, and get started.  Compare both the cloud and open source to the traditionally licensed, on-premises software product: a purchase needs to be made, a license obtained, and software downloaded and installed – a process that could, in many corporations, take weeks, and one that may need to be repeated every time the machine learning application is deployed in a production environment…

My upcoming strategy report on machine learning will review a number of the horizontal machine learning tools and platforms available.  If you can’t wait for that, simply type “machine learning” into your search engine of choice and you’re just five minutes away from getting started.

(Originally published on industrial-iot.com, a blog by ARC Advisory Group analysts)

Re-inventing Healthcare: Cutting Re-admission rates with predictive analytics

Managing unplanned re-admissions is a persistent and enduring problem for healthcare providers.  Analysis of Medicare claims from over a decade ago showed that over 19% of beneficiaries were re-admitted within 30 days.  Attention on this measure increased when the Affordable Care Act introduced penalties for excessive re-admits.  However, many hospitals – including those in South Florida and Texas – are losing millions in revenue because of their inability to meet performance targets.

Carolinas HealthCare System has applied predictive analytics to the problem, using Predixion Software and Premier Inc.  Essentially, by using patient and population data, Carolinas is able to calculate a more timely, more accurate assessment of the re-admit risk.  The hospital can then put in place a post-acute care plan to try and minimize the risk of re-admission.  You can find a brief ten-minute webinar presented by the hospital here.  But, from an analytics, information management, and decision-making perspective, here are the key points:

  • The risk assessment for readmission is now done before the patient examination, not after it. Making that assessment early means there is more time to plan for the most appropriate care after discharge.
  • The risk assessment is now more precise, accurate, and consistent.  In the past, the hospital just categorized patients into two buckets – high risk and low risk.  There are now four bands of risk, so the care team can make a more nuanced assessment of risk and plan accordingly.  Further, the use of Predixion’s predictive analytics software means that far more variables can be considered when determining risk.  We puny humans can only realistically work with a few variables at a time when making a decision.  Predictive analytics allowed more than 40 data points from the EMR, ED, etc. to be used to make a more accurate assessment of risk (a toy sketch of this kind of risk banding follows the list).  Finally, calculating the risk in software meant that Carolinas could avoid any variability introduced by case managers with different experience and skills.
  • The risk assessment is constantly updated.  In practice, the re-admission risk for any individual patient is going to change throughout the care process in the hospital.  So, a patient’s re-admission risk is now recalculated and updated hourly – not just once at the time of admission, as was the situation in the past.
  • The overall accuracy of risk assessment gets better over time.  A software-centered approach means that suggested intervention plans can be built in – so again reducing variability in the quality of care.  But, the data-centric approach means that the efficacy of treatment plans can also be easily measured and adjusted over the long-term.
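
Carolinas’ actual models (built on Predixion’s software) are obviously proprietary, but a toy sketch shows the general shape of the approach: a model trained on historical encounters produces a probability of re-admission, which is then mapped into risk bands and re-scored whenever new data arrives. The features, thresholds, and synthetic data below are invented purely for illustration.

```python
# Toy sketch of risk banding: not Carolinas' or Predixion's actual model.
# Features, thresholds, and the synthetic data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

# Synthetic historical encounters: [age, prior admissions, length of stay].
X = np.column_stack([
    rng.integers(40, 90, size=1000),   # age
    rng.poisson(1.0, size=1000),       # prior admissions in the last year
    rng.integers(1, 15, size=1000),    # length of stay (days)
])
# Synthetic outcome: re-admitted within 30 days (1) or not (0).
y = (rng.random(1000) < 0.1 + 0.05 * X[:, 1]).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def risk_band(prob: float) -> str:
    """Map a predicted probability into one of four (illustrative) bands."""
    if prob < 0.10:
        return "low"
    if prob < 0.25:
        return "moderate"
    if prob < 0.50:
        return "high"
    return "very high"

# Re-score a patient whenever new data arrives (e.g. hourly).
current_patient = np.array([[78, 3, 6]])
prob = model.predict_proba(current_patient)[0, 1]
print(f"Re-admission risk {prob:.0%} -> band: {risk_band(prob)}")
```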

Overall, this data-driven approach to care is a win-win.  It results in higher care quality and better outcomes for the patient.  And Carolinas HealthCare System improves its financial performance too.  This is all possible because more of the risk assessment is now based on hard data, not intuition.

Visual Data Discovery: Eat Lunch, or Be Lunch…?

It’s time.  Already.

Monumental shifts in the software industry often follow a three-phase pattern that inevitably leaves blood on the floor when the dust has settled:

  1. Cheeky young upstart enters the market with a great new idea
  2. Cheeky young upstart starts to rake in serious sales revenue
  3. Established vendors react to nullify the threat and protect their own revenues

Think Netscape and Microsoft. Or MySQL and Oracle – there are plenty of examples.

It’s almost hard to believe, but the still fledgling visual data discovery market is already entering stage 3.  A shake out is inevitable, and inevitably there will be blood on the floor.  The only question is, whose blood?

Of course, if I actually knew the answer to that I’d be a wealthy man. I don’t, and I’m not. But, there are definitely some interesting angles to explore and I’ll be doing that in a series of blogs over the next few months. For example:

  • Is Qliktech, one of the pioneering visual data discovery vendors, struggling, or merely consolidating before it pushes on to bigger and better things? Notably, in Q3 last year, Qlik grew its maintenance revenues by almost three times as much as its license revenues (33% vs. 12%).  The full-year financial report is due on February 20th, so I’ll be trying to get more insight from that.
  • Tableau is reporting its latest financials on February 4th. I love Tableau as a product; it’s just such fun to use. But as a company there are surely challenges ahead. Excellent though Tableau is at visual data discovery, it has no ambitions that I know of to provide a full portfolio of BI solutions. That will become a problem (see below).
  • And then there are the older, long-established BI vendors that have been in the reporting and/or dashboard game for many years:  SAP, Oracle, IBM Cognos, MicroStrategy, and Information Builders, to name just the biggest and best known.  Now that vendors such as Qliktech, Tableau, and TIBCO Spotfire have clearly shown the potential (measured in dollars) of a new class of BI tool, the established vendors all want a piece of the action too.  Hence the introduction of SAP Lumira, MicroStrategy Analytics Desktop, etc. over the last 18 months.  The key question here is when “free and good enough” will trump “license fee for best in class”.

Although still nascent, this market will start to go through some serious upheaval that will play out over the next two or three years.  I’m going to enjoy watching it and I’d like to invite you along for the ride.  Stay tuned!