Let’s play Clue: Who really killed EMC?

I used to love the board game Clue as a kid (or Cluedo, as it's called back home).  Often when you won, you knew with 100% certainty the who, what, and where of the murder before you made your bold pronouncement.  But sometimes, if you thought someone else was close to solving the murder, you had to take an early best guess with a little less certainty.  And that's a bit like where I am with EMC.  Do I know for sure who killed EMC?  No.  But I'm willing to go out on a limb – I think I can guess who killed EMC, where, and with what weapon.

Since the acquisition of EMC by Dell was announced, there's been a bit of a kerfuffle in the Bay State.  There's much hand-wringing that another Boston tech giant is, well, no longer a Boston tech giant.  (EMC is relocating its HQ to Texas.)  People have long memories, and the ghost of DEC is apparently still haunting my neighbors as we approach Halloween.  Truthfully, I'm a bit shocked that Dell is being cast in a bad light – as a party crasher, a vulture, a bit of an Ebenezer Scrooge.  So let me set that straight – who really killed EMC?

It was Amazon, in the cloud, with a commodity disk drive. Here’s how:

  • The amount of data is growing by about 40% a year – doubling roughly every two years.  In an ironic twist, I'll cite numbers from IDC, in research bought and paid for by EMC.  To counter this somewhat, the cost per byte of raw disk storage seems to be halving roughly every three years at the moment.  Bottom line: data volumes are growing faster than unit costs are falling, so money is still being spent on storage.
  • The storage hardware segment of EMC’s business (Information Storage) has struggled for growth.  From EMC’s public financials, revenues grew 4% from 2012 to 2013.  But from 2013 to 2014, growth for this business slowed to only 2%.  And if this data from IDC is accurate (and I have no reason to think it’s not), EMC lost market share and saw revenues decline early this year – particularly in the lucrative storage systems business.
  • Amazon is building out a colossal computing infrastructure using commodity hardware.  James Hamilton notes this in his excellent presentation from re:Invent 2014:  Amazon saw 132% year-over-year growth in data transferred in its S3 storage service, and has over one million active customers on AWS.  Every day, Amazon adds enough capacity to AWS to support a $7bn ecommerce operation – effectively all of Amazon’s business back in 2004, when it was a $7bn company.  How much capacity is that?  I’m not sure, to be honest, but if Amazon’s average sale in 2004 was $30, that’s over 233 million sales transactions that need to be recorded, processed, and supported.  Sounds like a lot of storage to me… And I very much doubt Amazon uses EMC’s premium products for that.  As James notes, Amazon typically designs its own servers and storage racks.
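The back-of-envelope arithmetic above is easy to check.  A minimal sketch, using the figures assumed in the text (40% annual data growth, cost per byte halving every three years, a $30 average sale in 2004):

```python
import math

# Data growing ~40% a year: how long until it doubles?
growth = 0.40
doubling_time = math.log(2) / math.log(1 + growth)  # ~2.06 years

# Cost per byte halving every ~3 years: the annual cost multiplier
annual_cost_multiplier = 0.5 ** (1 / 3)  # ~0.79, i.e. ~21% cheaper each year

# Net effect on total storage spend: volume up 40%, unit cost down ~21%
spend_change = (1 + growth) * annual_cost_multiplier - 1  # ~ +11% a year

# A $7bn e-commerce operation at ~$30 per sale
transactions = 7e9 / 30  # ~233 million sales transactions

print(f"doubling time: {doubling_time:.2f} years")
print(f"spend change:  {spend_change:+.1%} per year")
print(f"transactions:  {transactions / 1e6:.0f} million")
```

So even with unit costs falling steadily, total storage spend still grows on the order of 10% a year – the money just isn't necessarily going to premium storage arrays.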

So, I rest my case.  What used to be stored on EMC systems in corporate data centers is now being stored on cheap disks in Amazon’s cloud.  Amazon did it, Amazon killed EMC.

(Originally published on industrial-iot.com, a blog by ARC Advisory Group analysts)

Re-inventing Healthcare: We need the college scorecard for healthcare

College is just around the corner for my kids.  It’s a scary time – not least because I was fortunate enough to get my undergraduate degree in the UK at a time when the government paid for it!  Oh, happy days…

In the US – and even the UK now – people pay for college out of their own pocket.  But that doesn’t always mean you get what you pay for.  As I’ve researched colleges with my eldest, it’s been very hard to make a meaningful like-for-like comparison, even using so-called college comparison websites.  For example, common measures like the six-year graduation rate are close to worthless.  So I was excited to see the federal government step in and launch its own comparison site recently.  I’m sure it will attract criticism, especially from those heavily invested in the status quo.

But now we need the same for healthcare.  Without transparency into healthcare there will be no change, and without change, the US healthcare system is unsustainable.  That should scare healthcare providers as much as citizens.  Here’s a scenario – imagine I need a total knee replacement.  (I don’t, but those knees have seen a lot of soccer…)  Here’s the problem:

  • How do I choose a knee specialist to perform the surgery?  Where’s the public data – yes, actual data – to help me, as a consumer, sort the best from the good from the mediocre?  It doesn’t exist.
  • Where is the public data to help me compare costs – the cost of the surgeon, and the cost of the hospital or facility for a start?  It doesn’t exist.

Caleb Stowell, MD and Christina Akerman, MD are of course right when they say that better value will come from improving outcomes.  But, as a consumer, I need visibility into both outcomes and costs to make wise decisions about my healthcare.  Sadly, the government’s Hospital Compare website doesn’t even come close to providing what we need.  Without such visibility, there is no real consumer choice, and no competition among providers.  Without competition, healthcare costs will continue to spiral out of control.  That’s bad for us, but it’s worse for our children.

Two reasons machine learning is warming up for industrial companies

Machine learning isn’t new.  Expert systems were a strong research topic in the 1970s and 1980s, and often embodied machine learning approaches.  Machine learning is a subset of predictive analytics – a subset that is highly automated, embedded, and self-modifying.  Currently, enthusiasm for machine learning is seeing a strong resurgence, with two factors driving that renewed interest:

Plentiful data.  It’s a popular adage among machine learning experts:  In the long run, a weaker algorithm with lots of training data will outperform a stronger algorithm with less training data.  That’s because machine learning algorithms naturally adapt to produce better results based on the data they are fed and the feedback they receive.  And clearly, industry is entering an era of plentiful data – data generated by the Industrial Internet of Things (IIoT) will ensure that.  On the personal/consumer side of things, however, that era has already arrived.  For example, in 2012 Google trained a machine learning algorithm to recognize cats by feeding it ten million images of cats.  Today it’s relatively easy to find vast numbers of images, but in the 1980s who had access to such an image library?  Beyond perhaps a few shady government organizations, nobody.  By contrast, eighteen months ago Facebook reported that users were uploading 350 million images every day.  (Yes, you read that correctly – over a third of a billion images every day.)  Consequently, the ability to find enough relevant training data is no longer a concern for many applications.  In fact, the concern may rapidly switch to how you find the right, or best, training data – but that’s another story…
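The adage is easy to demonstrate with a toy experiment.  Below is a minimal sketch – synthetic data, and a deliberately weak learner (1-nearest-neighbour) that improves simply because it is fed more training points.  The class centres, spread, and sample sizes are all illustrative assumptions:

```python
import random

random.seed(42)

def sample(n):
    """Draw n points per class: class 0 around (0,0), class 1 around (2,2)."""
    data = []
    for label, (cx, cy) in enumerate([(0.0, 0.0), (2.0, 2.0)]):
        for _ in range(n):
            data.append((random.gauss(cx, 1.2), random.gauss(cy, 1.2), label))
    return data

def predict(train, x, y):
    """1-NN: return the label of the closest training point."""
    return min(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[2]

test = sample(500)  # a fixed held-out test set

def accuracy(train):
    hits = sum(predict(train, x, y) == label for x, y, label in test)
    return hits / len(test)

acc_small = accuracy(sample(5))    # 10 training points in total
acc_large = accuracy(sample(500))  # 1,000 training points in total
print(f"10 training points:    {acc_small:.2f}")
print(f"1,000 training points: {acc_large:.2f}")
```

The algorithm never changes – only the volume of training data does – yet accuracy climbs towards the best this simple learner can do on overlapping classes.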

Lower barriers to entry.  The landscape of commercial software and solutions has been changed permanently by two major factors in the last decade or so:  Open source and the cloud.  Red Hat – twenty-two years old and counting – was the first company to provide enterprise software using an open source business model.  Other companies have followed Red Hat’s lead, although none have been as commercially successful.  Typically, the enterprise commercial open source business model revolves around a no-fee version of a core software product – the Linux operating system in the case of Red Hat.  This is fully functional software, not a time-limited trial.  Although the core product is free, revenue is generated from a number of optional services and potential product enhancements.  The key point of the open source model is this:  It makes evaluation and experimentation so much easier.  Literally anyone with an internet connection can download the product and start to use it.  This makes it easy to evaluate, distribute, and propagate the software throughout the organization as desired.

Use of the cloud also significantly lowers the barriers to entry for anyone looking to explore machine learning.  In a similar way to the open source model, cloud-based solutions are very easy for potential customers to explore.  Typically, this just involves registering for a free account on the provider’s website, then starting to develop and evaluate applications.  Usually, online training and educational materials are provided too.  The exact amount of “free” resource available varies by vendor:  Some limit free evaluation to a certain period, such as thirty days; others limit the number of machine learning models built, or how many times they can be executed, for free.  At the extreme, some providers offer a limited form of machine learning capacity free of charge, forever.

Like open source solutions, cloud-based solutions also make it easier – and less risky – for organizations to get started with machine learning applications.  Just show up at the vendor’s website, register, and get started.  Compare both the cloud and open source to the traditionally licensed, on-premise installed software product:  A purchase needs to be made, a license obtained, software downloaded and installed – a process that could, in many corporations, take weeks.  A process that may need to be repeated every time the machine learning application is deployed in a production environment…

My upcoming strategy report on machine learning will review a number of the horizontal machine learning tools and platforms available.  If you can’t wait for that, simply type “machine learning” into your search engine of choice and you’re just five minutes away from getting started.

(Originally published on industrial-iot.com, a blog by ARC Advisory Group analysts)

Re-inventing Healthcare: Cutting Re-admission rates with predictive analytics

Managing unplanned re-admissions is a persistent problem for healthcare providers.  Analysis of Medicare claims from over a decade ago showed that over 19% of beneficiaries were re-admitted within 30 days.  Attention to this measure increased when the Affordable Care Act introduced penalties for excessive re-admissions.  Yet many hospitals – including those in South Florida and Texas – are losing millions in revenue because of their inability to meet performance targets.

Carolinas HealthCare System has applied predictive analytics to the problem, using Predixion Software and Premier Inc.  Essentially, by using patient and population data, Carolinas is able to calculate a more timely, more accurate assessment of re-admission risk.  The hospital can then put in place a post-acute care plan to minimize the risk of re-admission.  You can find a brief ten-minute webinar presented by the hospital here.  But from an analytics, information management, and decision-making perspective, here are the key points:

  • The risk assessment for readmission is now done before the patient examination, not after it. Making that assessment early means there is more time to plan for the most appropriate care after discharge.
  • The risk assessment is now more precise, accurate, and consistent.  In the past, the hospital categorized patients into just two buckets – high risk and low risk.  There are now four bands of risk, so the care team can make a more nuanced assessment and plan accordingly.  Further, the use of Predixion’s predictive analytics software means that far more variables can be considered when determining risk.  We puny humans can only realistically weigh a few variables well when making a decision; predictive analytics allowed more than 40 data points from the EMR, ED, etc. to be used to make a more accurate assessment of risk.  Finally, calculating the risk in software meant that Carolinas could avoid the variability introduced by case managers with different experience and skills.
  • The risk assessment is constantly updated.  In practice, the re-admission risk for any individual patient changes throughout the care process in the hospital.  So a patient’s re-admission risk is now recalculated and updated hourly – not just once at the time of admission, as was the situation in the past.
  • The overall accuracy of risk assessment gets better over time.  A software-centered approach means that suggested intervention plans can be built in – again reducing variability in the quality of care.  And the data-centric approach means that the efficacy of treatment plans can be easily measured and adjusted over the long term.
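To make the four-band idea concrete, here is a minimal sketch of how a predicted re-admission probability might map to a risk band.  The thresholds and band names are purely illustrative assumptions – the actual Predixion model and Carolinas’ bands are not public:

```python
# Hypothetical four-band re-admission risk scheme. Each entry is
# (upper bound on predicted probability, band name) - illustrative only.
BANDS = [
    (0.15, "low"),
    (0.30, "moderate"),
    (0.50, "high"),
    (1.00, "very high"),
]

def risk_band(probability):
    """Map a predicted 30-day re-admission probability to a risk band."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    for upper, band in BANDS:
        if probability <= upper:
            return band

# Recalculated hourly as new EMR/ED data arrives, a patient's band can
# shift during the stay - e.g. from "moderate" on admission to "high"
# after a complication - triggering a different post-acute care plan.
print(risk_band(0.22))  # -> moderate
print(risk_band(0.55))  # -> very high
```

The point of banding rather than a raw score is exactly the consistency described above: every case manager sees the same four categories, each tied to a standard intervention plan.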

Overall, this data-driven approach to care is a win-win.  It results in higher care quality and better outcomes for the patient.  And Carolinas HealthCare System improves its financial performance too.  This is all possible because more of the risk assessment is now based on hard data, not intuition.

The cloud changes everything – if you’ll let it…

In my experience, technologies are rarely adopted by corporations as rapidly as expected – or maybe it’s just as rapidly as vendors would like them to be…

Often, the challenge of organizational culture is overlooked.  Few people enthusiastically embrace change, and we’ve surely all experienced new releases or upgrades that really were detrimental.  Change will surely be more difficult for some companies than others when it comes to the Industrial Internet of Things and the adoption of the cloud that will come with that.

But, change is inevitable, like it or not.  At this point, I’m thinking there are really only two types of companies when it comes to cloud adoption:

  1. Companies that have officially blessed putting some data and applications in the cloud, and created policies around that.
  2. Companies that have policies explicitly forbidding use of the cloud – but whose employees are secretly using the cloud anyway!

In the second case, why would those employees commit what, in many cases, is technically a dismissible offense?  It’s usually because some cloud service makes their job much easier to do, whether it’s mere cloud storage or a more sophisticated Software-as-a-Service application.  It’s that simple.

The more I learn and think about the cloud, the more convinced I am that it’s a game changer:

  • A year ago I wrote about how solutions like Amazon’s Redshift had the potential to completely change how business analysts, data warehouse engineers, and even progressive CIOs conceive, design, and execute business intelligence and analytics projects (The Disposable Data Warehouse:  How Will You Use Yours?)
  • At SAP SAPPHIRE NOW in May this year I learned how the cloud helped T-Mobile to complete a proof-of-concept in two weeks, instead of waiting 4 months just to procure the hardware to run the same proof-of-concept on-premise.  In this example, the cloud fosters agility and can help to cut the time needed to bring new products and services to market.
  • In one of my current research projects I’m taking a deeper look at the red-hot world of machine learning.  (So red-hot there are more than 700 startups apparently…).  In this instance, I’m realizing how the cloud can completely change the way enterprises choose software solutions.  Many of the machine learning startups are cloud-based.  That is, users develop, test and deploy their machine learning applications in the cloud.  These solutions typically provide a robust framework to help users get started with their applications quickly.  In this way, the cloud can make the evaluation cycle so much faster for potential buyers:  Pick a cloud-based solution, and try it out for a couple of days.  If you like it, move towards a production application (or a more fully-fledged prototype).  If you don’t like it, just move on – pick another cloud-based machine learning tool and start over…

(Originally published on industrial-iot.com, a blog by ARC Advisory Group analysts)

Oh puleeze, let’s make the Industrial IoT better than this…

The consumer internet of things largely lives in a parallel universe to the industrial IoT.  Sure, there will be overlap in the supporting infrastructure – networks, IoT platforms, etc. – but the applications are shaping up to be very different.  A big chunk of consumer IoT is focused on wearables – gadgets we wear that enhance our lives in some way, such as fitness trackers or health monitors.  Nevertheless, I was amused to read this article over the weekend:  “The wearable you’ll actually wear, because it doesn’t need charging”.  Wow!  Imagine buying a device that adds so little value that charging it every day actually becomes a chore.  Instead of enhancing your health, it becomes a pain in the <insert body part of your choice>.

I’ll tell you right now:  If our first efforts at Industrial IoT are so inept, we’ll kill the opportunity to drive a new wave of efficiency and introduce new business models and revenue streams stone dead.  For at least five years, maybe ten.  Fortunately, it looks like Industrial IoT is faring better than consumer IoT, with early examples of success from KAESER KOMPRESSOREN SE, the trains in Oslo, and other examples from the ARC forum in February.  Ralph Rio also writes more about the predictive maintenance opportunity here.

IIoT projects share many similarities with any other IT project.  It’s early days, but so far I have 6 simple guidelines for anyone contemplating an Industrial IoT project:

  1. Start small – your first project is really a proof-of-concept.
  2. Focus on a real, living, breathing, business problem.
  3. Use a multi-disciplinary team – you won’t get very far without one…
  4. Think about what data you have, right now, that you can leverage.  Or, what data can you get, easily…?
  5. Which potential projects promise quick and easy value?  Pick one of those.
  6. Make sure you measure ROI so that you have fuel for future projects.

Motherhood and apple pie in many ways, but important to remember anyway.  What additional guidelines do you have?  Add a comment and share please.

(Originally published on industrial-iot.com, a blog by ARC Advisory Group analysts)

Will Social Media Marketing Kill Your Company…?

No denying social media is here to stay – it’s certainly got marketers hot and giddy.  And why not?  Twitter and the like provide wonderful channels to engage and subtly influence buyers early in the sales cycle.  But the reality is, there is only so much marketing budget to go around.  And putting too much of it into the social media bucket, at the expense of other areas, will kill your company.  The 2×2 below shows why.

The optimal situation for your company is to be in the top right quadrant.  That is, both your positioning and marketing messages are well thought through, and your use of social media is strong too.  In other words, you’ve defined a great value proposition and you’re simply awesome at getting the word out. Congrats, you’re well placed to coin it in.

In the top left quadrant things are not optimal, but still pretty good.  In this situation, the messaging and market positioning are still solid, but your use of social media is weaker.  That’s still a pretty good position because you inherently have a strong value proposition, it’s just that your ability to get the message out is lacking a bit without better use of social media.

Moving down, the lower left quadrant is where things start to seriously unravel.  Not only is your messaging poor, but you’re pretty lame at social media too.  But things could be worse – and if you live in the bottom right quadrant, they are!  It may be counter-intuitive, but if you’re weak at messaging yet great at using social media, your situation is actually pretty dire.  Here’s why:  If you haven’t understood the buyer’s needs, you can’t build a compelling value proposition.  Without a compelling value proposition, there’s no way you can craft messages that will appeal to buyers and bring them into the sales funnel.  And no amount of social media is going to change that.

Message vs. Social – the 2×2:

                      Weak social media      Strong social media
  Strong messaging    Pretty good            Optimal
  Weak messaging      Unraveling             Dire

As the 2×2 matrix shows, with good social media skills all you’re doing is getting a bad message into the market very effectively.  What better way to kill your company with marketing!  Unfortunately, this isn’t as rare as you might think.  I’ve seen a couple of examples recently where social media marketers load up with abbreviations and creative hashtags to squeeze every ounce of value out of their tweets.  But when you get to their homepage, the messaging just isn’t right.  Don’t get me wrong – you need great social marketers.  But equally, you need people who are passionate about positioning and messaging.  Every company needs someone who won’t sleep well at night until they’ve found the right 15 words to define their company’s unique value proposition – and who then thinks deeply, works collaboratively, and persists in building out the messages to support that.  ykwim.

Visual Data Discovery: Eat Lunch, or Be Lunch…?

It’s time.  Already.

Monumental shifts in the software industry often follow a three-phase pattern that inevitably leaves blood on the floor once the dust has settled:

  1. Cheeky young upstart enters the market with a great new idea
  2. Cheeky young upstart starts to rake in serious sales revenue
  3. Established vendors react to nullify the threat and protect their own revenues

Think Netscape and Microsoft. Or MySQL and Oracle – there are plenty of examples.

It’s almost hard to believe, but the still-fledgling visual data discovery market is already entering phase 3.  A shake-out is inevitable, and inevitably there will be blood on the floor.  The only question is:  Whose blood?

Of course, if I actually knew the answer to that I’d be a wealthy man. I don’t, and I’m not. But, there are definitely some interesting angles to explore and I’ll be doing that in a series of blogs over the next few months. For example:

  • Is Qliktech, one of the pioneering visual data discovery vendors, struggling, or merely consolidating before it pushes on to bigger and better things?  Notably, in Q3 last year, Qlik grew its maintenance revenues almost three times as fast as license revenues (33% vs. 12%).  The full-year financial report is out on February 20th, so I’ll be trying to glean more insight from that.
  • Tableau are reporting their latest financials on February 4th. I love Tableau as a product, it’s just such fun to use. But as a company there are surely challenges ahead. Excellent though Tableau is at visual data discovery, it has no ambitions that I know of to provide a full portfolio of BI solutions. That will become a problem (see below).
  • And then there are the older, long-established BI vendors that have been in the reporting and/or dashboard game for many years:  SAP, Oracle, IBM Cognos, MicroStrategy, and Information Builders, to name just the biggest and best known.  Now that vendors such as Qliktech, Tableau, and TIBCO Spotfire have clearly shown the potential (measured in dollars) of a new class of BI tool, the established vendors all want a piece of the action too.  Hence the introduction of SAP Lumira, MicroStrategy Analytics Desktop, etc. over the last 18 months.  The key question here is:  When will “free and good enough” trump “license fee for best in class”?

Although still nascent, this market will start to go through some serious upheaval that will play out over the next two or three years.  I’m going to enjoy watching it and I’d like to invite you along for the ride.  Stay tuned!