A better approach to Data Management, from Lakes to Watersheds

 

As a data scientist, I have a vested interest in how data is managed in systems. After all, better data management means I can bring more value to the table. But I've come to learn that it's not how an individual system manages data, but how well the enterprise manages data holistically, that amplifies the value of a data scientist.

Many organizations today create data lakes to support the work of data scientists and analytics. At the most basic level, data lakes are big places to store lots of data. Instead of searching for needed data across enterprise servers, users pour copies into one repository – with one access point, one set of firewall rules (at least to get in), one password (hallelujah) … just ONE for a whole bunch of things.

Data scientists and Big Data folks love this; the more data, the better. And enterprises feel an urgency to get everyone to participate and send all data to the data lake. But, this doesn’t solve the problem of holistic data management. What happens, after all, when people keep copies of data that are not in sync? Which version becomes the “right” data source, or the best one?

If everyone is pouring in everything they have, how do you know what’s good vs. what’s, well, scum?

I’m not pointing out anything new here. Data governance is a known issue with data lakes, but lots of things relegated to “known issues” never get resolved. Known issues are unfun and unsexy to work on, so they get tabled, back-burnered, set aside.

Organizations usually have good intentions to go back and address known issues at some point, but too often, these challenges end up paving the road to Technical Debt Hell. Or, in the case of data lakes, making the lake so dirty that people stop trusting it.

To avoid this scenario, we need to go to the source and expand our mental model from talking about systems that collect data, like data lakes, to talking about systems that support the flow of data. I propose a different mental model: data watersheds.

In North America, we use the term “watershed” to refer to drainage basins that encompass all waters that flow into a river and, ultimately, into the ocean or a lake. With this frame of reference, let’s contrast this “data flow” model to a traditional collection model.

In a data collection model, data analytics professionals work to get all enterprise systems contributing their raw data to a data lake. This is good, because it connects what was once systematically disconnected and makes it available at a critical mass, enabling comparative and predictive analytics. However, this data remains contextually disconnected.

Here is an extremely simplified view of four potential systematically and contextually disconnected enterprise systems: Customer Relationship Management (CRM), Finance/Accounting, Human Resources Information System (HRIS), and Supply Chain Management (SCM).

CRM
  • Stores full client names and system-generated client IDs
  • Stores products purchased; field manually updated by the account manager
  • Stores account manager names
  • Goal: Enable each account manager to track the product/contract history of each client

Finance/Accounting
  • Stores abbreviated customer names (the tool has a too-short character limit) and customer account numbers
  • Stores a list of all company locations; uses 3-digit country codes
  • Stores abbreviated vendor names (same too-short character limit), vendor account numbers and vendor IDs with three leading zeros
  • Stores Business Unit (BU) names and BU IDs
  • Goal: Track all income, expenses and assets of the company

HRIS
  • Stores all employee names and employee IDs
  • Stores a list of all company locations with employee assignments; uses 2-digit country codes
  • Goal: Manage key details on employees

SCM
  • Maintains the product list and system-generated product IDs
  • Stores vendor names, vendor account numbers and vendor IDs (no leading zeros)
  • Stores material IDs and names
  • Goal: Track all vendors, materials from vendors, Work in Progress (WIP), and final products

 

Let’s assume that each system has captured data to support its own reporting and then sends daily copies to a data lake. That means four major enterprise systems have figured out multiple privacy and security requirements to contribute to the data lake. I would consider this a successful data collection model.

Note, however, that the four systems have overlap in field names, and the content in each area is just a little off — not so far as to make the data unusable, but enough to make it difficult. (I also intentionally left out a good connection between CRM Clients and Finance/Accounting Customers in my example, because stuff like that happens when systems are managed individually. And while various Extract, Transform and Load (ETL) tools or Semantic layers could help, this is beyond CRM Client = Finance/Accounting Customer.)

If you think about customer lists, it’s not unreasonable for there to be hundreds, if not thousands, of customer records that, in this example, need to be reconciled with client names. This will have a significant impact on analytics.

Take an ad hoc operational example: Suppose a vendor can only provide half of the materials they normally provide for a key product. The company wants to prioritize delivery to customers who pay early, and they want to have account managers call all others and warn them of a delay. That should be easy to do, but because we are missing context between CRM and Finance/Accounting, and the CRM system is manually updated with products purchased, some poor employee will be staying late to do a lot of reconciling and create that context after the fact.

I’ve heard plenty of data professionals comment something like, “I spend 90% of my time cleaning data and 10% analyzing it on a project.” And the responses I hear are not, “Whaaaa?? You’re doing something wrong.” They are, “Oh man, I sooooo know what you mean.”

Whaaaa?? We’re doing something wrong.

The time analytics professionals spend cleaning and stitching data together is time not spent discovering correlations, connections and/or causation indicators that turn data into information and knowledge. This is ridiculous because today’s technologies can do so much of this work for us.

The point of a data watershed approach is to eliminate the missing context. The data watershed is not a technical model for how to get data into a lake; it’s a governance/technical model that ensures data has context when it enters a source system, and that context flows into the data lake.

If we return to my four example systems and take a watershed approach, the interaction looks more like this, with the arrows indicating how the data feeds each system:


While many organizations do have data flowing from system to system, they often don’t have connections between every system. Additionally, it’s not always clear who should “own” the master list for a field.

In my view, the system that maintains the most metadata around a field is the system that "owns" the master data for that field. In my example above, both the HRIS and Finance/Accounting systems maintain location lists, but they use different country codes. Finance/Accounting will also maintain depreciation schedules or lease agreements on those locations, so Finance/Accounting wins. The HRIS, unless there is a tool limitation, should mirror and, preferably, be fed the location data from the Finance/Accounting system.

In this example, when each system sends its data to a data lake, it has natural context. Data analytics professionals can grab any field and know the data is going to match – though I would argue that best practice would be to use the field from the “master” system. However, if everything is working right, this should be irrelevant.
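To make the "owner feeds the mirrors" idea concrete, here is a minimal Python sketch of how HRIS location records could be conformed to the Finance/Accounting master before either system sends copies to the lake. The field names, location IDs and country-code mapping are hypothetical, invented purely for illustration.

```python
# Illustrative sketch: Finance/Accounting "owns" the location master list
# (3-letter country codes); the HRIS copy (2-letter codes) is re-keyed to
# the master before anything lands in the data lake.
# All field names and codes here are hypothetical examples.

ISO2_TO_ISO3 = {"US": "USA", "DE": "DEU", "GB": "GBR"}  # small subset for illustration

finance_locations = [
    {"location_id": "LOC-001", "city": "Chicago", "country": "USA"},
    {"location_id": "LOC-002", "city": "Berlin", "country": "DEU"},
]

hris_locations = [
    {"city": "Chicago", "country": "US", "employees": 120},
    {"city": "Berlin", "country": "DE", "employees": 45},
]

def conform_to_master(hris_rows, master_rows):
    """Attach the master location_id to each HRIS row by matching on city
    plus the country code translated to the master's 3-letter form."""
    master_index = {(m["city"], m["country"]): m["location_id"] for m in master_rows}
    conformed = []
    for row in hris_rows:
        key = (row["city"], ISO2_TO_ISO3[row["country"]])
        conformed.append({**row, "country": key[1], "location_id": master_index[key]})
    return conformed

for rec in conform_to_master(hris_locations, finance_locations):
    print(rec["location_id"], rec["city"], rec["employees"])
```

With a feed like this in place, an employee count pulled from HRIS data and a lease cost pulled from Finance/Accounting data join cleanly on the same location ID.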

Since a data watershed is a governance/technical model, it addresses, not just how data flows, but how it’s governed. This stewardship requires cross-departmental collaboration and accountability. The processes are neither new nor necessarily difficult – but the execution can be complex. The result is worth the effort though, as all enterprise data supports advanced analytics.

The governance model I picture is an amalgamation of DevOps – the merging of software development and IT operations – and the United Federation of Planets (UFP) from “Star Trek.”

By putting data management and data analytics together in the same way the industry has combined software developers and IT operations, there is less opportunity for conflicting priorities. And, any differences must be reconciled if the project hopes to succeed.

Beyond the DevOps paradigm, the reason the governance model I like best is the UFP – and not just because I get to drop a Trekkie reference – is that it is the government of a large fictional universe, built on the best practices and known failures of our own real-world government structures.

The UFP has a central leadership body, an advising cabinet and semiautonomous member states. I think this setup is flexible enough to work with multiple organizational designs and enables holistic data management while addressing the nuances of individual systems.

I would expect the “President of the Federation” to be a Chief Information, Technology, Data, Analytics, etc. Officer. The “Cabinet” would be made up of Master Data Management (MDM), Records and Retention, Legal, HR, IT Operations, etc. And the “Council” members would be the analytics professionals from all the data-generating and -consuming business units in the organization.

And, it’s this last part – a sort of Vulcan Bill of Rights – I feel the strongest about:

Whoever is responsible for providing the analytics should be included in the governance of the data. Those who have felt the pain of munging data know what needs to change – and they need to be empowered to change it.

Data watersheds represent an important shift in thinking. By expanding the data lake model to include the management of enterprise data at its source, we change the conversation to include data governance in the same breath as data analytics — always.

With this approach, data governance isn’t a “known issue” to be addressed by some and tabled by others; it’s an integral part of the paradigm. And while it may take more work to implement at the outset, the dividends from making the commitment are immense: Data in context.

From hysteria to reality: Risk-Based Transformation (RBT)


The digital movement is real. Consumers now possess more content at their fingertips than ever before, and it has impacted how we do business. Companies like Airbnb, Uber and Waze have disrupted typical business models, forcing established players in different industries to find ways to stay relevant in the ever-emerging digital age. This post is not about that. Well, not in the strictest sense.  There are countless articles explaining the value of being digital. On the other hand, there are very few articles about how to get there. Let’s explore how to get there together, through an approach that I have named Risk-Based Transformation. RBT’s strength is that it puts technology, application, information and business into one equation.

An approach that fits your specific needs

I’m relocating very soon, and with that comes the joys of a cross-country journey. Being the planning type, I started plotting my journey. I didn’t really know how to start, so I went to various websites to calculate drive times. I even found one that would give you a suggested route based on a number of inputs. These were great tools but they were not able to account for some of my real struggles, like how far is too far to drive with a 5- and 3-year-old.

Where are the best rest stops where we can “burn” energy — ones that have a playground or a place to run? (After being cooped up in a car for hours, getting exercise is important!) How about family- and pet-friendly places to visit along the way to break up the trip? What about the zig-zag visits we need to make to see family?

The list goes on. So while I was able to use these tools to create a route, it wasn’t one that really addressed any of the questions that were on my mind. Organizations of all sizes and across all industries are on this digital journey but often the map to get there is too broad, too generic, and doesn’t provide a clear path based on your unique needs.

A different approach is needed, one in which you can benefit from the experience of others, whilst taking the uniqueness of your business into account. Like planning a trip, it’s good to use outside views in particular to give that wider industry view; however, that’s only a piece of the puzzle. Each business has its own culture, struggles and goals that bring a unique perspective.

RBT framework

To help with this process, I have created a framework for RBT. At a high level, RBT takes into account your current technology (infrastructure), application footprint, value of the information, and risk to the business, weighted from left to right, from least to highest. This framework gives a sense of where to start and where the smart spend is. See the flow below:

Risk-Based Transformation

Following this left to right, you can add or remove evaluation factors based on your needs. Each chevron has a particular view, in a vacuum if you will, so the technology is rated based only on itself. It gains its context as you move through each chevron. This will give you a final score. The higher the score, the higher the risk to the business.

Depending on your circumstances, you can approach it David Letterman style and take your top 10 list of transformation candidates and run it through the next logic flow (watch for a future blog on how to determine treatment methodology). Or, as we did with a client recently, you can start with your top 50 applications. The point is to get to a place that enables you to start making informed next steps that meet your needs and budget to get the most “bang” for your investment.

The idea behind this framework is to use data in the right context to present an informed view. For example, you can build your questionnaires on SharePoint or Slack or another collaboration platform that also allows the creation of dashboard views. You can build dashboards in Excel, Access, MySQL or whatever technology you’re comfortable with in order to build an informed data-driven view, evaluating risk against transformation objectives. The key is that you need to assign values to questions in order to calculate consistent measurements across the board.
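As a sketch of that "assign values to questions" step, here is one hypothetical way to weight the chevrons and rank transformation candidates in Python. The area names, weights and ratings below are invented for illustration; they are not a prescribed RBT scale.

```python
# Illustrative sketch of the RBT scoring idea: each evaluation area
# (chevron) is scored on its own, then weighted from left to right,
# infrastructure lowest and business risk highest.
# Weights, areas and answer values are hypothetical.

WEIGHTS = {
    "technology": 1.0,      # infrastructure, least weight
    "application": 2.0,
    "information": 3.0,
    "business_risk": 4.0,   # highest weight
}

def rbt_score(ratings):
    """Combine per-area ratings (e.g. 1 = healthy .. 5 = poor) into a
    single weighted score. Higher score = higher risk to the business."""
    return sum(WEIGHTS[area] * rating for area, rating in ratings.items())

apps = {
    "billing":  {"technology": 4, "application": 3, "information": 5, "business_risk": 5},
    "intranet": {"technology": 5, "application": 2, "information": 1, "business_risk": 1},
}

# Rank candidates, riskiest first, to build a "top N" transformation list.
ranked = sorted(apps, key=lambda name: rbt_score(apps[name]), reverse=True)
print(ranked)  # billing outranks intranet despite intranet's older hardware
```

Note how the weighting reproduces the point made later in this section: the app with the oldest technology is not automatically the riskiest once information value and business risk are factored in.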

Service management example

Let’s take service management as an example. Up front you would need to determine what “good” looks like, and then based on that, have questions like these below answered:

Service Management

These questions could be answered by IT support, application support, business owners, the life cycle management group, or other relevant groups. When we ran through the first iterations of this framework, we had our client fill it out first. Then we filled it out based on our data points. Our data points looked different, as it was an outsourcing client whose IT we owned. We had an in-a-vacuum view of what we had both inherited, the equipment from the existing estate that had been transferred to us, and what we had newly built.

We also had access to the systems that the client did not have, as they no longer had root access to these systems. The client’s context included future plans for life cycle, as they owned life cycle management. With those combined views, we had a broader sense of the environment. This methodology could be used with business units, allowing them to give their view of these systems, which gave IT an even more rounded view because it enabled us to see how the client (business) saw their environment versus how we (IT support) saw it.

That data was then normalized to give a joint view for senior leadership. The idea is that this became a jointly owned view, backed up with data, of the way forward that IT leadership could confidently stand behind. The interesting part is that although the server estate was 5 to 10 years old, we realized that upgrading the infrastructure was not the smartest place to start. In fact, the actual hardware was determined to be the lowest risk. The highest risk was storage, which was quite a surprise to all.

Risk-Based Transformation

A living framework

Many years ago, when you plotted your cross-country drive on a paper map, the route was based on information from a fixed point in time: it was the best route when you drew the line on the map. Now, personal navigation devices hooked into real-time data change that course based on current conditions. In the same way, the RBT model is a living framework; it should have regular iterations in order to make course corrections as you go forward.

The intent with this framework and thinking is to build a context that makes sense for your needs, and then present data in context that allows for better planning. That better planning should lead to a more efficient digital journey as we all continue to stay with, or ahead of, the curve.

If you have enjoyed this, look forward to my next post. There I will detail how the RBT framework is applied and the treatment buckets methodology.

Insurers’ appreciation for orthogonal data


It is anticipated that within the next three years, on average every human being on the planet will create about 1.7 megabytes of new information every second. This includes 40,000 Google searches every second, 31 million Facebook messages every minute, and over 400,000 hours of new YouTube videos every day.

At first glance, the importance of this data may not be obvious. But for the insurance industry, tapping into this and other kinds of orthogonal (statistically independent) data is key to finding new ways to create value.

A clearer picture of individual risk

By paying closer attention to the data people create as part of their everyday lives, insurance companies can better anticipate needs, personalize offers, tailor customer experience and streamline claims. Using a wider variety of information is especially useful in better understanding and managing individual risks. For instance, behavior data from sensors, shared through an opt-in customer engagement program, provides insurers with the insight needed to initially assess and price the risk, and mitigate or even prevent subsequent losses.

Take, for example, the use of telematics data from sensors embedded in cars and smartphones. When shared, the raw telemetry data provides insurers with insight into an individual’s actual driving behaviors and patterns. Insurers can reward lower-risk drivers with discounts or rebates while providing education and real-time feedback to help improve the risk profile of higher-risk drivers. Geofencing and other location-based services can further enhance day-to-day customer engagement. In the event of an accident, that same sensor data can be used to initiate an automated FNOL (first notice of loss), initially assess vehicle damage, and digitally recreate and visualize events before, during and after the crash.
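As a hypothetical illustration of the pricing side of this, here is a small Python sketch that turns raw telematics events into a simple driving-risk score that could feed a discount decision. The event types, weights and threshold are invented for illustration only; a real usage-based insurance model would be far richer.

```python
# Hypothetical sketch: score driving behavior from telematics events.
# Event types, weights and the discount threshold are invented examples,
# not an actual insurer's rating model.

EVENT_WEIGHTS = {"hard_brake": 3.0, "rapid_accel": 2.0, "night_mile": 0.1}

def risk_score(events, miles):
    """Weighted risky events per 100 miles driven."""
    raw = sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)
    return round(100.0 * raw / miles, 2)

def qualifies_for_discount(score, threshold=5.0):
    """Reward lower-risk drivers; higher-risk drivers get feedback instead."""
    return score < threshold

trip_events = ["hard_brake", "night_mile", "night_mile", "rapid_accel"]
score = risk_score(trip_events, miles=250)
print(score, qualifies_for_discount(score))
```

The same event stream that produces the score could also drive the real-time feedback and FNOL scenarios described above, which is what makes opt-in telematics data so versatile.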

Using individual driver behavior to monitor and manage risk is just one way to leverage orthogonal data in insurance. Ultimately, new behavioral and lifestyle data sources have the potential to transform every aspect of the insurance value chain. Forward-looking insurers will tap into these emerging data sources to drive product innovation, deepen customer engagement, improve safety and well-being and even prevent insured losses. For those who invest in the platforms and tools needed to harness the value of orthogonal data, the advantages will be significant.

The Ultimate Data Analysis Cheat Sheet: Tool for App Developers


Analytic insights have proven to be a strong driver of growth in business today, but the technologies and platforms used to develop these insights can be very complex and often require new skillsets. One of the initial steps in developing analytic insights is loading relevant data into your analytics platform. Many enterprises stand up an analytics platform, but don’t realize what it’s going to take to ingest all that data.

Choosing the correct tool to ingest data can be challenging. Anteelo has significant experience in loading data into today's analytic platforms, and we can help you make the right choices. As part of our Analytics Platform Services, Anteelo offers a best-of-breed set of tools to run on top of your analytics platform, and we have integrated them to help you get analytic insights as quickly as possible.

To get an idea of what it takes to choose the right data ingestion tool, imagine this scenario: You just had a large Hadoop-based analytics platform turned over to your organization. Eight worker nodes, 64 CPUs, 2,048 GB of RAM, and 40TB of data storage all ready to energize your business with new analytic insights. But before you can begin developing your business-changing analytics, you need to load your data into your new platform.

Keep in mind, we are not talking about just a little data here. Typically, the larger and more detailed your set of data, the more accurate your analytics are. You will need to load transaction and master data such as products, inventory, clients, vendors, transactions, web logs, and an abundance of other data types. This will often come from many different types of data sources such as text files, relational databases, log files, web service APIs, and perhaps even event streams of near real-time data.

You have a few choices here. One is to purchase an ETL (Extract, Transform, Load) software package to help simplify loading your data. Many of the ETL packages popular in Hadoop circles will simplify ingesting data from various data sources. Of course, there are usually significant licensing costs associated with purchasing the software, but for many organizations, this is the right choice.


 

Another option is to use the common data ingestion utilities included with today’s Hadoop distributions to load your company’s data. Understanding the various tools and their use can be confusing, so here is a little cheat sheet of the more common ones:

  • Hadoop file system shell copy command – A standard part of Hadoop, it copies simple data files from a local directory into HDFS (Hadoop Distributed File System). It is sometimes used with a file upload utility to provide users the ability to upload data.
  • Sqoop – Transfers data from relational databases to Hadoop in an efficient manner via a JDBC (Java Database Connectivity) connection.
  • Kafka – A high-throughput, low-latency platform for handling real-time data feeds, ensuring no data loss. It is often used as a queueing agent.
  • Flume – A distributed application used to collect, aggregate, and load streaming data such as log files into Hadoop. Flume is sometimes used with Kafka to improve reliability.
  • Storm – A real-time streaming system which can process data as it ingests it, providing real-time analytics, ETL, and other processing of data. (Storm is not included in all Hadoop distributions).
  • Spark Streaming – To a certain extent, this is the new kid on the block. Like Storm, Spark Streaming is a processor for real-time streams of data. It supports Java, Python and Scala programming languages, and can read data from Kafka, Flume, and user-defined data sources.
  • Custom development – Hadoop also supports development of custom data ingestion programs which are often used when connecting to a web service or other programming API to retrieve data.

As you can see, there are many choices for loading your data. Very often the right choice is a combination of different tools and, in any case, there is a high learning curve in ingesting that data and getting it into your system.
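One light-hearted way to keep the cheat sheet handy is to codify it as a lookup. The Python sketch below encodes the rules of thumb above; the source-type labels are my own, and this is a starting point for discussion, not a real decision engine.

```python
# The cheat sheet above, codified as a rule-of-thumb lookup from the
# kind of data source to the ingestion tools usually reached for first.
# Source-type labels are illustrative, not canonical categories.

INGESTION_CHEAT_SHEET = {
    "local files": ["HDFS shell copy"],
    "relational database": ["Sqoop"],
    "log files": ["Flume"],
    "real-time event stream": ["Kafka", "Storm", "Spark Streaming"],
    "web service API": ["custom development"],
}

def suggest_tools(source_type):
    """Return candidate tools for a source type; anything unrecognized
    falls back to custom development, as the article suggests."""
    return INGESTION_CHEAT_SHEET.get(source_type, ["custom development"])

print(suggest_tools("relational database"))   # ['Sqoop']
print(suggest_tools("mainframe extract"))     # falls back to custom development
```

In practice, as noted above, the right answer is usually a combination of these tools rather than any single entry in the table.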

Reasons why insurers need AI to combat fraud ahead of time


The insurance industry consists of more than 7,000 companies that collect more than $1 trillion in premiums annually, providing fraudsters with huge opportunities to commit fraud using a growing number of schemes. Fraudsters are successful too often. According to FBI statistics, the total cost of non-health insurance fraud is estimated at more than $40 billion a year.

Fighting fraud is like aiming at a constantly moving target, since criminals constantly hone and change their strategies. As insurers offer customers additional ways to submit information, fraudsters find a way to exploit new channels, and detecting issues is increasingly challenging because threats and attacks are growing in sophistication. For example, organized crime has found a way to roboclaim insurers that set up electronic claims capabilities.

Advanced technologies such as artificial intelligence (AI) can help insurers keep one step ahead of perpetrators. IBM Watson, for instance, helps insurers fight fraud by learning from and adapting to changing business rules and emerging nefarious activities. Watson can learn on the fly, so insurers don’t have to program in changes to sufficiently protect against evolving fraud at all times.


Here are four compelling reasons insurers need to begin to address fraud with sophisticated AI systems and machine learning that can continuously monitor claims for fraud potential:

  1. The aging workforce. There are many claims folks who are aging out and will soon retire, taking years of knowledge with them. Seasoned adjusters often rely on their gut instinct to detect fraud, knowing which claims just don’t seem right, based on years of experience. However, incoming claims staff don’t have the experience to know when a claim seems suspicious. Insurers need to capture and convert that knowledge, getting it into a software or AI program so that the technology preserves the experience.
  2. Evolving fraud events and tactics. Even though claims people may have looked at fraud the same way for years, the environment surrounding claims is always changing, enabling new ways to commit fraud. Fraud detection tactics that may have worked 6 months ago might not be relevant today. For instance, several years ago when gas prices were through the roof, SUVs were reported stolen at an alarming rate. They weren’t really stolen however — they had just become too costly to operate. Now that gas prices have gone down, this fraud isn’t happening as often. If an insurer programs an expensive rule into the system, 6 months later economic factors may change and that problem may not be an issue anymore.
  3. Digital transformation. Insurers are all striving to go digital and electronic. As they make claims reporting easier, more people are reporting claims electronically, stressing the systems. At the same time, claims staffing levels remain constant, so the same number of workers now have to detect fraud in a much higher claims volume.
  4. Fighting fraud is not the claim handlers’ core job responsibility. The claim adjuster’s job is to adjudicate a claim, get it settled and make the customer happy. Finding fraud puts adjusters in an adversarial situation. Some are uncomfortable with looking for fraud because they don’t like conflict. A system that detects fraud enables adjusters to focus on their areas of expertise.

In the past, insurance organizations relied heavily on their experienced claims adjusters to identify potentially fraudulent claims. But since fraudsters are turning to technology to commit crimes against insurance companies, carriers need to turn to technology to help fight them. Humans will still be a critical component of any fraud detection strategy, however. Today, insurance organizations need a collaborative human-machine approach, since they can’t successfully fight fraud with just one tactic or one system. To fight fraud, humans need machines, and machines need human intervention.

Here’s how regulatory intelligence aids strategic decision-making in real time


Data is all around us. It’s created with everything we do. For the life sciences industry, this means data is being collected faster and at a greater rate than ever before. Data takes the form of structured content — from clinical trials, regulatory filings, manufacturing and marketing, drug interactions and real-world evidence — with regard to how drugs are used in healthcare settings. It also is found in unstructured content from the internet of things (IoT), such as social media forums, blogs and so on.

But having massive quantities of data is useless without the regulatory intelligence to make sense of it. Let’s define what we mean by regulatory intelligence. It means taking multiple data sources and feeding them into a regulatory system that can look at the data, analyze it, make use of it, collect information from it, and then distribute that information where it needs to go. That might be to regulatory agencies requesting updates or information about the drug portfolio to satisfy compliance mandates; it might be to partners you work with, such as trading partners; or it might be consumed internally.

Although referred to as regulatory intelligence, it encompasses many other areas of the product life cycle, including clinical research and development for detailed analysis and safety and pharmacovigilance for signal detection.

Life sciences companies can leverage these different types of data for real-time decision making to protect public safety, respond to supply shortages, protect the brand, advance the brand — for example, into new indications or new markets — and for many other purposes. In this blog, I’ll explore some of these uses of regulatory intelligence in greater depth.

Know your target

Since data is consumed across the life sciences in different ways by different people and different functions, getting to the point of intelligence first requires knowing the target and objective. If there is real-world data indicating adverse events that weren’t detected in clinical trials, having that intelligence early on allows companies to act accordingly — both to protect public safety and to safeguard brand reputation. What action the company takes will depend on what the data shows, as well as what the agencies require. For example, it might simply be to reinforce a message about avoiding other medications or foods while undergoing a specific treatment or it might require a broader response.

Another way data can be leveraged for real-time strategic decision making is to advance the brand. For example, IoT data or data held by the authorities might show weakness in a competitor’s product or weakness in the market — perhaps a gap in a region the company has begun targeting. By leveraging that intelligence, companies can take advantage of those gaps or competitor weaknesses and promote their brand as a better alternative or prepare a new market launch.

Regulatory intelligence might also shine light on other potential indications for your product. These insights might be gathered from IoT sources, such as physician blogs, or from positive side effects observed in clinical trials. The most famous example is Viagra, which initially was studied as a drug to lower blood pressure. As was the case here, not all side effects are negative, and during clinical studies an unexpected side effect led to the drug’s being studied and ultimately approved for erectile dysfunction. Having that regulatory intelligence available gives you the leverage to make the case for expanding clinical studies into new indications and extending therapeutic use.

From data to intelligence

Now that we have explored the definition of and some purposes for regulatory intelligence, we should also look at how you get from raw data to intelligence. An important first step is to deploy the right analytical tools to sift through that data and pull out relevant information. It's equally important to know how to make use of that data, which requires knowing your end goal and narrowing the scope of your data search to eliminate extraneous data.

Time and resources can also be saved by using automation to collect data for analysis. Since data is continuously being created, updated and published, robotic process automation makes it possible to keep up with the latest findings and pull relevant data into your regulatory operational environment.
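As a rough sketch of that idea, the keyword-scoping step might look like the following; the feed record fields and keywords here are hypothetical examples, not any specific agency's format:

```python
# Illustrative sketch: filter a stream of regulatory updates down to the
# narrowed scope before it enters the operational environment.
# Record fields and keywords are invented for illustration.

def filter_relevant(records, keywords):
    """Keep only updates whose title or summary mentions a scoped keyword."""
    keywords = [k.lower() for k in keywords]
    relevant = []
    for rec in records:
        text = (rec.get("title", "") + " " + rec.get("summary", "")).lower()
        if any(k in text for k in keywords):
            relevant.append(rec)
    return relevant

# In practice this filter would sit behind a scheduled job (the "robotic
# process") that polls agency feeds and pushes matches downstream.
updates = [
    {"title": "Label change for Drug X", "summary": "new adverse event data"},
    {"title": "Fee schedule update", "summary": "administrative notice"},
]
scoped = filter_relevant(updates, ["adverse event", "recall"])
```

The point of the sketch is the scoping: everything outside the defined keywords never reaches the analysts, which keeps the signal-to-noise ratio manageable.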

Regulatory intelligence is the key to real-time strategic decision making across all areas of research and development. Its importance to the organization can’t be overstated.

Digital health, not genomics! The future of precision medicine.

What does the term precision medicine mean to you? Typically, people think of precision medicine as being about genomics, but it goes well beyond molecular biology to encompass everything that moves us away from a one-size-fits-all approach to medicine. As far back as 1969, Enid Balint, formerly in charge of the training and research course for general practitioners at the Tavistock Clinic in London, published a paper on “The possibilities of patient-centered medicine,” describing a field that seeks to understand the patient as a unique human being.

The question, therefore, is: How do we do that? Certainly, genomics has been widely touted. But another area at the forefront of precision medicine is digital health technology, which Steven Steinhubl, MD, of Scripps Research Translational Institute, addressed in his presentation, “Precision Medicine and the Future of Clinical Practice.” Digital technology moves us in the direction of understanding each patient and away from the current practice of defining health in ways that make little sense to many people. In the rest of this blog, I expand on key elements of Steven's talk and add my own perspective on precision medicine.

So, what exactly is wrong with current practice in our healthcare system? For starters, the current system is built on a model in which, when you get sick or hurt, you see a doctor and you get fixed. There is little to no incentive for doctors to keep you healthy, because the system rewards them through “activity-based funding” rather than “outcome-based funding” or “value-based care”.

As for population-based benchmarks, they actually don’t work for you as an individual. Let’s take wellness recommendations, such as walking 10,000 steps a day or eating a certain amount of proteins and carbohydrates each day. We know that some people need more and some need fewer carbohydrates and that the 10,000-step benchmark is fairly meaningless at an individual level.

Time to stop the generic trials

As mentioned by Steven, precision medicine is, in fact, already here in several settings. The most prominent is optometry, where an eye exam determines your specific needs, and an optometrist prescribes a pair of glasses tailored entirely to your current condition. You can also pick a model of frame and material that fits your lifestyle (e.g., sports or work) and your taste in fashion. Without this specific focus, you would end up with a generic pair of glasses that might not suit your needs and lifestyle.

Medicine needs to do the same, moving away from generic clinical studies and towards trials that focus on individual responses to therapy. In his article “Personalized medicine: Time for one-person trials,” Nicholas J. Schork looks at the 10 most-prescribed drugs and notes that for every person they help, they fail to improve the condition of between three and 24 people. Some drugs, such as statins, benefit as few as one in 50 people, and some are even harmful to certain ethnic groups because clinical trials have typically focused on participants of European background.

Dosage is also seldom geared towards the individual. We know it's possible to do better: some companies already provide dose recommendations based on pharmacokinetic drug models, patient characteristics, medication concentrations, and genotype.

Generally, however, we don’t know who will benefit from a drug and who won’t. While genomics plays a key role, there are multiple other factors that have an impact on outcomes, including our environment (e.g., city vs. rural), having access to good produce or being limited to convenience store food (e.g., doughnuts vs. fruits and veggies), whether we live in a cold or hot climate, whether we live in an industrial area with pollution, and what our work and family environment is like. Taking all these factors and more into account is essential if we are to treat each person as unique.

With the growing realisation about these effects, more clinicians are turning to digital technology, deploying internet of things (IoT) sensors and smartphones to improve patient outcomes. A study of 2,000 Americans shows that the average person uses his or her smartphone 80 times per day, so why not leverage it as part of a care plan? The fact is that people are already using their phones for health, with one out of 20 Google searches being health-related.

Setting baselines with sensors

Many people already use sensors and apps to check their vitals: sleep patterns, heart rate, blood pressure, glucose, temperature and stress. These readings provide far more relevant information than standard measures of what is “normal,” because the context in which they are taken varies dramatically. For example, maybe it is normal for my stress and blood pressure levels to rise when I'm rock climbing, and perhaps a pregnant woman can expect her sleep pattern to change.

Expanding on Steven’s idea, wearable IoT devices are redefining the human phenotype (i.e., all of the observable physical properties of an individual) by performing unobtrusive and continuous monitoring of a wide range of characteristics unique to each of us. This will allow us to define our “normal” blood pressure when we are stressed. After all, do you really need to worry if your blood pressure rises when you’re stuck in traffic after a busy day at the office?

Sensor technology enables continuous monitoring, so you can create a baseline and compare your own readings. When something doesn’t feel right, you’ll be able to go back and compare it to a day when you did feel right the month before. This is a far better measure of your own health.

For example, a study into body temperature shows that although the textbook normal is around 37 degrees C, a person's normal temperature can vary from 33.2 degrees C up to 38.4 degrees C. This means that if your normal temperature is 33.2 degrees C and you are measuring 37 degrees C, you are running a fairly severe fever, but most doctors won't realise this because they don't know your normal temperature.

Another study, Fitbit's analysis of the resting heart rates of 100,000 people, shows that although the average daytime resting heart rate is around 79 beats per minute, an individual's normal rate can vary from 40 to 90. This makes a big difference when treating a patient for a heart condition. Clearly, you can't apply a population average to your own body. It matters because trends in your own heart rate could reveal early signs of illness, such as influenza.
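To make the baseline idea concrete, here is a minimal sketch in Python (the numbers are synthetic and the threshold illustrative, not clinical guidance) that derives a personal resting-heart-rate baseline and flags readings that deviate from it:

```python
import statistics

def personal_baseline(readings):
    """Return (mean, stdev) of a person's own historical readings."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_unusual(reading, mean, stdev, threshold=2.0):
    """Flag a reading more than `threshold` standard deviations from baseline."""
    return abs(reading - mean) > threshold * stdev

# A month of synthetic resting heart rates for one individual (bpm).
history = [52, 54, 53, 55, 51, 54, 53, 52, 55, 54, 53, 52, 54, 53, 55,
           52, 54, 53, 51, 55, 54, 53, 52, 54, 53, 55, 52, 54, 53, 54]
mean, stdev = personal_baseline(history)

# 70 bpm sits comfortably inside the population's "normal" range of 40-90,
# but for this person it is far outside their own baseline and worth a look.
flagged = is_unusual(70, mean, stdev)
```

This is exactly the kind of comparison a population average cannot give you: the same 70 bpm reading is unremarkable for one person and an early warning sign for another.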

The challenge for people who have wearables (like me, yeah, I own a Fitbit … how cool am I?), is that we’re not quite sure what to do with all that data.

Following this trend, the National Institutes of Health in the United States has created the All of Us Research Program, the largest precision medicine longitudinal study ever performed, which aims to follow 1 million people from all walks of life for decades. The program will provide a set of IoT wearable sensors to the participants and then correlate this data with their clinical data from the healthcare ecosystem — hospitals, family practitioners, specialists, etc.

This study differs from your typical research study because this program will provide insights on the data to its participants, so they can improve their health in real time.

Today, anyone has access to wearable technology; it’s relatively cheap and easy to use, and it gives you real-time insights into your own health. Don’t be afraid to build your own baseline and talk to your doctor. As more people and clinicians embrace wearables and apps, we’ll start to see a broader shift towards precision medicine supported by both genomics and digital health.

How to bring data democratization to your enterprise analytics platforms

It wasn’t that long ago that data was a necessary but costly business byproduct that many companies shelved on leftover and decommissioned hardware, and only because they were legally required to do so. That’s changed, of course. Data’s value has grown exponentially in just the last few years because we’ve found that when you combine, analyze and exploit it in the right ways, it can tell you some amazing things about your company and your customers.

A big step in that direction is the concept of “data democratization.” The idea is simple. When you make data available to anyone at any time to make decisions, without limits related to access or understanding, you’re able to realize the full value of the data you maintain. Where IT was once the gatekeeper of data, new tools and technologies help any user gain access. Insights from that data can be developed by anyone, not just a data engineer or data scientist.

Case in point: Many analytics platforms offer some level of universal access to information, but the ability to use it is inherently restricted to people who understand how to use complex analytics tools. However, self-service tools, like Zaloni's, are helping to democratize those analytics platforms. By combining drag-and-drop user interfaces with a powerful data catalog for finding data, these tools can help non-technical users identify relevant data sources and create new datasets tailored to a specific analytics task.
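The catalog-search step those tools provide can be illustrated with a toy example; the dataset names and tags below are invented, and real platforms expose this through a UI rather than code:

```python
# Toy data catalog: each entry describes a dataset with searchable tags,
# which is what lets a non-technical user find data without knowing
# where it physically lives.
catalog = [
    {"name": "sales_2023", "tags": ["sales", "revenue", "quarterly"]},
    {"name": "web_clicks", "tags": ["marketing", "clickstream"]},
    {"name": "churn_labels", "tags": ["customers", "churn", "labels"]},
]

def search_catalog(catalog, query):
    """Return the names of datasets tagged with the query term."""
    q = query.lower()
    return [entry["name"] for entry in catalog if q in entry["tags"]]

matches = search_catalog(catalog, "churn")
```

Once the user has found the right datasets this way, the drag-and-drop layer takes over to combine them into a new dataset for the analytics task at hand.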

 

Data democratization isn’t just a benefit for end users, it liberates data scientists as well. With users able to run their own queries, data scientists and engineers can spend more time identifying data sources, preparing them for ingestion, and cleaning and documenting them for use.

Implementing modern, self-service tools raises new questions about security and privacy, so it's important for companies to have governance in place to ensure data is carefully managed. Right-sized governance for self-service tools ensures data privacy and data quality, provides data lineage, and allows a company to enforce role-based access control to data. Zaloni's UI, for example, offers self-guided access so users can easily get answers to questions and reach pertinent data. Additionally, anyone who plans to use these tools still needs training, not only on how to use the tools but on how to ask questions and seek insights that are valuable to the company. In today's highly regulated world, data governance and role-based security have become a requirement, not just a nice-to-have.
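At its core, role-based access control is a mapping from roles to the data they may touch. A minimal sketch, with invented roles and dataset names:

```python
# Minimal role-based access control: each role maps to the set of
# datasets it is permitted to read. Real platforms layer this with
# authentication, auditing, and finer-grained (column/row) controls.
ROLE_PERMISSIONS = {
    "analyst": {"sales", "marketing"},
    "data_scientist": {"sales", "marketing", "customer_pii"},
    "viewer": set(),
}

def can_read(role, dataset):
    """Check whether a role is allowed to read a dataset.

    Unknown roles get no access by default (fail closed).
    """
    return dataset in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting is the fail-closed default: a role the system doesn't recognise gets nothing, which is the safe posture for regulated data.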

Many companies have been accumulating vast troves of data that contains a lot of unrealized value. Implementing tools that give everyone access to that data and help them explore new ideas and connections is likely to result in some surprising and valuable discoveries.

Dynamic Healthcare System: Blurring barriers between payer and provider

Recent headlines have been full of news about major healthcare mergers and acquisitions, often involving newcomers to the industry, but also creating a convergence of traditional payer, provider and pharmaceutical benefit management companies.

Here are some of the latest examples in the changing healthcare scene:

CVS Health, a large pharmaceutical benefit manager, is purchasing Aetna, a large insurer, while Cigna, another large insurer, is acquiring Express Scripts, another pharmaceutical benefit manager.

Meanwhile, tech giants Amazon and Apple took some giant steps into the healthcare fray. Amazon entered into a joint venture with Berkshire Hathaway and J.P. Morgan Chase in an effort by all three to control employer costs, and Amazon also purchased PillPack, an online pharmacy company, and expects to expand services after obtaining state licenses. Apple showed its commitment to shake up the healthcare status quo by expanding its personal health record system, partnerships with hospitals and A.C. Wellness centers – all with a goal of gaining greater influence on healthcare consumption.

The convergence moves the industry away from the traditional separation of payers (health insurance companies and self-insured employers) and providers. Typically, payers are defined as the organizations that conduct actuarial analysis and manage financial risk by collecting premiums and managing payments for services delivered. Providers, meanwhile, have typically been defined as healthcare practitioners and organizations that deliver and bill for services, including inpatient, outpatient, elective and emergent care.

Those narrow definitions have been shaken up in the post-Affordable Care Act (ACA) world. In the past, the focus was on fee-for-service and capitated contracts, under which HMOs or managed care organizations paid providers a fixed amount per member. But the ACA moved the emphasis to value-based care, pushing more financial risk onto providers and away from payers. That means insurers and providers also need to consider how they manage pre-existing conditions and use risk scoring to determine the likely needs of their patients, as their approach can make the difference between profitable success and unprofitable failure.
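Risk scoring, at its simplest, weights patient attributes into a single signal of likely care needs. The sketch below is deliberately naive (the factors and weights are invented, not an actuarial model), but it shows the shape of the idea:

```python
# Toy risk score: weight a few patient risk factors into a single number
# that a payer or provider could use to anticipate likely care needs.
# Factors and weights are illustrative only.
RISK_WEIGHTS = {
    "age_over_65": 2.0,
    "diabetes": 3.0,
    "hypertension": 1.5,
    "prior_admission": 2.5,
}

def risk_score(patient):
    """Sum the weights of the risk factors present for this patient."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if patient.get(factor))

low = risk_score({"hypertension": True})
high = risk_score({"age_over_65": True, "diabetes": True, "prior_admission": True})
```

Production risk models are fitted statistically against claims and clinical history rather than hand-weighted, but the output serves the same purpose: ranking patients by expected need so resources and contracts can be priced accordingly.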

In this new and complex environment, mergers and acquisitions are seen as a way for both providers and payers to build up their capabilities and respond to the need to enhance patient care, improve population health and reduce costs.

For traditional healthcare incumbents, we believe this also means using a “secret” weapon non-traditional players already leverage: data analytics.

Better data and analytics life cycle management can yield the insights payers and providers need to balance their priorities and deliver value-based care.

How to balance risk and patient outcomes

But first, what do all of these changes entail, and how do they take providers and payers beyond their narrower definitions?

In the post-ACA world, providers are looking to take more financial risk as their actuarial capabilities improve. This would allow them to negotiate more effectively with payers to achieve care outcomes objectives while balancing reimbursement and risk.

Payers, meanwhile, are acquiring doctors' offices and other providers, or combining with retail clinics and other points of care, to pair care delivery with financial risk management. To accomplish these goals, payers need to take a more active role in managing the healthcare professionals they employ as well as the patients who visit those practitioners. Having access to the care delivery setting also allows for greater accuracy in assessing risk and measuring outcomes.

Managing these activities – by both the provider and the payer – needs to go beyond just financial management. It needs to include operational excellence, using robust data analytics to communicate with people and organizations delivering care. It also requires having performance-level agreements and bidirectional communication in place to measure and monitor reasonable objectives set by both payer and provider. Indeed, collaboration and communication will be crucial to overcome tensions that are building as providers try to deliver on value-based contracts. Finding a way to integrate insights from the back-end will help to ensure both the payer and provider perspectives are understood.

Use data to your advantage

A balance between the needs of the provider and the payer – while prioritizing the needs of the patient – will require change management and deeper insights on what works, what doesn’t and how outcomes for all stakeholders can be adjusted and improved. Those insights must be based on hard data, which will require more robust data, analytics and IT infrastructure. Organizations will need to deploy data and analytics life cycle management – including input, ingestion, management, storage and data utility. Integrated workflows make it easy to collect better, well-rounded encounter data, improving how providers work and increasing provider and patient satisfaction.

That data needs to encompass all parts of the healthcare continuum, meaning patient experience as well as provider and payer data. For this to happen, payers and providers must ensure better consumer engagement by spurring patients to take charge of their own care and using the data provided by patients to improve insights. Being able to see the end-to-end experience of the patient can affect the pieces accordingly.

Brave new healthcare environment

This brings us full circle to the changing industry dynamics and the entry of non-traditional players into the healthcare arena, since the big tech players such as Amazon, Apple and Alphabet know how to leverage data analytics to gain customer insights. As healthcare incumbents build and acquire assets, they will need to match these capabilities and build on their own strengths to ensure they aren’t left behind in this brave new healthcare environment.

The Internet of Things aiding Healthcare

There’s so much talk across healthcare about electronic medical records (EMRs). For many, they seem to be the answer to every question, the solution to all of healthcare’s problems. At a recent Health Information Technology WA (Western Australia) conference in Perth, for example, three plenary speakers on the main stage were touting their benefits. Unfortunately, the reality is quite different.

Looking at global trends and the shift to value-based care, I believe there’s ample reason to question whether EMRs are actually the right approach, especially when the objective has changed from a hospital-centric approach to a patient-focused model that extends beyond the walls of the hospital. There’s also every reason to question whether they are a sound investment. For example, since 2011, the United States has spent $38.4 billion implementing 30-year-old EMR technology in hospitals, according to a 2018 Centers for Medicare & Medicaid Services report. Yet despite successfully computerising health practices, data is still largely locked into hospital systems, and sharing data across health systems remains difficult.

With the healthcare model shifting towards prevention and personalized care, providers and payers are rethinking their approach, and instead are turning to technologies such as the internet of things (IoT) to engage patients, improve outcomes and bring down the cost of care.

From patient to customer

One healthcare organization that took a truly innovative approach to a customer-centric healthcare model is an academic health centre based in the United States. Renowned for its population health studies, the centre’s former chief executive officer wanted to engage patients as consumers, based on a simple objective — to keep those with chronic diseases out of hospital.

The project began with the creation of an innovation group, headed by a chief experience officer overseeing a multi-disciplinary team drawn from customer-centric industries such as hospitality, publishing, entertainment and automobiles. Most notably, there were no technologists from EMR/EHR (electronic health record) vendors in this group. To this progressive team, the centre added clinicians, who were given access to over 30 million patient records dating back 30 years to analyse the social determinants affecting chronic illnesses such as hypertension, diabetes, chronic obstructive pulmonary disease (COPD) and heart disease.

Based on a set of algorithms, the team was able to identify three social determinants that have the greatest impact on chronic disease:

  1. Access to transportation – Can you get to and from your job and school easily?
  2. Access to good food – Do you have access to quality produce or is the only store accessible from your house a 7/11 selling “convenience” food?
  3. Access to education – Is there a good school in your area with good teachers?

But how do you get good information from patients/consumers on these issues, given that surveys typically have low participation, with only 30 to 40 per cent of people taking part?

Mobile apps and IoT devices are part of the solution. Unfortunately, most apps focus on a single condition or health issue rather than the factors that influence a patient’s overall health: socio-economic determinants, environment, health behaviours, and the quality of healthcare received.

Three months later, the innovation group released a mobile app as a proof of concept.

As part of the programme, patients were given a kit that included a Microsoft wristband, a Bluetooth blood pressure cuff, an inhaler and a weight scale, all connected to the app. In addition to health monitoring data, the app also captured lifestyle data, such as whether the patient smokes, exercises, etc.

Scaling outcomes

The pilot was a huge success, but the next step was to scale it to 4,000 patients, which was going to be another significant challenge, considering that the nurse-to-patient ratio is about one nurse for 20 to 40 patients. So, the centre started looking at customer relationship management (CRM) solutions.

Once the digital platform was in place, the innovation group had to design a new operating model to support these 4,000 patients. After testing a few configurations, the team landed on a “pod” model consisting of one nurse and two health navigators, non-clinical support staff focused on customer relationship management. Because the system works by exception, the care coordinators are notified by the platform only when an interaction with the patient is required. The rest is automated: the platform sends reminders and analyses patterns using IoT monitoring devices and advanced predictive analytic models.
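The work-by-exception rule can be sketched in a few lines; the thresholds are illustrative, not clinical guidance:

```python
def triage(reading, low, high):
    """Return the action for a monitored reading under work-by-exception.

    Readings inside the patient's expected range stay fully automated;
    only out-of-range readings create work for a human care coordinator.
    """
    if low <= reading <= high:
        return "automated"          # platform handles reminders and logging
    return "notify_coordinator"     # escalate to the nurse/navigator pod

# Example: a patient whose expected systolic blood pressure range is 100-140.
routine = triage(125, 100, 140)
exception = triage(165, 100, 140)
```

This inversion, where the default is automation and humans handle only the exceptions, is what lets a pod of three people support a patient load that would otherwise require dozens of nurses.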

Success with such a large group of people requires engaging with patients where they are and in a way they can relate to. Thanks to the data gathered, the centre knew a lot about these consumers. For example, they knew that most preferred to be contacted by text message and most were fans of the show Game of Thrones. With this knowledge, on the evening of the season finale, they reached out to hypertension patients with a simple message: “Tonight is the big night for Game of Thrones, and we know you might get excited, so don’t forget to take your blood pressure before the show, and take your meds if required. Have a good night and enjoy the show!” As trivial as this seems, it is details like this that engage people and empower them to make lifestyle changes.

After 12 months, the new platform and engagement model had given the centre huge insights, including enabling providers to predict future chronic disease patients with a high level of accuracy, and it delivered significant outcomes. Here are a few numbers that I find very compelling: the centre achieved a 95 per cent customer satisfaction rate, a 23 per cent reduction in emergency services costs, and a 36 per cent reduction in the total cost of care.

Increasingly, no matter the healthcare model, the objective must be to improve health outcomes and keep patients out of hospital as much as possible, not only because it’s better for the patient but also to improve financial outcomes and allow health centers and hospitals to focus on truly innovative, cutting-edge care delivery. That’s not something that can be achieved with an EMR.
