IT – The remote worker’s toolkit


Enterprise clients have looked to automate IT support for several years. With millions of employees across the globe now working from home, support needs have increased dramatically, with many unprepared enterprises suffering from long service desk wait times and unhappy employees. Many companies may already have been on a gradual path toward exploiting digital solutions and enhancing service desk operations, but automating IT support is now a greater priority. Companies can’t afford downtime or the lost productivity caused by inefficient support systems, especially when remote workers need more support now than ever before. Digital technologies offer companies innovative and cost-effective ways to manage increased support loads in the immediate term, and free up valuable time and resources over the long term. The latter benefit is critical, as enterprises increasingly look to their support systems to resolve more sophisticated and complex issues. Rather than derailing workers, new automated support systems can empower them by freeing them to focus on higher-value work.

Businesses can start their journey toward digital support by using chatbots to manage common support tasks such as resetting passwords, answering ‘how to’ questions, and processing new laptop requests. Once basic support functions are under digital management, companies can then layer in technologies such as machine learning, artificial intelligence and analytics.
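To make that first step concrete, here is a minimal sketch, in Python, of how a support chatbot might route common requests to automated handlers. The intents, keywords and handler actions are illustrative placeholders, not a reference to any particular chatbot product.

```python
# Minimal sketch of routing common IT support requests to automated handlers.
# Keywords and handler actions are illustrative, not a real product API.

def reset_password(user: str) -> str:
    return f"A password reset link has been sent to {user}."

def request_laptop(user: str) -> str:
    return f"A new laptop request has been opened for {user}."

HANDLERS = {
    "password": reset_password,
    "laptop": request_laptop,
}

def route(message: str, user: str) -> str:
    """Match a support message to an automated handler by keyword."""
    text = message.lower()
    for keyword, handler in HANDLERS.items():
        if keyword in text:
            return handler(user)
    return "Escalating to a human service desk agent."

if __name__ == "__main__":
    print(route("I forgot my password", "alice"))   # handled automatically
    print(route("My VPN certificate expired", "bob"))  # escalated to a human
```

In practice, the keyword match would be replaced by natural-language intent detection, but the overall shape (detect intent, run an automated action, escalate when unsure) stays the same.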

An IT support automation ecosystem built on these capabilities can enable even greater positive outcomes – like intelligently (and invisibly) discovering and resolving issues before they have an opportunity to disrupt employees. In one recent example, DXC deployed digital support agents to help manage a spike of questions coming in from remote workers. The digital agents seamlessly handled a 20% spike in volume, eliminated wait times, and drove positive employee experiences.

Innovative IT support


IT support automation helps companies become more proactive in serving their employees better with more innovative support experiences. Here are three examples:

Remote access

In a remote workforce, employees will undoubtedly face issues with new tools they need to use or with connections to the corporate network. An automated system that notifies employees via email or text about detected problems, along with personalized instructions on how to fix them, is a new way to care for remote workers. If an employee still has trouble, an on-demand virtual chat or voice assistant can easily walk them through the fix or, better yet, execute it for them.

Proactive response

The ability to proactively monitor and resolve issues on the employee’s endpoint (ensuring security compliance, effective collaboration, and high performance for key applications and networking) has emerged as a significant driver of success in managing the remote workplace.

 For example, with more reliance on home internet as the path into private work networks, there’s greater opportunity for bad actors to attack. A proactive support system can continuously monitor for threat events and automatically ensure all employee endpoints are security compliant.

Leveraging proactive analytics capabilities, IT support can set up monitoring parameters to match their enterprise needs, identify when events are triggered, and take action to resolve. This digital support system could then execute automated fixes or send friendly messages to the employee with instructions on how to fix an issue. These things can go a long way toward eliminating support disruptions and leave the employee with a sense of being cared for – the best kind of support.
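As a rough illustration of that pattern, the sketch below shows a monitoring parameter, an event trigger and the two possible responses: an automated fix or a friendly message. The endpoint fields, thresholds and the notify/remediate actions are hypothetical, not part of any specific support product.

```python
# Hedged sketch of a proactive endpoint check: monitoring parameters, an event
# trigger, and either an automated fix or a friendly notification.
from dataclasses import dataclass

@dataclass
class EndpointStatus:
    hostname: str
    disk_free_pct: float
    av_signatures_age_days: int

# Monitoring parameters tuned to enterprise needs (illustrative thresholds)
MIN_DISK_FREE_PCT = 10.0
MAX_AV_AGE_DAYS = 7

def remediate(host: str, issue: str) -> None:
    print(f"[auto-fix] {host}: {issue}")

def notify(host: str, message: str) -> None:
    print(f"[message to employee on {host}] {message}")

def evaluate(status: EndpointStatus) -> None:
    # Security issues are fixed silently; usability issues trigger guidance
    if status.av_signatures_age_days > MAX_AV_AGE_DAYS:
        remediate(status.hostname, "pushing latest antivirus signatures")
    if status.disk_free_pct < MIN_DISK_FREE_PCT:
        notify(status.hostname,
               "Your disk is nearly full. Here is how to free up space: ...")

evaluate(EndpointStatus("laptop-042", disk_free_pct=6.5, av_signatures_age_days=12))
```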

More value beyond IT

Companies are also having employees leverage automated assistance outside of IT support functions. These capabilities could be leveraged in HR, for example, to help employees correctly and promptly fill out time sheets or remind them to select a beneficiary for corporate benefits after a major life event like getting married or having a baby.

Remote support can also help organizations automate business tasks. This could include checking on sales performance, getting recent market research reports sent to any device or booking meetings through a voice-controlled device at home.

More engaged employees

With the power to provide amazing experiences, automated IT support can drive new levels of employee productivity and engagement, which are outcomes any enterprise should embrace.

Data Centric Architecture


The value proposition of global systems integrators (GSIs) has changed remarkably in the last 10 years. By 2010, the so-called “your mess for less” (YMFL) business model was in its waning days. GSIs would essentially purchase and run a company’s IT shop and deliver value through right-shoring (moving labor to low-cost locations), leveraging supply chain economies of scale and, to a lesser degree, automation.

This model had been delivering value to the industry since the ‘90s but was nearing its asymptotic conclusion. To continue achieving the cost savings and value improvements that customers were demanding, GSIs had to add to their repertoire. They had to define, understand, engage and deliver in the digital transformation business. Today, I am focusing on the value GSIs offer by concentrating on their client’s data, rather than being fixated on the boxes or cloud where data resides.

In the YMFL business, the GSIs could zero in on the cheapest performance-compliant disk or cloud to house sets of applications, logs, analytics and backup data. The data sets were created and used by and for their corresponding purpose. Often, they were only tenuously managed by sophisticated middleware and applications serving other purposes, like decision support or analytics.

Getting a centralized view of the customer was difficult, if not impossible: first, because the relevant data was stovepiped in an application-centric architecture, and second, because separate data islands were created for analytics repositories.

Enter the “Data Centric Architecture.” Transformation to a data-centric view is a new opportunity for GSIs to remain relevant and add value to customers’ infrastructures. It goes a layer deeper than moving to cloud or migrating to the latest, faster, smaller boxes.

A great way to help jump start this transformation is by rolling out Data as a Service offerings. Rather than taking the more traditional Storage as a Service or Backup as a Service approach, Data as a Service anticipates and provides the underlying architecture to support a data-centric strategy.

It is first and foremost a repository for collected and aggregated data that is independent of application sources. From this repository, you can draw correlations, statistics, visualizations and advanced analytical insights that are impossible when dealing with islands of data managed independently.

It is more than the repository of the algorithmically derived data lake. A Data as a Service approach provides cost effective accessibility, performance, security and resilience – aimed at addressing the largest source of both complexity and cost in the landscape.

Data as a Service helps achieve these goals by minimizing and simplifying the data and its movement within and outside of the enterprise and cloud environments. This is achieved around four primary use cases, ranging from enterprise storage to backup and long-term retention.

Each of the cases illustrates the underlying capabilities necessary to cost-effectively support the move to a data-centric architecture. Combined with a “never migrate or refresh again” evergreen approach, GSIs can focus on maximizing value in the stack of offerings. This approach is revolutionary. In the past, the focus was merely on refreshing aging boxes, the specifications of a particular cloud service, or the infrastructure supporting a particular application. Today, GSIs can focus on the treasured asset in their customers’ IT: their data.

Approaches to Developing a Digital Strategy


Products built by aerospace and defense companies are highly engineered and sophisticated, which means they’re often complex. That’s not a bad thing. But they’re also complex in ways that are undesirable. Products and their constituent parts are tracked in dozens of systems, from design to manufacturing to maintenance, which can result in an average of 26 different reference numbers for each part.

The drive to digital transformation is helping A&D companies recognize that situations like this, which arise from a lack of governance and the absence of an enterprise-wide data strategy, have created substantial costs and risks that have to be addressed to realize the full benefits of digital transformation.

That’s especially true for companies that want to establish a “digital thread” for products and parts throughout their systems. The ability to follow any part throughout the A&D value chain (design, manufacture, service) by following a single digital ID will help A&D companies realize tremendous cost savings. A digital thread also provides a single pane of glass for status, reduces rework and errors, improves security, and helps manage compliance and regulatory issues with greater efficiency.

That sounds great. But the key question for many companies remains: How do you get started with an endeavor like that? Many organizations fail to prioritize defining a data strategy on the grounds that it’s either a case of “boiling the ocean” or else an “infinity project” that will deliver little value.

A few key points can help your company move toward a data strategy that allows you to pursue the rest of your digital transformation agenda.

 

1. Make an affirmative decision to manage your data.

All companies make decisions about how they engage with, operate on and leverage their data — whether at an enterprise or project level. Even if a company has no formal data management policy, that in itself is a decision, albeit one that leads to the situation many companies find themselves in today. On the other hand, companies that form a holistic point of view in adopting an enterprise-grade data strategy are well positioned to optimize their technology investments and lower their costs.

2. Establish executive sponsorship and governance.

Sustaining a successful data strategy requires alignment with corporate objectives and enforced adherence. As corporate objectives evolve, so should the data strategy — keeping up not only with how the business is operating, but also with how supporting technologies and related innovations are maturing. This means including representatives from all the domains that are involved. It also means assigning someone with the authority to resolve conflicts between groups. This is a key element to helping federate data across silos and moving to a data hub approach, thus eliminating the need to maintain 26 different part numbers for a single item.

Sustaining a data strategy also means making a specific investment in personnel. Companies that embrace the constructs of a data strategy often define dedicated roles to own these strategies and policies. This ranges from augmenting executive and IT staff with roles such as chief data officer and chief data strategist, respectively, to expanding the responsibilities of traditional enterprise data architects.


3. Get started by instituting good data management practices in smaller programs.

Success demonstrates the value that data management can deliver at a small scale and what it could potentially deliver at the enterprise level. Applying an Agile methodology, which continually demonstrates short bursts of success, will help gain momentum (like a snowball rolling down a hill) and organizational acceptance.

 

As with any business or technical process, a data strategy has its own lifecycle of continual evolution, maturity, change and scale. But the benefits it makes possible—for example, the ability to construct a digital thread for products and parts—will far outweigh the investment that’s required.

For a thorough view of the process that’s involved in setting digital strategy, read the whitepaper Defining a data strategy by my colleagues, Aleksey Gurevich and Srijani Dey. It offers a concise view of the components of a winning data strategy as well as the steps needed to implement, maintain and evolve it.

From hysteria to reality: Risk-Based Transformation (RBT)


The digital movement is real. Consumers now possess more content at their fingertips than ever before, and it has impacted how we do business. Companies like Airbnb, Uber and Waze have disrupted typical business models, forcing established players in different industries to find ways to stay relevant in the ever-emerging digital age. This post is not about that. Well, not in the strictest sense.  There are countless articles explaining the value of being digital. On the other hand, there are very few articles about how to get there. Let’s explore how to get there together, through an approach that I have named Risk-Based Transformation. RBT’s strength is that it puts technology, application, information and business into one equation.

An approach that fits your specific needs

I’m relocating very soon, and with that comes the joys of a cross-country journey. Being the planning type, I started plotting my journey. I didn’t really know how to start, so I went to various websites to calculate drive times. I even found one that would give you a suggested route based on a number of inputs. These were great tools but they were not able to account for some of my real struggles, like how far is too far to drive with a 5- and 3-year-old.

Where are the best rest stops where we can “burn” energy — ones that have a playground or a place to run? (After being cooped up in a car for hours, getting exercise is important!) How about family- and pet-friendly places to visit along the way to break up the trip? What about the zig-zag visits we need to make to see family?

The list goes on. So while I was able to use these tools to create a route, it wasn’t one that really addressed any of the questions that were on my mind. Organizations of all sizes and across all industries are on this digital journey but often the map to get there is too broad, too generic, and doesn’t provide a clear path based on your unique needs.

A different approach is needed, one in which you can benefit from the experience of others, whilst taking the uniqueness of your business into account. Like planning a trip, it’s good to use outside views in particular to give that wider industry view; however, that’s only a piece of the puzzle. Each business has its own culture, struggles and goals that bring a unique perspective.

RBT framework

To help with this process, I have created a framework for RBT. At a high level, RBT takes into account your current technology (infrastructure), application footprint, value of the information, and risk to the business, weighted from left to right, least to highest. This framework gives a sense of where to start and where the smart spend is. See the flow below:

[Figure: Risk-Based Transformation framework flow]

Following this flow from left to right, you can add or remove evaluation factors based on your needs. Each chevron has a particular view, in a vacuum if you will, so the technology is rated based only on itself. It gains context as you move through each chevron. This produces a final score: the higher the score, the higher the risk to the business.

Depending on your circumstances, you can approach it David Letterman style and take your top 10 list of transformation candidates and run it through the next logic flow (watch for a future blog on how to determine treatment methodology). Or, as we did with a client recently, you can start with your top 50 applications. The point is to get to a place that enables you to start making informed next steps that meet your needs and budget to get the most “bang” for your investment.

The idea behind this framework is to use data in the right context to present an informed view. For example, you can build your questionnaires on SharePoint or Slack or another collaboration platform that also allows the creation of dashboard views. You can build dashboards in Excel, Access, MySQL or whatever technology you’re comfortable with in order to build an informed data-driven view, evaluating risk against transformation objectives. The key is that you need to assign values to questions in order to calculate consistent measurements across the board.
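As a minimal sketch of that scoring idea, the Python snippet below maps questionnaire answers to numeric values, weights each evaluation area from technology (lowest) to business (highest), and sums them into a risk score. The questions, weights and answer values are hypothetical examples, not the framework’s actual scales.

```python
# Minimal sketch of weighted questionnaire scoring for the RBT framework.
# Higher total score = higher risk to the business. All values are illustrative.

ANSWER_VALUES = {"yes": 0, "partial": 2, "no": 4, "unknown": 5}

# Weights increase from technology to business, per the left-to-right flow
WEIGHTS = {"technology": 1, "application": 2, "information": 3, "business": 4}

def risk_score(answers: dict) -> int:
    """answers: {area: {question: answer}} -> weighted risk score."""
    total = 0
    for area, questions in answers.items():
        for answer in questions.values():
            total += WEIGHTS[area] * ANSWER_VALUES[answer.lower()]
    return total

example = {
    "technology": {"Is the hardware under vendor support?": "no"},
    "application": {"Is the application version current?": "partial"},
    "information": {"Is the data backed up and tested?": "yes"},
    "business": {"Is there a documented recovery plan?": "unknown"},
}

print(risk_score(example))  # higher score -> higher transformation priority
```

Whether you build this in Excel, SharePoint or a small script like the one above matters less than applying the same values and weights consistently across every system you assess.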

Service management example

Let’s take service management as an example. Up front you would need to determine what “good” looks like, and then based on that, have questions like these below answered:

[Figure: sample service management assessment questions]

These questions could be answered by IT support, application support, business owners, the life cycle management group, or other relevant groups. When we ran through the first iterations of this framework, we had our client fill it out first. Then we filled it out based on our data points. Our data points looked different because it was an outsourcing client whose IT we owned. We had the in-a-vacuum view of both what we had inherited (the equipment from the existing estate that had been transferred to us) and what we had newly built.

We also had access to the systems that the client did not have, as they no longer had root access to these systems. The client’s context included future plans for life cycle, as they owned life cycle management. With those combined views, we had a broader sense of the environment. This methodology could be used with business units, allowing them to give their view of these systems, which gave IT an even more rounded view because it enabled us to see how the client (business) saw their environment versus how we (IT support) saw it.

That data was then normalized to give a joint view for senior leadership. The idea is that this became a jointly owned view, backed up with data, of the way forward that IT leadership could confidently stand behind. The interesting part is that although the server estate was 5 to 10 years old, we realized that upgrading the infrastructure was not the smartest place to start. In fact, the actual hardware was determined to be the lowest risk. The highest risk was storage, which was quite a surprise to all.


A living framework

Many years ago, when you plotted your cross-country drive on a map, it was based on information from a fixed point in time: the route was the best one available when you drew the line. Now, personal navigation devices hooked into real-time data change that course based on current conditions. In the same way, the RBT model is a living framework; it should have regular iterations in order to make course corrections as you go forward.

The intent with this framework and thinking is to build a context that makes sense for your needs, and then present data in context that allows for better planning. That better planning should lead to a more efficient digital journey as we all continue to stay with, or ahead of, the curve.

If you have enjoyed this, look forward to my next post. There I will detail how the RBT framework is applied and the treatment buckets methodology.

Insurers’ appreciation for orthogonal data


It is anticipated that within the next three years, on average every human being on the planet will create about 1.7 megabytes of new information every second. This includes 40,000 Google searches every second, 31 million Facebook messages every minute, and over 400,000 hours of new YouTube videos every day.

At first glance, the importance of this data may not be obvious. But for the insurance industry, tapping into this and other kinds of orthogonal (statistically independent) data is key to finding new ways to create value.

A clearer picture of individual risk

By paying closer attention to the data people create as part of their everyday lives, insurance companies can better anticipate needs, personalize offers, tailor customer experience and streamline claims. Using a wider variety of information is especially useful in better understanding and managing individual risks. For instance, behavior data from sensors, shared through an opt-in customer engagement program, provides insurers with the insight needed to initially assess and price the risk, and mitigate or even prevent subsequent losses.

Take, for example, the use of telematics data from sensors embedded in cars and smartphones. When shared, the raw telemetry data provides insurers with insight into an individual’s actual driving behaviors and patterns. Insurers can reward lower-risk drivers with discounts or rebates while providing education and real-time feedback to help improve the risk profile of higher-risk drivers. Geofencing and other location-based services can further enhance day-to-day customer engagement. In the event of an accident, that same sensor data can be used to initiate an automated FNOL (first notice of loss), initially assess vehicle damage, and digitally recreate and visualize events before, during and after the crash.
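As a simplified illustration of how raw telemetry can become a pricing signal, the sketch below counts harsh-braking events in a trip and maps the rate to a discount tier. The threshold, sample values and tiers are assumptions for illustration, not any insurer’s actual model.

```python
# Illustrative sketch: deriving a simple driving-behaviour indicator from raw
# telemetry by counting harsh-braking events, then mapping it to a discount tier.

HARSH_BRAKE_MS2 = -3.0  # longitudinal deceleration threshold, m/s^2 (assumed)

def harsh_brake_events(accel_samples):
    """Count samples where deceleration exceeds the harsh-braking threshold."""
    return sum(1 for a in accel_samples if a <= HARSH_BRAKE_MS2)

def discount_tier(events_per_100km: float) -> str:
    if events_per_100km < 1:
        return "full discount"
    if events_per_100km < 5:
        return "partial discount"
    return "coaching feedback, no discount"

samples = [-0.5, -1.2, -3.4, -0.8, -4.1, -0.3]   # one trip's deceleration samples
trip_km = 42.0
events = harsh_brake_events(samples)
print(discount_tier(events / trip_km * 100))
```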

Using individual driver behavior to monitor and manage risk is just one way to leverage orthogonal data in insurance. Ultimately, new behavioral and lifestyle data sources have the potential to transform every aspect of the insurance value chain. Forward-looking insurers will tap into these emerging data sources to drive product innovation, deepen customer engagement, improve safety and well-being and even prevent insured losses. For those who invest in the platforms and tools needed to harness the value of orthogonal data, the advantages will be significant.

The Ultimate Data Analysis Cheat Sheet: Tool for App Developers


Analytic insights have proven to be a strong driver of growth in business today, but the technologies and platforms used to develop these insights can be very complex and often require new skillsets. One of the initial steps in developing analytic insights is loading relevant data into your analytics platform. Many enterprises stand up an analytics platform, but don’t realize what it’s going to take to ingest all that data.

Choosing the correct tool to ingest data can be challenging. Anteelo has significant experience in loading data into today’s analytic platforms, and we can help you make the right choices. As part of our Analytics Platform Services, Anteelo offers a best-of-breed set of tools to run on top of your analytics platform, and we have integrated them to help you get analytic insights as quickly as possible.

To get an idea of what it takes to choose the right data ingestion tool, imagine this scenario: You just had a large Hadoop-based analytics platform turned over to your organization. Eight worker nodes, 64 CPUs, 2,048 GB of RAM, and 40TB of data storage all ready to energize your business with new analytic insights. But before you can begin developing your business-changing analytics, you need to load your data into your new platform.

Keep in mind, we are not talking about just a little data here. Typically, the larger and more detailed your set of data, the more accurate your analytics are. You will need to load transaction and master data such as products, inventory, clients, vendors, transactions, web logs, and an abundance of other data types. This will often come from many different types of data sources such as text files, relational databases, log files, web service APIs, and perhaps even event streams of near real-time data.

You have a few choices here. One is to purchase an ETL (Extract, Transform, Load) software package to help simplify loading your data. Many of the ETL packages popular in Hadoop circles will simplify ingesting data from various data sources. Of course, there are usually significant licensing costs associated with purchasing the software, but for many organizations, this is the right choice.


Another option is to use the common data ingestion utilities included with today’s Hadoop distributions to load your company’s data. Understanding the various tools and their use can be confusing, so here is a little cheat sheet of the more common ones:

  • Hadoop file system shell copy command – A standard part of Hadoop, it copies simple data files from a local directory into HDFS (Hadoop Distributed File System). It is sometimes used with a file upload utility to provide users the ability to upload data.
  • Sqoop – Transfers data from relational databases to Hadoop in an efficient manner via a JDBC (Java Database Connectivity) connection.
  • Kafka – A high-throughput, low-latency platform for handling real-time data feeds, ensuring no data loss. It is often used as a queueing agent.
  • Flume – A distributed application used to collect, aggregate, and load streaming data such as log files into Hadoop. Flume is sometimes used with Kafka to improve reliability.
  • Storm – A real-time streaming system which can process data as it ingests it, providing real-time analytics, ETL, and other processing of data. (Storm is not included in all Hadoop distributions).
  • Spark Streaming – To a certain extent, this is the new kid on the block. Like Storm, Spark Streaming is a processor for real-time streams of data. It supports Java, Python and Scala programming languages, and can read data from Kafka, Flume, and user-defined data sources.
  • Custom development – Hadoop also supports development of custom data ingestion programs which are often used when connecting to a web service or other programming API to retrieve data.

As you can see, there are many choices for loading your data. Very often the right choice is a combination of different tools and, in any case, there is a high learning curve in ingesting that data and getting it into your system.
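To give a flavor of what one such combination looks like in practice, here is a hedged sketch that streams a Kafka topic into HDFS using Spark’s Structured Streaming API (the successor to the DStream-based Spark Streaming listed above). The broker address, topic name and paths are placeholders, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
# Hedged sketch: ingesting a Kafka topic into HDFS with Spark Structured Streaming.
# Broker, topic and paths are placeholders; the spark-sql-kafka connector is assumed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "clickstream")
       .load())

# Kafka delivers key/value as bytes; cast to strings before storing
events = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Land the stream in HDFS as Parquet files for downstream analytics
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/raw/clickstream")
         .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
         .start())

query.awaitTermination()
```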

Reasons why insurers need AI to combat fraud ahead of time.


The insurance industry consists of more than 7,000 companies that collect more than $1 trillion in premiums annually, providing fraudsters with huge opportunities to commit fraud using a growing number of schemes. Fraudsters are successful too often. According to FBI statistics, the total cost of non-health insurance fraud is estimated at more than $40 billion a year.

Fighting fraud is like aiming at a constantly moving target, since criminals constantly hone and change their strategies. As insurers offer customers additional ways to submit information, fraudsters find a way to exploit new channels, and detecting issues is increasingly challenging because threats and attacks are growing in sophistication. For example, organized crime has found a way to roboclaim insurers that set up electronic claims capabilities.

Advanced technologies such as artificial intelligence (AI) can help insurers keep one step ahead of perpetrators. IBM Watson, for instance, helps insurers fight fraud by learning from and adapting to changing business rules and emerging nefarious activities. Watson can learn on the fly, so insurers don’t have to program in changes to sufficiently protect against evolving fraud at all times.


Here are four compelling reasons insurers need to begin to address fraud with sophisticated AI systems and machine learning that can continuously monitor claims for fraud potential:

  1. The aging workforce. There are many claims folks who are aging out and will soon retire, taking years of knowledge with them. Seasoned adjusters often rely on their gut instinct to detect fraud, knowing which claims just don’t seem right, based on years of experience. However, incoming claims staff don’t have the experience to know when a claim seems suspicious. Insurers need to seize and convert that knowledge, getting it into a software program or an AI program so that the technology can capture the experience.
  2. Evolving fraud events and tactics. Even though claims people may have looked at fraud the same way for years, the environment surrounding claims is always changing, enabling new ways to commit fraud. Fraud detection tactics that may have worked 6 months ago might not be relevant today. For instance, several years ago when gas prices were through the roof, SUVs were reported stolen at an alarming rate. They weren’t really stolen however — they had just become too costly to operate. Now that gas prices have gone down, this fraud isn’t happening as often. If an insurer programs an expensive rule into the system, 6 months later economic factors may change and that problem may not be an issue anymore.
  3. Digital transformation. Insurers are all striving to go digital and electronic. As they make claims reporting easier, more people are reporting claims electronically, stressing the systems. At the same time, claims staffing levels remain constant, so the same number of workers now have to detect fraud in a much higher claims volume.
  4. Fighting fraud is not the claim handlers’ core job responsibility. The claim adjuster’s job is to adjudicate a claim, get it settled and make the customer happy. Finding fraud puts adjusters in an adversarial situation. Some are uncomfortable with looking for fraud because they don’t like conflict. A system that detects fraud enables adjusters to focus on their areas of expertise.

In the past, insurance organizations relied heavily on their experienced claims adjusters to identify potentially fraudulent claims. But since fraudsters are turning to technology to commit crimes against insurance companies, carriers need to turn to technology to help fight them. Humans will still be a critical component of any fraud detection strategy, however. Today, insurance organizations need a collaborative human-machine approach, since they can’t successfully fight fraud with just one tactic or one system. To fight fraud, humans need machines, and machines need human intervention.
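As one illustration of the machine side of that partnership, the sketch below uses an unsupervised anomaly detector (scikit-learn’s IsolationForest) to flag unusual claims for human review. The features, sample values and contamination setting are hypothetical; a production system would combine rules, multiple models and adjuster feedback.

```python
# Illustrative sketch only: flag unusual claims for human review with an
# unsupervised anomaly detector. Features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [claim_amount, days_since_policy_start, prior_claims_count]
claims = np.array([
    [1200,  400, 0],
    [900,   800, 1],
    [1500,  650, 0],
    [25000,  12, 3],   # unusually large claim very soon after inception
    [1100,  300, 0],
])

model = IsolationForest(contamination=0.2, random_state=0).fit(claims)
flags = model.predict(claims)          # -1 = anomalous, 1 = normal

for row, flag in zip(claims, flags):
    if flag == -1:
        print("Route to adjuster for review:", row)
```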

Here’s how regulatory intelligence aids strategic decision-making in real time


Data is all around us. It’s created with everything we do. For the life sciences industry, this means data is being collected faster and at a greater rate than ever before. Data takes the form of structured content — from clinical trials, regulatory filings, manufacturing and marketing, drug interactions and real-world evidence — with regard to how drugs are used in healthcare settings. It also is found in unstructured content from the internet of things (IoT), such as social media forums, blogs and so on.

But having massive quantities of data is useless without the regulatory intelligence to make sense of it. Let’s define what we mean by regulatory intelligence: taking multiple data sources and feeding them into a regulatory system that can analyze the data, collect information from it, and then distribute that information where it needs to go. This might be to regulatory agencies requesting updates or information about the drug portfolio to satisfy compliance mandates; it might be to partners you’re working with, such as trading partners; or it might be consumed internally.

Although referred to as regulatory intelligence, it encompasses many other areas of the product life cycle, including clinical research and development for detailed analysis and safety and pharmacovigilance for signal detection.

Life sciences companies can leverage these different types of data for real-time decision making to protect public safety, respond to supply shortages, protect the brand, advance the brand — for example, into new indications or new markets — and for many other purposes. In this blog, I’ll explore some of these uses of regulatory intelligence in greater depth.

Know your target

Since data is consumed across the life sciences in different ways by different people and different functions, getting to the point of intelligence first requires knowing the target and objective. If there is real-world data indicating adverse events that weren’t detected in clinical trials, having that intelligence early on allows companies to act accordingly — both to protect public safety and to safeguard brand reputation. What action the company takes will depend on what the data shows, as well as what the agencies require. For example, it might simply be to reinforce a message about avoiding other medications or foods while undergoing a specific treatment or it might require a broader response.

Another way data can be leveraged for real-time strategic decision making is to advance the brand. For example, IoT data or data held by the authorities might show weakness in a competitor’s product or weakness in the market — perhaps a gap in a region the company has begun targeting. By leveraging that intelligence, companies can take advantage of those gaps or competitor weaknesses and promote their brand as a better alternative or prepare a new market launch.

Regulatory intelligence might also shine light on other potential indications for your product. These insights might be gathered from IoT sources, such as physician blogs, or from positive side effects observed in clinical trials. The most famous example is Viagra, which initially was studied as a drug to lower blood pressure. As was the case here, not all side effects are negative, and during clinical studies an unexpected side effect led to the drug’s being studied and ultimately approved for erectile dysfunction. Having that regulatory intelligence available gives you the leverage to make the case for expanding clinical studies into new indications and extending therapeutic use.


From data to intelligence

Now that we have explored the definition of and some purposes for regulatory intelligence, we should also look at how you get from that point of data to intelligence. An important first step is to deploy the right analytical tool to sift through that data and pull out relevant information. It’s equally important to know how to make use of that data, and that requires knowing your end goal and narrowing the scope of your data search to eliminate extraneous data.

Time and resources can also be saved by leveraging automation to collect data for analysis. Since data is continuously being created, updated and pushed out, automated robotic processes make it possible to keep up to date with the latest findings and pull relevant data into your regulatory operational environment.
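A bare-bones sketch of that kind of automated collection is shown below: it polls a feed and keeps only the items relevant to a portfolio. The URL, the keyword list and the assumption that the feed returns a JSON list of items with a “title” field are all hypothetical; a real deployment would target specific agency sources and feed an RPA or ETL pipeline.

```python
# Hedged sketch of automated collection: poll a feed, keep relevant items.
# The endpoint and the expected JSON shape are hypothetical assumptions.
import requests

FEED_URL = "https://example.org/regulatory-feed.json"   # placeholder endpoint
KEYWORDS = {"recall", "labeling", "shortage", "adverse event"}

def fetch_relevant(url: str) -> list:
    # Assumes the feed returns a JSON list of objects with a "title" field
    items = requests.get(url, timeout=30).json()
    return [item for item in items
            if any(k in item.get("title", "").lower() for k in KEYWORDS)]

if __name__ == "__main__":
    for item in fetch_relevant(FEED_URL):
        print(item["title"])
```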

Regulatory intelligence is the key to real-time strategic decision making across all areas of research and development. Its importance to the organization can’t be overstated.

Digital health, not genomics! The future of precision medicine.


What does the term precision medicine mean to you? Typically, people think of precision medicine as being about genomics, but it goes well beyond molecular biology to encompass everything that moves us away from a one-size-fits-all approach to medicine. As far back as 1969, Enid Balint, formerly in charge of the training and research course for general practitioners at the Tavistock Clinic in London, published a paper on “The possibilities of patient-centered medicine,” and described precision medicine as the field that understands the patient as a unique human being.

The question, therefore, is: How do we do that? Certainly, genomics has been widely touted. But another area at the forefront of precision medicine is digital health technology, which Steven Steinhubl, MD, of Scripps Research Translational Institute, addressed in his presentation, “Precision Medicine and the Future of Clinical Practice.” Digital technology moves us in the direction of understanding each patient and away from the current practice of defining health in ways that make little sense to many people. Further in this blog, I am expanding on key elements of Steven’s talk to present a different perspective on precision medicine. While many of the messages in this blog have been raised by Steven, I’d like to offer my perspective as well.

So, what exactly is wrong with current practice in our healthcare system? For starters, it is based on a model in which, when you get sick or hurt, you see a doctor and you get fixed. There is little to no incentive for doctors to keep you healthy, and the system rewards them on what is called “activity-based funding” rather than “outcome-based funding” or “value-based care”.

As for population-based benchmarks, they actually don’t work for you as an individual. Let’s take wellness recommendations, such as walking 10,000 steps a day or eating a certain amount of proteins and carbohydrates each day. We know that some people need more and some need fewer carbohydrates and that the 10,000-step benchmark is fairly meaningless at an individual level.

Time to stop the generic trials

As mentioned by Steven, precision medicine is, in fact, already here in several settings. The most prominent is optometry, where an eye exam determines your specific needs, and an optometrist prescribes a pair of glasses tailored entirely to your current condition. You can also pick a model of frame and material that fits your lifestyle (e.g., sports or work) and your taste in fashion. Without this specific focus, you would end up with a generic pair of glasses that might not suit your needs and lifestyle.

Medicine needs to adopt the same approach by moving away from a generic approach to clinical studies and towards trials that focus on individual responses to therapy. In his article “Personalized medicine: Time for one-person trials,” Nicholas J. Schork looks at the 10 most-prescribed drugs and notes that for every person they help, they fail to improve the condition of between three and 24 people. Some drugs, such as statins, benefit as few as one in 50 people, and some are even harmful to certain ethnic groups because clinical trials have typically focused on participants of European background.

Dosage is also seldom geared towards the individual. We know it’s possible to do this, because there are companies that provide dose recommendations based on pharmacokinetic drug models, patient characteristics, medication concentrations, and genotype.

Generally, however, we don’t know who will benefit from a drug and who won’t. While genomics plays a key role, there are multiple other factors that have an impact on outcomes, including our environment (e.g., city vs. rural), having access to good produce or being limited to convenience store food (e.g., doughnuts vs. fruits and veggies), whether we live in a cold or hot climate, whether we live in an industrial area with pollution, and what our work and family environment is like. Taking all these factors and more into account is essential if we are to treat each person as unique.

With the growing realisation about these effects, more clinicians are turning to digital technology, deploying internet of things (IoT) sensors and smartphones to improve patient outcomes. A study of 2,000 Americans shows that the average person uses his or her smartphone 80 times per day, so why not leverage it as part of a care plan? The fact is that people are already using their phones for health, with one out of 20 Google searches being health-related.


Setting baselines with sensors

Many people already use sensors and apps to check their vitals, and these provide far more relevant information than standard measures of what is “normal” for sleep patterns, heart rate, blood pressure, glucose, temperature and stress. The context in which these measures are taken varies dramatically. For example, maybe it is normal for my stress and blood pressure levels to rise when I’m rock climbing, and perhaps a pregnant woman can expect her sleep pattern to change.

Expanding on Steven’s idea, wearable IoT devices are redefining the human phenotype (i.e., all of the observable physical properties of an individual) by performing unobtrusive and continuous monitoring of a wide range of characteristics unique to each of us. This will allow us to define our “normal” blood pressure when we are stressed. After all, do you really need to worry if your blood pressure rises when you’re stuck in traffic after a busy day at the office?

Sensor technology enables continuous monitoring, so you can create a baseline and compare your own readings. When something doesn’t feel right, you’ll be able to go back and compare it to a day when you did feel right the month before. This is a far better measure of your own health.
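Here is a minimal sketch, using only Python’s standard library, of what that looks like in practice: build a personal baseline from a month of readings and flag a day that falls outside it. The sample values and the two-standard-deviation rule are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch of a personal baseline from wearable readings.
# Sample values and the two-standard-deviation rule are illustrative only.
from statistics import mean, stdev

resting_hr_last_30_days = [62, 61, 63, 60, 64, 62, 61, 63, 62, 60,
                           61, 63, 62, 64, 61, 62, 63, 60, 62, 61,
                           63, 62, 61, 60, 64, 62, 63, 61, 62, 63]

baseline = mean(resting_hr_last_30_days)
spread = stdev(resting_hr_last_30_days)

def check(today_hr: float) -> str:
    """Compare today's reading against the individual's own baseline."""
    if abs(today_hr - baseline) > 2 * spread:
        return f"{today_hr} bpm is outside your personal range ({baseline:.0f}±{2*spread:.0f})"
    return f"{today_hr} bpm is within your personal range"

print(check(71))   # an elevated resting heart rate can be an early warning sign
```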


For example, a study into temperature shows that although your normal temperature is said to be around 37 degrees C, an individual’s normal temperature can vary from 33.2 degrees C up to 38.4 degrees C. This means that if your normal temperature is 33.2 degrees C and you are running a 37-degree C temperature, you have a pretty severe fever, but most doctors won’t realise this because they don’t know your normal temperature.

Another study shows that although the average daytime heart rate is around 79, a person’s normal heart rate can vary from 40 to 90. This data comes from Fitbit’s analysis of 100,000 people’s resting heart rates, and the difference matters when treating a patient for a heart condition. So obviously, you can’t apply a population average to your own body. This is important because trends in your heart rate could reveal early signs of influenza, for example.

The challenge for people who have wearables (like me, yeah, I own a Fitbit … how cool am I?), is that we’re not quite sure what to do with all that data.

Following this trend, the National Institutes of Health in the United States has created the All of Us Research Program, the largest precision medicine longitudinal study ever performed, which aims to follow 1 million people from all walks of life for decades. The program will provide a set of IoT wearable sensors to the participants and then correlate this data with their clinical data from the healthcare ecosystem — hospitals, family practitioners, specialists, etc.

This study differs from your typical research study because this program will provide insights on the data to its participants, so they can improve their health in real time.

Today, anyone has access to wearable technology; it’s relatively cheap and easy to use, and it gives you real-time insights into your own health. Don’t be afraid to build your own baseline and talk to your doctor. As more people and clinicians embrace wearables and apps, we’ll start to see a broader shift towards precision medicine supported by both genomics and digital health.

How to make your enterprise analytics platforms more data-democratized


It wasn’t that long ago that data was a necessary but costly business byproduct that many companies shelved on leftover and decommissioned hardware, and only because they were legally required to do so. That’s changed, of course. Data’s value has grown exponentially in just the last few years because we’ve found that when you combine, analyze and exploit it in the right ways, it can tell you some amazing things about your company and your customers.

A big step in that direction is the concept of “data democratization.” The idea is simple. When you make data available to anyone at any time to make decisions, without limits related to access or understanding, you’re able to realize the full value of the data you maintain. Where IT was once the gatekeeper of data, new tools and technologies help any user gain access. Insights from that data can be developed by anyone, not just a data engineer or data scientist.

Case in point: Many analytics platforms offer some level of universal access to information, but the ability to use it is inherently restricted to people who understand how to use complex analytics tools. However, self-service tools, like Zaloni, are helping to democratize those analytics platforms. By combining drag-and-drop user interfaces with a powerful data catalog used to search for data, these tools can help non-technical users identify relevant data sources and create new datasets tailored specifically for an analytics task.

 


Data democratization isn’t just a benefit for end users; it liberates data scientists as well. With users able to run their own queries, data scientists and engineers can spend more time identifying data sources, preparing them for ingestion, and cleaning and documenting them for use.

Implementing modern, self-service enabled tools raises new questions about security and privacy, so it’s important for companies to have governance in place that ensures data is carefully managed. Additionally, anyone who plans to use these tools still needs training, not only on how to use the tools, but on how to ask questions and seek insights that are valuable to the company. Having governance in place for your self-service tools ensures data privacy and data quality, provides data lineage, and allows a company to apply role-based access control to data. Zaloni’s UI, for example, provides self-guided access so users can easily find pertinent data and get answers to their questions. In today’s highly regulated world, right-sized data governance and role-based security have become a requirement, not just a nice-to-have.
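To make the role-based piece concrete, here is a simplified sketch of how a governance layer might filter what each role can see in a data catalog. The roles, datasets and masked columns are hypothetical; platforms such as Zaloni implement this through their own governance and catalog features.

```python
# Simplified sketch of role-based access control in front of a data catalog.
# Roles, datasets and masking rules are hypothetical examples.
ROLE_GRANTS = {
    "analyst":        {"sales_summary", "web_traffic", "customer_detail"},
    "data_scientist": {"sales_summary", "web_traffic", "customer_detail"},
}

# Sensitive columns hidden from specific roles on specific datasets
MASKED_FOR = {"analyst": {"customer_detail": {"email", "ssn"}}}

def visible_columns(role: str, dataset: str, columns: set) -> set:
    """Return the columns a role may see; an empty set means no access at all."""
    if dataset not in ROLE_GRANTS.get(role, set()):
        return set()
    return columns - MASKED_FOR.get(role, {}).get(dataset, set())

print(visible_columns("analyst", "customer_detail",
                      {"name", "email", "ssn", "segment"}))   # masked view
print(visible_columns("marketing", "customer_detail", {"name"}))  # no grant -> set()
```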

Many companies have been accumulating vast troves of data that contains a lot of unrealized value. Implementing tools that give everyone access to that data and help them explore new ideas and connections is likely to result in some surprising and valuable discoveries.
