Dynamic Healthcare System: Blurring barriers between payer and provider


Recent headlines have been full of news about major healthcare mergers and acquisitions, often involving newcomers to the industry, but also creating a convergence of traditional payer, provider and pharmaceutical benefit management companies.

Here are some of the latest examples in the changing healthcare scene:

CVS Health, a large pharmaceutical benefit manager, is purchasing Aetna, a large insurer, while Cigna, another large insurer, is acquiring Express Scripts, another pharmaceutical benefit manager.

Meanwhile, tech giants Amazon and Apple took some giant steps into the healthcare fray. Amazon entered into a joint venture with Berkshire Hathaway and J.P. Morgan Chase in an effort by all three to control employer costs, and Amazon also purchased PillPack, an online pharmacy company, and expects to expand services after obtaining state licenses. Apple showed its commitment to shaking up the healthcare status quo by expanding its personal health record system, forming partnerships with hospitals and opening A.C. Wellness centers – all with the goal of gaining greater influence over healthcare consumption.

The convergence moves the industry away from the traditional separation of payers (health insurance companies and self-insured employers) and providers. Typically, payers are defined as the organizations that conduct actuarial analysis and manage financial risk by collecting premiums and managing payments for services delivered. Providers, meanwhile, have typically been defined as healthcare practitioners and organizations that deliver and bill for services, including inpatient, outpatient, elective and emergent.

Those narrow definitions have been shaken up in the post-Affordable Care Act (ACA) world. In the past, the focus was on fee-for-service and capitated contracts under which HMOs or managed care organizations paid providers a fixed amount per member. But the ACA moved the emphasis to value-based care, pushing more financial risk onto providers and away from payers. That means insurers and providers also need to consider how they manage pre-existing conditions and use risk scoring to determine the likely needs of their patients, as their approach can make the difference between profitable success and unprofitable failure.
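To make the risk-scoring idea concrete, here is a minimal, hypothetical sketch in Python of how a payer or provider might rank members by expected need. The condition weights and age factor are illustrative placeholders, not an actuarial model.

    # Minimal, illustrative risk-scoring sketch. The condition weights and
    # age factor are hypothetical placeholders, not an actuarial model.
    CONDITION_WEIGHTS = {
        "diabetes": 3.0,
        "hypertension": 2.0,
        "copd": 4.0,
        "heart_disease": 5.0,
    }

    def risk_score(conditions, age):
        """Combine condition weights with a simple age factor."""
        base = sum(CONDITION_WEIGHTS.get(c, 1.0) for c in conditions)
        age_factor = 1.0 + max(0, age - 40) * 0.02  # +2% per year over 40
        return round(base * age_factor, 2)

    members = [
        {"id": "M001", "age": 67, "conditions": ["diabetes", "hypertension"]},
        {"id": "M002", "age": 45, "conditions": ["copd"]},
    ]

    # Rank members by estimated need so care-management outreach can be prioritized.
    for m in sorted(members, key=lambda m: risk_score(m["conditions"], m["age"]), reverse=True):
        print(m["id"], risk_score(m["conditions"], m["age"]))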

In this new and complex environment, mergers and acquisitions are seen as a way for both providers and payers to build up their capabilities and respond to the need to enhance patient care, improve population health and reduce costs.

For traditional healthcare incumbents, we believe this also means using a “secret” weapon non-traditional players already leverage: data analytics.

Better data and analytics life cycle management can yield the insights payers and providers need to balance their priorities and deliver value-based care.


How to balance risk and patient outcomes

But first, what do all of these changes entail, and how do they take providers and payers beyond their narrower definitions?

In the post-ACA world, providers are looking to take more financial risk as their actuarial capabilities improve. This would allow them to negotiate more effectively with payers to achieve care outcomes objectives while balancing reimbursement and risk.

Payers, meanwhile, are acquiring doctors’ offices and other providers, or combining with retail clinics and other points of care, to combine care delivery with financial risk management. To accomplish these goals, payers need to take a more active role in managing the healthcare professionals they employ as well as the patients who visit those practitioners. Having direct access to the care delivery setting also allows payers to capture more accurate encounter and risk data.

Managing these activities – by both the provider and the payer – needs to go beyond just financial management. It needs to include operational excellence, using robust data analytics to communicate with people and organizations delivering care. It also requires having performance-level agreements and bidirectional communication in place to measure and monitor reasonable objectives set by both payer and provider. Indeed, collaboration and communication will be crucial to overcome tensions that are building as providers try to deliver on value-based contracts. Finding a way to integrate insights from the back-end will help to ensure both the payer and provider perspectives are understood.

Use data to your advantage

A balance between the needs of the provider and the payer – while prioritizing the needs of the patient – will require change management and deeper insights on what works, what doesn’t and how outcomes for all stakeholders can be adjusted and improved. Those insights must be based on hard data, which will require more robust data, analytics and IT infrastructure. Organizations will need to deploy data and analytics life cycle management – including input, ingestion, management, storage and data utility. Integrated workflows make it easy to collect better, well-rounded encounter data, improving how providers work and increasing provider and patient satisfaction.
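As a rough illustration of that life cycle, the sketch below strings the stages together as a simple pipeline. The stage names and checks are assumptions for illustration, not a prescribed architecture.

    # Illustrative data life cycle pipeline: input -> ingestion -> management -> storage -> utility.
    # Stage behaviour is simplified; a real platform would add schema registries,
    # quality rules, lineage capture and access controls at each step.

    def ingest(raw_records):
        """Ingestion: parse and validate incoming encounter records."""
        return [r for r in raw_records if r.get("member_id") and r.get("code")]

    def manage(records):
        """Management: standardize fields and tag each record with provenance."""
        return [{**r, "code": r["code"].upper(), "source": r.get("source", "unknown")} for r in records]

    def store(records, warehouse):
        """Storage: append curated records to a (here, in-memory) warehouse."""
        warehouse.extend(records)
        return warehouse

    def utility(warehouse):
        """Data utility: a simple aggregate payers and providers might both consume."""
        counts = {}
        for r in warehouse:
            counts[r["code"]] = counts.get(r["code"], 0) + 1
        return counts

    raw = [{"member_id": "M001", "code": "e11.9", "source": "clinic"},
           {"member_id": None, "code": "i10"}]          # dropped at ingestion
    warehouse = store(manage(ingest(raw)), [])
    print(utility(warehouse))                            # {'E11.9': 1}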

That data needs to encompass all parts of the healthcare continuum, meaning patient experience as well as provider and payer data. For this to happen, payers and providers must ensure better consumer engagement by spurring patients to take charge of their own care and using the data provided by patients to improve insights. Being able to see the patient's end-to-end experience makes it possible to adjust each piece of the care chain accordingly.

Brave new healthcare environment

This brings us full circle to the changing industry dynamics and the entry of non-traditional players into the healthcare arena, since the big tech players such as Amazon, Apple and Alphabet know how to leverage data analytics to gain customer insights. As healthcare incumbents build and acquire assets, they will need to match these capabilities and build on their own strengths to ensure they aren’t left behind in this brave new healthcare environment.

The Internet of Things aiding Healthcare


There’s so much talk across healthcare about electronic medical records (EMRs). For many, they seem to be the answer to every question, solving all the problems of healthcare. At a recent Health Information Technology WA (Western Australia) conference in Perth, for example, three plenary speakers on the main stage were touting their benefits. Unfortunately, the reality is quite different.

Looking at global trends and the shift to value-based care, I believe there’s ample reason to question whether electronic medical records are actually the right approach, especially when the objective has changed from a hospital-centric approach to a patient-focused model that goes beyond the walls of the hospital. There’s also every reason to question whether they are a sound investment. For example, since 2011, the United States has spent $38.4 billion implementing 30-year-old EMR technology in hospitals, according to a 2018 Centers for Medicare & Medicaid Services report. Yet despite successfully computerising health practices, data is still largely locked into hospital systems, and sharing data across health systems remains difficult.

With the healthcare model shifting towards prevention and personalized care, providers and payers are rethinking their approach, and instead are turning to technologies such as the internet of things (IoT) to engage patients, improve outcomes and bring down the cost of care.


From patient to customer

One healthcare organization that took a truly innovative approach to a customer-centric healthcare model is an academic health centre based in the United States. Renowned for its population health studies, the centre’s former chief executive officer wanted to engage patients as consumers, based on a simple objective — to keep those with chronic diseases out of hospital.

The project began with the creation of an innovation group, headed by a chief experience officer overseeing a multi-disciplinary team from customer-centric industries such as hospitality, publishing, entertainment and automobiles. Most notably, there were no technologists from EMR/EHR (electronic health record) vendors within this group. To this progressive team, the health centre added clinicians, who were given access to over 30 million patient records dating back 30 years to analyze the social determinants affecting chronic illnesses such as hypertension, diabetes, chronic obstructive pulmonary disease (COPD) and heart disease.

Based on a set of algorithms, the team was able to identify three social determinants that have the greatest impact on chronic disease:

  1. Access to transportation – Can you get to and from your job and school easily?
  2. Access to good food – Do you have access to quality produce, or is the only store accessible from your house a 7-Eleven selling “convenience” food?
  3. Access to education – Is there a good school in your area with good teachers?
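The article does not describe the centre's actual algorithms, but a hedged sketch of the general idea (rank each access factor by how strongly it is associated with a chronic-disease flag) could look like the following; the records are synthetic and the method is deliberately simplified.

    # Hypothetical sketch: rank social-determinant factors by how strongly each one
    # is associated with a chronic-disease flag in synthetic patient records.
    # This is not the centre's actual method, just the general shape of the analysis.
    records = [
        {"transport": 0, "good_food": 0, "education": 1, "chronic": 1},
        {"transport": 1, "good_food": 1, "education": 1, "chronic": 0},
        {"transport": 0, "good_food": 1, "education": 0, "chronic": 1},
        {"transport": 1, "good_food": 0, "education": 1, "chronic": 0},
    ]

    def disease_rate(rows):
        return sum(r["chronic"] for r in rows) / len(rows) if rows else 0.0

    def impact(factor):
        """Difference in chronic-disease rate between patients without and with access."""
        without = [r for r in records if r[factor] == 0]
        with_ = [r for r in records if r[factor] == 1]
        return disease_rate(without) - disease_rate(with_)

    for factor in sorted(["transport", "good_food", "education"], key=impact, reverse=True):
        print(f"{factor}: +{impact(factor):.2f} chronic-disease rate without access")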

But how do you get good information from patients/consumers on these issues, given that surveys typically have low participation, with only 30 to 40 per cent of people taking part?

Mobile apps and IoT devices are part of the solution. Unfortunately, most apps are focused on a single condition or health issue, rather than the factors that influence the patient’s overall health: socio-economic determinants, environment, health behaviours and the quality of healthcare received.

Three months later, the innovation group released a mobile app as a proof of concept.

As part of the programme, patients were given a kit that included a Microsoft wristband, a Bluetooth blood pressure cuff, an inhaler and a weight scale, all connected to the app. In addition to health monitoring data, the app also captured lifestyle data, such as whether the patient smokes, exercises, etc.
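A minimal sketch of how readings from those connected devices, plus self-reported lifestyle answers, might be normalized into a common observation format is shown below. The field names and thresholds are assumptions for illustration only.

    # Illustrative sketch: readings from connected devices (blood pressure cuff,
    # scale, inhaler) plus self-reported lifestyle data normalized into one
    # observation format. Field names and thresholds are assumptions.
    from datetime import datetime, timezone

    def normalize_reading(device, payload):
        """Convert a raw device payload into a common observation format."""
        return {
            "device": device,
            "observed_at": datetime.now(timezone.utc).isoformat(),
            **payload,
        }

    def needs_follow_up(observation):
        """Flag readings a care coordinator may want to review (illustrative thresholds)."""
        if observation["device"] == "bp_cuff":
            return observation["systolic"] >= 140 or observation["diastolic"] >= 90
        if observation["device"] == "scale":
            return observation.get("weight_change_kg", 0) > 2.0
        return False

    obs = normalize_reading("bp_cuff", {"systolic": 152, "diastolic": 96, "smoker": True})
    print(obs["device"], "follow-up needed:", needs_follow_up(obs))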

Scaling outcomes

The pilot was a huge success, but the next step was to scale it to 4,000 patients, which was going to be another significant challenge, considering that the nurse-to-patient ratio is about one nurse for 20 to 40 patients. So, the centre started looking at customer relationship management (CRM) solutions.

Once the digital platform was in place, the innovation group had to design a new operating model that would support these 4,000 patients. After testing a few configurations, the team landed on a “pod” model consisting of one nurse and two health navigators — non-clinical support staff focused on customer relationship management. Because the system works by exception, the care coordinators are notified by the platform when an interaction with the patient is required. The rest is automated by the platform, sending reminders and analysing patterns using IoT monitoring devices and advanced predictive analytic models.
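The "works by exception" idea can be sketched as a simple triage loop: routine observations are handled by automated reminders, and only rule violations are surfaced to the pod. The rules and thresholds below are illustrative assumptions, not the centre's actual configuration.

    # Minimal sketch of a manage-by-exception loop: the platform handles routine
    # reminders automatically and only surfaces exceptions to the pod's nurse and
    # health navigators. Rules and thresholds are illustrative assumptions.
    def triage(observations):
        exceptions, routine = [], []
        for obs in observations:
            if obs["type"] == "bp" and obs["systolic"] >= 160:
                exceptions.append({"patient": obs["patient"], "reason": "very high blood pressure"})
            elif obs["type"] == "missed_reading" and obs["days"] >= 3:
                exceptions.append({"patient": obs["patient"], "reason": "no readings for 3+ days"})
            else:
                routine.append(obs)  # handled by automated reminders, no human action
        return exceptions, routine

    observations = [
        {"patient": "P17", "type": "bp", "systolic": 171},
        {"patient": "P42", "type": "bp", "systolic": 128},
        {"patient": "P09", "type": "missed_reading", "days": 4},
    ]
    exceptions, routine = triage(observations)
    print(f"{len(exceptions)} exceptions for the pod, {len(routine)} handled automatically")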

Success with such a large group of people requires engaging with patients where they are and in a way they can relate to. Thanks to the data gathered, the centre knew a lot about these consumers. For example, it knew that most prefer to be contacted by text message and that most were fans of the show Game of Thrones. With this knowledge, on the evening of the season finale, the team reached out to hypertension patients with a simple message: “Tonight is the big night for Game of Thrones, and we know you might get excited, so don’t forget to take your blood pressure before the show, and take your meds if required. Have a good night and enjoy the show!” As trivial as this seems, it is details like this that engage people and empower them to make lifestyle changes.

After 12 months, the new platform and engagement model have given the centre huge insights, including enabling providers to predict future chronic disease patients with high levels of accuracy, and have delivered significant outcomes. Here are a few numbers that I find very compelling: the centre has achieved 95 per cent customer satisfaction, a 23 per cent reduction in emergency services costs, and a 36 per cent reduction in the total cost of care.

Increasingly, no matter the healthcare model, the objective must be to improve health outcomes and keep patients out of hospital as much as possible, not only because it’s better for the patient but also to improve financial outcomes and allow health centers and hospitals to focus on truly innovative, cutting-edge care delivery. That’s not something that can be achieved with an EMR.

Driving Digital Transformation through AI


Rapid innovation and productivity breakthroughs require an accelerated digital transformation strategy that melds people, business processes, advanced analytics, and new human/machine interaction technologies.

Today, it is the supervised machine learning segment of AI that is generating the most economic value. But as digital transformation accelerates, the abundance of data that AI can consume will drive the speed of AI adoption even faster, including its unsupervised learning segment.

Ask Alexa to summarize the meeting minutes

We need only look at how quickly conversational AI (CAI) has become part of our everyday lives as we query Alexa, Siri or Cortana. But in the enterprise, the interactions can be extremely complex, such as “Hey <CAI>, summarize the minutes and action items from the recording of the last board meeting.” We are limited only by our imagination and — significantly — access to high-quality, well-organized data.
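As a very rough sketch of how such a request could be decomposed, the snippet below routes the utterance to a transcription step and a summarization step. Both transcribe() and summarize() are placeholder stubs, not real assistant or speech-to-text APIs.

    # Hypothetical sketch of how a conversational-AI request such as
    # "summarize the minutes and action items from the last board meeting"
    # could be decomposed. transcribe() and summarize() are placeholder stubs.
    def transcribe(recording_path):
        """Stub: a speech-to-text service would return the meeting transcript."""
        return "Budget approved. ACTION: Maria to circulate hiring plan by Friday."

    def summarize(transcript):
        """Stub: an NLP model would condense the transcript; here we just split out actions."""
        actions = [line for line in transcript.split(". ") if line.upper().startswith("ACTION")]
        return {"summary": transcript.split(". ")[0], "action_items": actions}

    def handle_request(utterance, recording_path):
        if "summarize" in utterance.lower() and "meeting" in utterance.lower():
            return summarize(transcribe(recording_path))
        return {"error": "request not understood"}

    print(handle_request("Summarize the minutes from the last board meeting", "board_2024-06.wav"))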

The accelerated AI adoption will in turn drive better understanding of how to customize AI for the relevant business context and drive digital transformation to new levels. It will provide instant measures of business performance down to the smallest task, leading to more predictable business outcomes, as well as enhance productivity and 24×7 business operations through automation of business processes and algorithmic work.


Manage advanced analytics as assets

As AI permeates every facet of the organization, organizations will need industrialized AI with strong governance and data quality. They will need to manage analytics models as assets to avoid algorithmic bias, retrain analytics models in a timely manner and ensure that data privacy and regulatory policies are properly implemented.
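One way to picture "managing models as assets" is a registry in which every model carries an owner, a training date and monitoring thresholds, so stale or degraded models are flagged for retraining. The fields and thresholds below are assumptions for illustration.

    # Illustrative sketch of treating models as managed assets: each registry entry
    # carries ownership, training date and monitoring thresholds, so stale or
    # drifting models can be flagged for retraining. Fields are assumptions.
    from datetime import date

    MODEL_REGISTRY = [
        {"name": "churn_model", "owner": "marketing-ds", "trained_on": date(2024, 1, 15),
         "max_age_days": 90, "min_accuracy": 0.80, "last_accuracy": 0.76},
        {"name": "claims_triage", "owner": "ops-ds", "trained_on": date(2024, 5, 1),
         "max_age_days": 180, "min_accuracy": 0.85, "last_accuracy": 0.88},
    ]

    def needs_retraining(entry, today=None):
        today = today or date.today()
        too_old = (today - entry["trained_on"]).days > entry["max_age_days"]
        degraded = entry["last_accuracy"] < entry["min_accuracy"]
        return too_old or degraded

    for entry in MODEL_REGISTRY:
        if needs_retraining(entry):
            print(f"Retrain {entry['name']} (owner: {entry['owner']})")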

As we become better at blending advanced analytics technologies with how we think and work, there will be massive implications for how we run our companies and live our lives. It will be up to all of us to make sure that advanced analytics are used for ethical purposes.

Organizations should define their long-term AI objectives, clearly understand where and how new business value will be created, and design their digital journey maps. Once a business outcome and measurable business value is identified, organizations should proceed with developing analytics and AI/Machine Learning models and implement them in business operations.

Here’s how AI is transforming business processes


The rise of artificial intelligence (AI) to drive business value has been truly incredible in recent years. Enterprises that recognize the power of AI and know how to effectively apply it to their business can reap significant rewards in a quickly evolving and hyper-competitive marketplace.

Just a few years ago, analytics was all about gaining insights from data to help make better business decisions. More recently, enterprises have been seeing massive benefits from adding AI to the analytics mix that is designed to transform and strategically influence business processes. The effective application of AI within business process transformation can produce many benefits including more efficient operations, faster delivery and reduced costs.

But how is AI actually helping companies transform business processes?

There are two primary ways AI is doing this today.  Some enterprises are actively implementing AI programs company-wide as part of their core functionality. And there are other companies that are building AI into their business in a sequential, controlled way via managed proof-of-concepts (POCs) to address particular aspects of their operations.


In one case, AI is used as a tool to speed up the corporate buying process. It does this in two ways: by recommending suitable suppliers who should be invited to a tendering process, and by quickly sifting through dozens of supplier submissions and creating a ranking to identify the best supplier agencies for bespoke projects. This type of accelerated procurement can shave weeks off the selection process and save up to 20% on project budgets.

On the other side of the spectrum is a large UK retail chain with thousands of stores across the country. This company is implementing AI in a controlled way through its corporate transformation department and rolling it out to each store. The retailer is conducting live trials of the system and can run A/B tests of different approaches across 10% or 20% of its stores from the get-go, something that was not possible before.
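A minimal sketch of how such a trial slice might be assigned is shown below: hashing each store ID against the trial name gives a stable 10% or 20% test arm without maintaining a separate list. The store IDs and percentage are placeholders, not the retailer's actual setup.

    # Illustrative sketch of assigning, say, 20% of stores to an AI pricing trial
    # deterministically, so the same stores stay in the test arm across runs.
    # Store IDs, trial name and percentage are placeholders.
    import hashlib

    def in_trial(store_id, trial_name, percent):
        """Hash store + trial name to a stable bucket in [0, 100)."""
        digest = hashlib.sha256(f"{trial_name}:{store_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent

    stores = [f"store-{i:04d}" for i in range(1, 1001)]
    test_arm = [s for s in stores if in_trial(s, "ai-pricing-v1", 20)]
    print(f"{len(test_arm)} of {len(stores)} stores in the AI pricing test arm")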

This retailer is seeing success by implementing AI to assist with critical business functions such as setting prices and managing stock. AI is taking on the work previously performed by humans, including analyzing pertinent purchasing data to set prices intended to keep products flying off the shelves and boost profit margins.

From the two cases above, we’ve seen that currently the best performance in AI applications is achieved when AI is combined with humans; where AI does the core number crunching and recommendations and humans oversee the process. This helps humans concentrate on fine-tuning and making improvements on those initial recommendations.

Time and budget savings are achieved during the corporate buying process due to the presence of AI. And for the retailer, cost savings are clearly achieved as AI does the routine processing work that was previously performed manually by humans.

These uses of AI yield fewer mistakes and demonstrate how AI can support efforts to optimize spend, ultimately impacting the bottom line. For both companies, AI is the go-to solution for solving business problems.

Humans will always have a place in transforming business processes, that goes without saying. But AI is quickly becoming an invaluable automation tool to drive efficiencies and reduce costs.

Still no viable method of transporting data from autonomous car tests


Behind the scenes at locations around the world, automakers are running tests on autonomous cars for literally thousands of hours. The industry has poured more than $80 billion into R&D on autonomous cars over the last four years, so it is serious about making this happen.

Those of us working on these tests have one overwhelming challenge: how to manage all the data that gets generated during the tests. One eight-hour shift can create more than 100 terabytes of data. In a week of testing multiple cars, we’re talking about petabytes of data. And often — at rural testing centers, for example — Internet bandwidth speeds are simply insufficient to ensure that the data reaches our data centers in North America, Europe and Asia at the end of the test day.


Right now, we have two main ways to transport data back to a data center. They are both cumbersome, but have different plusses and minuses. Until advances in technology make these challenges easier to manage, here’s what we do today:

  • Connect the car to the data center. Test cars generate about 28 terabytes of data in an hour, and it takes 30 to 60 minutes to offload that data by sending it to the data center over a fiber optic connection (see the rough bandwidth calculation after this list). While this is a time-consuming option, it remains viable in cases where the data gets processed in somewhat smaller increments.
  • Take/ship the media to a special station. In many situations the data loads are too large, or fiber connections are unavailable (e.g., at geographically remote test locations such as deserts, ice lakes and rural areas), to upload data directly from the car to the data center. In these cases we remove a plug-in disk from the car and take it or ship it to a “Smart Ingest Station” where the data is uploaded to a central data lake. Because it only takes a couple of minutes to swap out the disks, the car stays available for testing. The downside of this option is that we need to have several sets of disks, so compared to Option 1 we are buying time by spending money.
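Here is the rough bandwidth calculation referenced above. The 28 terabytes per hour figure comes from the text; the effective link rates and efficiency are assumptions, which is roughly consistent with the 30-to-60-minute offload times quoted above.

    # Back-of-the-envelope offload time for the fiber option above.
    # Assumes ~28 TB generated per test hour (from the text) and illustrative
    # effective link rates; real-world throughput will vary.
    def offload_minutes(data_tb, link_gbps, efficiency=0.8):
        """Time to move data_tb terabytes over a link_gbps link at given efficiency."""
        data_bits = data_tb * 8e12            # terabytes -> bits (decimal TB)
        seconds = data_bits / (link_gbps * 1e9 * efficiency)
        return seconds / 60

    for link in (100, 200):                    # assumed effective link rates in Gbit/s
        print(f"{link} Gbit/s: {offload_minutes(28, link):.0f} minutes for 28 TB")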

In three to five years we may get to the point where both options are outmoded by advances in technology that make it possible for the computers in the car to run analysis and select the needed data. If the test car could isolate the test-car video on, for example, right-hand turns at a stop light, the need to send terabytes of data back to the main data center would be alleviated and the testers could send these smaller data sets over the Internet.

Of course, we’re several years away from having such a capability. In the past year, IBM and Sony have been working on a 330-terabyte tape cartridge that promises faster and more resilient data storage in a form factor that can fit in the palm of your hand. Once such products are commercialized, they should make our lives a bit easier.

Ultimately, we’d like the ability to move our various equipment easily in and out of hotel rooms and carry it on plane trips in our pockets or briefcases. Today, the equipment is often clunky and hard to move around. While technology can help, we have to be realistic and understand the data challenges surrounding autonomous cars are likely to increase exponentially.  The challenges may grow, but at least sometime soon the gear we use won’t be so cumbersome that our muscles ache at the end of the day.

Five essential pillars of AI-enabled business


Successful AI implementations rarely hinge on the unique innovation of a specific algorithm or data science technique. Those are important factors, but even more foundational to successful AI enablement are the core data operations and enabling platforms. These act as the fuel and chassis of the AI machine that a business must build and evolve for continued competitive advantage.

Here are the five foundational elements to be addressed to enable a successful transformation to an AI-empowered business:

1. Define an integration strategy for embedding AI and analytic insights into business operations

Successful digital transformations focus on evolving and optimizing business operations through the better use of data assets combined with modern technologies such as machine learning, AI and robotics. These paradigm shifts result in the creation of new operating patterns rather than simply more efficient legacy operations. In this way, digital transformation represents enterprise operations the way the business wants to be run, rather than the way it has been running due to the technical and operational limitations and barriers constraining it.

To go beyond siloed or single-use insights and fully benefit from AI and analytics, the business must first decide how it needs to function in the future. Determining your business transformation priorities, and then evaluating the advanced technology and data science options for addressing them, is a key step towards maturing into a data-driven enterprise. This understanding will identify the type of AI and analytics that will be most beneficial for your business and the technology required to accomplish it. Additional thoughts on overall data strategy can be found in the white paper “Defining a data strategy: An essential component of your digital transformation journey.”


2. Establish a holistic data and analytics platform

Selecting and configuring an integrated set of technologies to support data management and applied analytics is a complex challenge. Fortunately, solutions to such technical integration have matured in recent years into pre-built core platform components and best practices that can be accelerated and augmented further through value-added third party software and partner services.

Cloud-based modular platform environments bring together technical flexibility and financial elasticity with an ever-maturing technical set of capabilities, including interoperability across hybrid environments that include legacy on-premises deployments and geographical federation. In addition to open source components, such platforms include the option to integrate select native modules and commercial technology components for broader flexibility and a customizable architecture that can be deployed as prebuilt services for simpler adoption and integration.

The tools to support and enable AI integration into business operations are beginning to leverage the same capabilities they enable. For example, data pipeline tools are beginning to use machine learning (ML), metadata tools are using AI and ML to identify content and auto-generate the metadata on the fly, and user interfaces are embedding chatbot and digital assistant AI technology to guide end-users through the complexities of data science for accelerated insights.  By adopting toolsets and platforms that have embedded AI and analytics in their core, the use and integration of AI into business operations will be more natural and accelerated across the enterprise community.

3. Know your data

Fully understanding the data your enterprise has access to may seem like a fundamental need when supporting operational reporting and analytics within the enterprise. Many organizations, however, stop with simple source systems listings and maybe some high-level business definitions and schemas.

Truly knowing your data includes a lineage-based view of where the data comes from and what business process it represents, what operations are performed on it prior to your access, what transformations are performed thereafter, the associated level of quality and, of course, the core “Vs” of big data: volume, velocity, variety, veracity and value.

Building an easily searchable, enterprise-wide data catalog of information is one of the first steps towards empowering the enterprise with data. Exposing the catalog to a crowdsourced editing model ensures richer content and wider adoption of such information across the enterprise.
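A data catalog entry along these lines might record lineage, business meaning and quality in one searchable structure. The sketch below uses assumed field names rather than any specific catalog product's schema.

    # Illustrative sketch of a searchable data-catalog entry that records lineage,
    # business meaning and quality notes. Field names are assumptions.
    catalog = [
        {
            "dataset": "claims_curated",
            "description": "Adjudicated medical claims, one row per claim line",
            "source_systems": ["claims_intake", "provider_portal"],
            "transformations": ["dedupe", "ICD-10 normalization"],
            "owner": "data-governance@corp.example",
            "quality": {"completeness": 0.97, "last_profiled": "2024-06-01"},
            "tags": ["PHI", "finance", "monthly"],
        },
    ]

    def search(term):
        term = term.lower()
        return [e["dataset"] for e in catalog
                if term in e["description"].lower() or term in (t.lower() for t in e["tags"])]

    print(search("claim"))   # ['claims_curated']
    print(search("phi"))     # ['claims_curated']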

4. Control and govern your data

Understanding the types of controls and governance your data needs is a natural extension of knowing your data. By reviewing the types of data and their business content with associated metadata, enterprises can align and define proper governance and compliance policies tied to internal policies and to external standards such as HIPAA for healthcare, PCI DSS for secure payments, and GDPR and other privacy regulations governing personally identifiable information (PII).

It is also important that source data retains its original state integrity without over processing or over-filtering it. Aligning to the data pipeline workflow principles of “ingest, refine, consume” allows the same data to be leveraged efficiently for different uses with different policies and operational needs while ensuring security.  Such controls can also be extended to support and define quality standards required for using the available data and to trigger any necessary control processes to correct or adjust for deviations in such standards.
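The "ingest, refine, consume" principle can be sketched as keeping the raw record untouched while purpose-specific copies carry the policy treatment (masking here is a stand-in for whatever controls apply). The field names and masking rule are illustrative assumptions.

    # Illustrative sketch of "ingest, refine, consume": raw data is kept intact in
    # the ingest zone, while policies (masking here is a stand-in) are applied only
    # when refined copies are produced for a given use.
    import copy

    def ingest_zone(record):
        """Store the record as received; no filtering, preserving original-state integrity."""
        return copy.deepcopy(record)

    def refine_for(record, purpose):
        """Produce a purpose-specific copy; the raw record is never modified."""
        refined = copy.deepcopy(record)
        if purpose == "analytics":                       # e.g., de-identify for analysts
            refined["member_name"] = "REDACTED"
        return refined

    raw = ingest_zone({"member_name": "Jane Doe", "claim_total": 1250.00})
    analytics_view = refine_for(raw, "analytics")
    billing_view = refine_for(raw, "billing")
    print(raw["member_name"], "|", analytics_view["member_name"], "|", billing_view["member_name"])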

You can safeguard proper policy compliance, improve ease of use and increase trust and adoption by the end user community by ensuring that governance controls are built into your data management operations from the start.

5. Simplify access to your data

To further expand the adoption of AI and analytics, it is important to simplify and automate data workflows and the use of analytical tools. Reducing manual process overhead can significantly improve time to market and quality of results. Providing clear and flexible governance allows enterprises to control such access without it becoming a barrier for use.

Self-service leads to rapid user community adoption and better integration of data and insights into business operations. By reducing the dependency on IT resources for complex data integration and preparation tools, average business users can interact with the data through simple common interfaces and receive results in simple and easily consumable formats.

 

Once these foundational elements are in place, organizations can take full advantage of the unique value proposition offered by advanced analytics and AI. And they can do so with the confidence that the resulting solutions are enterprise-grade in their scalability, security, quality and usability. It is this kind of confidence that leads to business user adoption and, in turn, successful digital transformation.

Digital Technologies: Transforming pharma’s customer value chain


Pharmaceutical companies struggle with a complex and, often, poorly managed partner, customer and distribution network. It’s not surprising, given the makeup of most large pharma companies. Large, often disconnected product portfolios are built through discovery — both internally and externally with academia and biotech partners — and global clinical trials, using a network of clinical research organizations, investigators, other experts and patients. Suppliers, distributors and often contract manufacturers are all integral to making and supplying products. And at the customer level, companies work with healthcare practitioners, pharmacists, payers and patients.

This web of partners and customers is growing in complexity — both logistically and from a compliance point of view. Yet this way of doing business remains the same. Communication and requests are conducted via email and through call centers without a connected and intelligent way of routing work and queries. Service level agreements are often poorly developed, and governance processes are often inconsistent across the distribution and customer ecosystem.

While different parts of the pharmaceutical business are deploying digital technologies, an opportunity exists to transform the customer and partner value chain with progressive digital tools and platforms. Customer service and support centers are now implementing artificial intelligence (AI) by analyzing both structured and unstructured data, and leveraging natural language processing (NLP) for omnichannel engagement models with customers. But how is this actually achieved?

A single source of truth

Synchronizing systems around the customer for a customer-centric approach begins with bringing together data from disparate sources and creating a single view of the truth through a common data model. In this way, companies have a big-picture view of every customer service request, including the distribution chain.
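A minimal sketch of that single view is shown below: records from assumed CRM, case and shipment systems are folded into one structure keyed by customer ID. The system and field names are placeholders for illustration.

    # Illustrative sketch of building a single customer view: records from disparate
    # systems are mapped into one common model keyed by customer ID. System and
    # field names are assumptions.
    from collections import defaultdict

    crm_records = [{"cust_id": "C100", "name": "Northside Pharmacy", "segment": "retail"}]
    case_records = [{"customer": "C100", "case_id": "SR-881", "status": "open"}]
    shipments = [{"cust": "C100", "order": "ORD-17", "eta": "2024-07-02"}]

    def unify(crm, cases, ships):
        view = defaultdict(lambda: {"profile": None, "open_cases": [], "shipments": []})
        for r in crm:
            view[r["cust_id"]]["profile"] = {"name": r["name"], "segment": r["segment"]}
        for c in cases:
            if c["status"] == "open":
                view[c["customer"]]["open_cases"].append(c["case_id"])
        for s in ships:
            view[s["cust"]]["shipments"].append({"order": s["order"], "eta": s["eta"]})
        return dict(view)

    print(unify(crm_records, case_records, shipments)["C100"])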

Once the data is in place, the next step to improving customer engagement and ensuring regulatory compliance is to embed the common platform with digital tools and technologies. Combining NLP with AI, machine learning and workflow automation enables increased customer engagement with sound governance and improved compliance.


How do these digital technologies improve engagement and compliance?

Customer service support and the operations space have evolved over the years from a manual, labor-intensive and software-centered business model to a more dynamic multichannel customer engagement business model. This new model facilitates omnichannel engagement with the customer using multiple devices. AI- and machine learning-powered chatbots are being leveraged for quick response management, and digital capabilities are tightly integrated with intelligent workflow management tools.

For example, today a customer can raise a service request through an email, phone call or a text message, or even talk to a live chat agent. Digitally enabled customer service engagement centers can now seamlessly bring requests from different channels into one homogeneous customer engagement platform for action. From there, the request is processed using digital tools to identify the intent of the case or request — who it is aimed at, what the objective is — and to create groups in which to classify the case based on importance. This is achieved by using NLP to create an entity score, match this score with a subgroup and route it to the right place to ensure proper follow-up.
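A highly simplified sketch of that classify-and-route step appears below. A production system would use trained NLP models to score intent; here simple keyword overlap stands in for the entity score, and the intents and queues are assumptions.

    # Simplified sketch of the classify-and-route step: score a request against
    # intent keywords, pick the best match and route it to a queue. Keywords,
    # queues and priorities are illustrative assumptions.
    INTENTS = {
        "adverse_event": {"keywords": {"reaction", "side", "effect", "hospitalized"}, "queue": "pharmacovigilance", "priority": 1},
        "order_status":  {"keywords": {"order", "shipment", "delivery", "tracking"},  "queue": "distribution",      "priority": 3},
        "medical_info":  {"keywords": {"dosage", "interaction", "indication"},        "queue": "medical_affairs",   "priority": 2},
    }

    def route(message):
        words = set(message.lower().split())
        scored = {name: len(words & spec["keywords"]) for name, spec in INTENTS.items()}
        best = max(scored, key=scored.get)
        if scored[best] == 0:
            return {"queue": "general_support", "priority": 4, "intent": "unknown"}
        spec = INTENTS[best]
        return {"queue": spec["queue"], "priority": spec["priority"], "intent": best}

    print(route("Patient reported a severe reaction and was hospitalized"))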

The customer may then choose to follow up with a phone call or through a chatbot or online feedback form. This is where an AI capability (or the more traditional customer service agent) should be able to view all of the various communication forms and frame the response accordingly.

To achieve this, AI and ML tools learn from previous interactions, continuously improving the quality of responses. The AI learning also needs to extend to compliance, adherence to service-level agreement (SLA) guidelines, and any regulatory restrictions on what can and cannot be shared. For example, if a customer asks for the available stock of a particular drug, U.S. government regulations do not allow the pharma company to address that question, so the response needs to be framed appropriately. The rules will be different in each country, so the AI/ML-enabled automated response app should be able to learn and adapt accordingly.

Preconfigured responses based on the type of request are then built using data science and AI/ML techniques. AI and ML capabilities also help to determine the urgency or sensitivity of a case, and how best to ensure that compliance requirements are met within the required timelines and SLA metrics.

In addition, analytics will play a key role in verifying, validating and improving customer service. Predictive models can be used to strengthen the human response team by understanding peak cycles, such as a new drug launch, natural disaster areas and so on.

By taking a progressive digital approach to managing the communication network with partners and customers, companies can mitigate many problems while improving customer engagement. This is enabled by having a single view of the customer, using robust data analytics capabilities thanks to AI and ML — to predict risk and compliance needs, and ensuring that the company is always ready for regulatory inspections and has the necessary information at hand. With the emergence of AI and ML techniques, it has become easier to achieve customer engagement needs with more enriched analytics and insights, thus allowing enterprises to not only automate customer engagements but also excel in customer experience.

Ways to improve data processing in self-driving cars


Autonomous cars promise to change the face of transportation, offering many more mobility options for individual motorists and companies alike. In moving forward with this new technology, our automotive clients have a very important challenge to overcome: processing the petabytes of data that gets collected during the development and testing of autonomous driving systems.

KPIs have always been important to car makers. They are necessary to attain road approvals and to track key competitive differentiators. With autonomous cars, however, car makers are accumulating – and must find ways to process and manage – 10, 20, sometimes 30 times as much data as before.

As a result, they need much more efficient data analysis tools that can help them analyze the data for the specific autonomous car KPIs they are looking for. To make this happen, they need to take the following four steps:

  1. Make sure the car’s sensors are working. There are typically eight to 12 sensor systems in an autonomous vehicle test car. It’s important to look at the data at the very beginning of the workflow by checking the KPIs to ensure that the system works properly. Some of the KPIs car testers evaluate include the following: vehicle operations, safety, environmental impact and in-car network efficiency.
  2. Scale the workflow to process the data. Traditional architectures of automotive frameworks are not suited to the large-scale data processing workloads required for testing the algorithms used in autonomous car tests. With traditional data storage methods, vehicle test data is stored in NAS-based storage and then transferred to workstations, where engineers test algorithms under development. This process has two downsides:
    • Large amounts of data must be moved, requiring considerable time and network bandwidth.
    • Individual workstations do not offer the massive computing power required to return test results fast enough.

    Today, testers are extracting each frame of video data with its associated Radar, Lidar and sensor data by using open source Hadoop. The major benefit of Hadoop is that it scales processing and storage to hundreds of petabytes. This makes it a perfect environment for testing autonomous driving systems.

  3. Make the most of data analytics. In processing petabytes of automotive data, we have to look at how we present the data to higher-level services. New data analysis tools can read different automotive formats to give us proper levels of access to the metadata and data. For example, if we have 700 video recordings, we now have tools that can pinpoint footage from the front-right camera alone to show how the car performed making right-hand turns (see the sketch after this list). We can also use the footage to determine the accuracy of a model depicting the autonomous car’s perception of its ambient physical surroundings.
  4. Run the data analysis. In the end, we want to use data analysis tools to give R&D engineers a complete view of how the car has performed in the field. We want to generate information on how the systems will react under normal driving conditions.
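Here is the sketch referenced in step 3 above. The metadata fields and tags are assumptions, and at petabyte scale this filtering would run on a Hadoop or Spark cluster rather than in plain Python, but the shape of the query is the same.

    # Illustrative sketch of step 3: filter recording metadata down to front-right
    # camera segments tagged as right-hand turns. Field names and tags are
    # assumptions; at petabyte scale this would run on a Hadoop/Spark cluster.
    recordings = [
        {"file": "run_0001.cam", "camera": "front_right", "maneuver": "right_turn", "duration_s": 12},
        {"file": "run_0002.cam", "camera": "front_left",  "maneuver": "right_turn", "duration_s": 11},
        {"file": "run_0003.cam", "camera": "front_right", "maneuver": "lane_change", "duration_s": 8},
        {"file": "run_0004.cam", "camera": "front_right", "maneuver": "right_turn", "duration_s": 14},
    ]

    def select_segments(recs, camera, maneuver):
        return [r for r in recs if r["camera"] == camera and r["maneuver"] == maneuver]

    right_turns = select_segments(recordings, "front_right", "right_turn")
    print(len(right_turns), "segments,", sum(r["duration_s"] for r in right_turns), "seconds of footage")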

Overcoming these data analysis challenges is critical. Manufacturers can’t obtain permits for releasing their cars until they can show that the cars performed up to certain standards in road tests. And when autonomous cars do start to hit the roadways in the next few years, auto manufacturers might need the KPIs they generated in testing. A few accidents are inevitable and, when questions arise, car makers can use KPIs to show the authorities, insurance companies and the general public how the cars were tested and that proper due diligence was performed.

Right now, there’s some distrust among the driving public of autonomous cars. It will take a massive public relations effort to convince consumers that autonomous cars are safer than traditional manually driven cars. But proving that case all starts with the ability to process the data more efficiently.

Significance of “design for operations” approach for service-based IT


To deliver on digital transformation and improve business performance, enterprises are adopting a “design for operations” approach to software development and delivery. By “design for operations” we mean that software is designed to run continuously, with frequent incremental updates that can be made at scale. The approach takes into consideration the end-to-end costs of delivering and servicing the software, not just the initial development costs. It is based on applying intelligent automation at scale and connecting ever-changing customer needs to automated IT infrastructure. DevOps is the set of practices that do this, enabled by software pipelines that support Continuous Delivery.


The challenge: Design for operations

Products and services pass through various stages of design evolution:

  • design for purpose (the product performs a specific function)
  • design for manufacture (the product can be mass produced)
  • design for operations (the product encompasses ongoing use and the full product life cycle)

Automobiles are a good example: from Daimler’s horseless carriage, to Ford’s Model T and finally to Toyota’s Prius (or anything else that’s sold with a service plan). Including the service plan means the auto maker incurs the costs of servicing the car after it’s purchased, so the auto maker is now responsible for the end-to-end life cycle of the car. Information technology is no different — from early code-breaking computers like Colossus, to packaged software such as Oracle, and then to software-based services like Netflix.

The key point is that software-based services companies like Netflix have figured out that they own the end-to-end cost of delivering their software, and have optimized accordingly, using practices we now call DevOps.

There are efficiencies that can be achieved only with software designed for operations. This means that companies running bespoke software (designed for purpose) and packaged software (designed for manufacture) have a maturity gap, where the liability is greater than the value. If that gap can be closed, delivery can be better, faster and cheaper (no need to pick just two).

It’s essential to close that gap, because if competitors can deliver better, faster and cheaper, that puts them at an advantage. This even includes the public sector, since government departments, agencies and local authorities are all under pressure to deliver higher quality services to citizens with lower impact on taxation.

The reason we “shift left”

A typical outcome of the design-for-purpose approach is that functional requirements (what the software should do) are pursued over nonfunctional requirements (security, compliance, usability, maintainability). As a result, things like security get bolted on later. In many cases, this deferred work starts to accrue as technical debt — that is, decisions that may seem expedient in the short term become costly in the longer term.

The concept of “shifting left” is about ensuring that all requirements are included in the design process from the beginning. Think of a project timeline and “shifting left” the items in the timeline, such as security and testing, so they happen sooner. In practice, that doesn’t have to mean lots of extra development work, as careful choices of platforms and frameworks can ensure that aspects such as security are baked in from the beginning.

A good example of contemporary development practices that support this is manifested when we ask, “How do we know that this application is performing to expectations in the production environment?” This moves way past “Does it work?” and starts asking “How might it not work, and how will we know?”
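A minimal sketch of that shift in question is a health check that reports against a latency budget rather than simple liveness. The threshold and timing loop below are illustrative assumptions, not a specific monitoring product.

    # Minimal sketch of asking "is this service performing to expectations in
    # production?" rather than just "does it work?": a health check that reports
    # against a latency budget, not just liveness. Thresholds are illustrative.
    import time

    LATENCY_BUDGET_MS = 250      # assumed service-level objective

    def handle_request():
        time.sleep(0.05)          # stand-in for real work
        return "ok"

    def health_check(samples=5):
        latencies = []
        for _ in range(samples):
            start = time.perf_counter()
            handle_request()
            latencies.append((time.perf_counter() - start) * 1000)
        worst = max(latencies)
        status = "healthy" if worst <= LATENCY_BUDGET_MS else "degraded"
        return {"status": status, "worst_latency_ms": round(worst, 1), "budget_ms": LATENCY_BUDGET_MS}

    print(health_check())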

Enterprises need to adopt a “design for operations” model that includes a comprehensive approach to intelligent automation that combines analytics, lean techniques and automation capabilities. This approach produces greater insights, speed and efficiency and enables service-based solutions that are operational on Day 1.

All about Operationalized Analytics


Organizations with a high “Analytics IQ” have strategy, culture and continuous-improvement processes that help them identify and develop new digital business models. Powering these capabilities is the organization’s move from ad hoc to operationalized analytics.

Seamless data flow

Operationalized analytics is the interoperation of multiple disciplines to support the seamless flow of data, from initial analytic discovery to embedding predictive and prescriptive analytics into organizational operations, applications and machines. The impact of the embedded analytics is then measured, monitored and further analyzed to circle back to new analytics discoveries in a continuous improvement loop, much like a fully matured industrial process.
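That loop can be sketched as a periodic comparison of each embedded model's observed business impact against what was expected at deployment, with under-performers sent back to discovery. The models and KPI figures below are synthetic placeholders.

    # Illustrative sketch of the continuous-improvement loop: measure the business
    # impact of each embedded model and send under-performers back to discovery.
    # The model names and KPI figures are synthetic placeholders.
    deployed = [
        {"model": "next_best_offer", "expected_lift": 0.08, "observed_lift": 0.09},
        {"model": "churn_alerting",  "expected_lift": 0.05, "observed_lift": 0.01},
    ]

    def improvement_loop(models, tolerance=0.5):
        """Flag models whose observed lift falls below tolerance * expected lift."""
        back_to_discovery = [m["model"] for m in models
                             if m["observed_lift"] < m["expected_lift"] * tolerance]
        return back_to_discovery

    print("Send back to discovery:", improvement_loop(deployed))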

An example of operationalized analytics is the industrialized AI utility depicted below. It enables automatic access and collection of data, ingesting and cleaning of the data, agile experimentation through automated execution of algorithms, and generation of insights.

[Figure: DataOps – an industrialized AI utility]

Operationalized analytics builds on hybrid data management (HDM), an HDM reference architecture (HDM-RA), and an industrialized analytics and AI platform to enable organizations to implement industrial-strength analytics as a foundation of their digital transformation.

Operationalized analytics encompasses the following:

  • Data discovery includes the data discovery environment, methods, technologies and processes to support rapid self-service data sharing, analytics experimentation, model building, and generation of information insights.
  • Analytics production and management focuses on the processes required to support rigorous treatment and ongoing management of analytics models and analytics intellectual property as competitive assets.
  • Decision management provides a clear understanding of, and access to, the information needed to augment decision making at the right time, in the right place and in the right format.
  • Application integration incorporates analytics models into enterprise applications, including customer relationship management (CRM), enterprise resource planning (ERP), marketing automation, financial systems and more.
  • Information delivery of relevant and timely analytics information to the right users, at the right time and in the right format is enabled by self-service analytics and data preparation. This improves the ease and speed with which organizations can visualize and uncover insights for better decision making.
  • Analytics governance is the set of multidisciplinary structures, policies, procedures, processes and controls for managing information and analytics models at an enterprise level to support an organization’s regulatory, legal, risk, environmental and operational requirements.
  • Analytics culture is key, as crossing the chasm from ad hoc analytics projects to analytics models integrated into front-line operations requires a cultural shift. Merely having a strong team of data scientists and a great technology platform will not make an impact unless the overall organization also understands the benefits of analytics and embraces the change management required to implement analytically driven decisions.
  • DataOps is an emerging practice that brings together specialists in data science, data engineering, software development and operations to align development of data-intensive applications with business objectives and to shorten development cycles. DataOps is a new people, process and tools paradigm that promotes repeatability, productivity, agility and self-service while achieving continuous analytics model and solution deployments. DataOps further raises Analytics IQ by enabling faster delivery of analytics solutions with predictable business outcomes.