Afraid of Machines? Chill! Don’t Be.

Rise of the machines

When water cooler conversation turns to movies and lands on The Matrix, what scene first comes to mind? Is it when the film’s hero-in-waiting, Neo, gains self-awareness and frees himself from the machines? Or Agent Smith’s speech comparing humans to a “virus”? Or maybe the vision of a future ruled by machines? It’s all pretty scary stuff.

Although it featured a compelling plot, The Matrix wasn’t the first time we’d explored the idea of technology gone rogue. In fact, worries about the rise of the machines began to surface well before modern digital computers.

The rapid advance of technology made possible by the Industrial Revolution set off the initial alarm bells. Samuel Butler’s 1863 essay, “Darwin among the Machines,” speculated that advanced machines might pose a danger to humanity, predicting that “the time will come when the machines will hold the real supremacy.” Since then, many writers, philosophers and even tech leaders have debated what might become of us if machines wake up.


What causes many people the most anxiety is this: we don’t know exactly when machines might cross that intelligence threshold, and once they do, it could be too late. As the late British mathematician I. J. Good observed, designing a machine intelligent enough to improve on itself would set off an “intelligence explosion”, equivalent, as he put it, to letting the genie out of the bottle. Helpfully, he also argued that because a superintelligent machine can improve itself, it is the last invention we’ll ever need to make. So that’s a plus, right?

There are other perspectives on the matter — that fears of a machine-led revolution are largely overblown.

Like technological advances that came before, artificial intelligence (AI) won’t create new existential problems. It will, however, offer us new and powerful ways to make mistakes. It’s smart to take some preventive measures, such as alerts that tell you when the machine is starting to learn things that are outside your ethical boundaries.

Autonomous Driving Using AI


Tech companies and the auto industry are working hard in tandem to make autonomous driving a reality. Driverless cars with varying levels of human participation will roll out in stages over the next few years, with fully autonomous SAE Level 5 driving expected on the scene by 2030.

Today, most automotive manufacturers have achieved Level 2 assisted driving, where the car itself can manage simple scenarios, such as active lane centering and parking assistance. Fewer manufacturers provide Level 3 autonomous driving, where the car can autonomously navigate a traffic jam or a route to a destination. At both levels, human drivers can take the wheel if they choose.


The limitations of AI that prevent advancement to fully autonomous driving

From an engineering standpoint, Level 3 autonomous driving is powered by two things: hard-coded structured programming models mostly written for embedded systems and deterministic rules that make decisions supported by neural networks.

These two things combine to build AI driving agents, but with at least five important limitations:

  1. Lack of perception and behavioral intelligence compared to humans. Unlike existing AI agents trained with machine learning (ML), humans don’t need thousands of images of trees, for example, to recognize a tree or identify a driving situation.
  2. Low accuracy. With existing tools, steering accuracy decreases as more autonomous driving functions and components are added. A complex real-world driving system delivers only about 60 to 70 percent accuracy on motion control for steering and acceleration, well short of what’s required for fully autonomous driving.
  3. Inability to cope with complexity. Deterministic rules work in closed environments, such as a contained driving course, but can’t capture the complexity of real-world driving situations.
  4. Excessive data requirements. ML models usually require enormous amounts of data, which is too expensive and difficult to collect and move over existing corporate networks.
  5. Excessive run-time, CPU/GPU processing and storage demands (up to exabytes). It takes a lot of time and power to process large volumes of automotive data and learn from them, and that’s often not cost-effective.
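To make limitation 3 concrete, here is a toy sketch of a deterministic rule layer sitting on top of a learned perception step. Every function name, field and threshold below is invented for illustration and is not drawn from any real driving stack.

```python
# Hypothetical sketch: a deterministic rule layer acting on the output of a
# learned perception model. All names and thresholds are illustrative only.

def perception_model(frame):
    """Stand-in for a neural network; returns detected objects with confidences."""
    # In a real stack this would run a trained network on camera/lidar data.
    return frame  # here the "frame" is already a list of detections

def plan_action(detections, speed_mps):
    """Deterministic rules: the same detections and speed always yield the same action."""
    for obj in detections:
        if obj["kind"] == "pedestrian" and obj["distance_m"] < 30:
            return "brake"
        if obj["kind"] == "vehicle" and obj["distance_m"] < speed_mps * 2:
            return "slow_down"
    return "maintain_speed"

frame = [{"kind": "pedestrian", "distance_m": 12, "confidence": 0.97}]
action = plan_action(perception_model(frame), speed_mps=15)
print(action)  # brake
```

Note how brittle this is: any situation not anticipated by a rule falls through to a default, which is exactly why hand-written rules struggle outside contained environments.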

For the industry to evolve toward fully autonomous driving, technologists have to develop an AI model that mirrors human driving behavior. In doing so, we need to guarantee deterministic behavior that always produces the same result from the same input. The industry needs a new approach.

The big question then remains: How will the car act autonomously and intelligently in real time, in the real world?

We believe a big part of the answer lies in adapting knowledge of the human brain to AI, and we have drawn much of the inspiration for our new approach from brain science research conducted by Danko Nikolic at the Frankfurt Institute for Advanced Studies (FIAS) in Germany.

Adapting innovative brain research to AI and the production of fully autonomous vehicles has emerged as one of the more exciting technology breakthroughs of the past couple of years.

While it will take time, the benefits to society of producing self-driving vehicles, and of simply learning more about how the brain works along the way, hold great promise for human progress.

Embracing Digital Transformation in Healthcare


Data-driven digital transformation opens opportunities for healthcare providers, policy makers and patients to move toward personalized healthcare by collecting and sharing new kinds of data. The next wave of productivity gains will not come simply from the delivery of information and messages from one place to another, but from the cross-linked aggregation of a more complete body of information. While the transition requires an investment in new technologies and new ways of doing business, the tools are rapidly maturing, and costs are coming down.

Denmark has been at the forefront of health data exchange for more than two decades. It began in 1994 with the creation of MedCom, a nonprofit organization owned by the Ministry of Health and various local government entities, which designed a range of healthcare data exchange standards. MedCom also enforced a strict policy of compliance, which led to countrywide adoption.

The initiative established Denmark as a global leader in data sharing and in empowering patients to be more involved in their own treatment. A key aspect of the country’s digital health initiative is its web portal, which gives patients secure access to health data, including information on their treatments, visits to their doctors and notes from their hospital records.

Now the rest of the world has caught up, and the old standards are competing with those that have global reach, such as Health Level Seven International (HL7)’s Fast Healthcare Interoperability Resources (FHIR) standard. The new standards offer new data-sharing options and provide a far richer and more impactful set of options to healthcare providers.

While messages to and from clinical applications still have relevance, data sharing capabilities that enable true and actionable insights are growing in importance. The new data sharing models have the potential to transform healthcare, supporting digital transformation and moving healthcare toward progressive business models. The ability to share clear, consistent patient data is integral to driving patient-centric care, since patients now demand that healthcare organizations interact with them through multiple communication channels and have a deep understanding of factors that may affect their health.

But is this paradigm shift from messages toward rich data consumption easy for providers to adopt? Not if you still treat your electronic health record (EHR), radiology information system (RIS) and laboratory information system (LIS) as big monoliths and data repositories where isolated, specific data is shared as messages.


Let me give an example. When I was a hospital chief information officer and wanted some new functionality in our EHR system, I needed to go through several hospital and vendor approval processes. In the end, it might take two years to get the change implemented, since the development roadmap didn’t leave much room for my innovative ideas. What I needed, but did not realize at the time, was access to data outside the applications where the data resided. The problem is that the apps themselves are not built for data sharing, yet they contain vast amounts of invaluable data from across the enterprise: clinical, administrative, logistics, infrastructure and more. In the past, medical use cases tended to draw from a single source, such as the EHR, but today’s use cases draw data from apps, medical devices and perhaps even sensors.

Here’s a problem that hospitals encounter: the outbreak of a contagious disease. To quickly mitigate a health crisis, the hospital needs to know within 12 hours what items and which people have been exposed: medical devices, staff, relatives and so on. To gain that insight, those managing the problem need data from a real-time location system, booking data, clinical data and data from a medical device database. But how do you make sure that the data is accessible outside of those apps? This is what digital transformation is about in healthcare: setting the data free and transforming through innovation, with actionable insights, advanced analytics and other cutting-edge capabilities built upon your data.
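As a rough sketch of what "setting the data free" might look like in the outbreak scenario, the following joins made-up records from a location system, a device database and clinical data to list exposures. Every record, field name and identifier here is hypothetical.

```python
# Illustrative sketch: tracing exposure by joining data that normally lives in
# separate hospital systems. All records and names below are fabricated.

location_log = [  # from a real-time location system
    {"room": "ward-3", "person": "nurse-07", "hour": 9},
    {"room": "ward-3", "person": "visitor-12", "hour": 10},
    {"room": "icu-1", "person": "nurse-02", "hour": 9},
]
device_log = [  # from a medical device database
    {"room": "ward-3", "device": "infusion-pump-4", "hour": 9},
]
index_case = {"room": "ward-3", "hours": {9, 10}}  # from clinical data

# Cross-system join: who and what shared the room during the exposure window?
exposed_people = {r["person"] for r in location_log
                  if r["room"] == index_case["room"] and r["hour"] in index_case["hours"]}
exposed_devices = {r["device"] for r in device_log
                   if r["room"] == index_case["room"] and r["hour"] in index_case["hours"]}
print(sorted(exposed_people), sorted(exposed_devices))
```

The point is not the ten lines of Python but that none of this is possible while each log stays locked inside its own application.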

It’s not only hospitals and clinicians who need these advanced insights. Today’s empowered patients require those insights to improve and manage their own care, whether that’s insights from digital devices for remote monitoring of their conditions or communication with clinicians to help drive personal health goals.

I believe it is time for health economies to look at how they will integrate and connect their existing systems with new digital technologies and merge the data locked inside to generate meaningful, actionable insights — both to inform personalized, patient-driven clinical care and to push the development of new treatments, pathways or services. Organizations that embrace change and transformation will emerge as winners in a world that demands first-class clinical care, better patient experiences and reduced costs.

While Denmark has led the way on health data exchange, advanced global standards and new digital technologies create the landscape for all health economies to embrace patient-centered care initiatives enabled by connecting data across the broader health ecosystem.

Beyond the Clinical Silo: Presenting Comprehensive Data


For the past 15 years, the electronic health record (EHR) has been the cornerstone of digital hospitals and a primary repository for clinical data. The strategy for most healthcare providers has been to integrate as much data as possible into the EHR so that clinicians need to work with only “one single source of truth” when treating their patients.

However, there is a problem with EHRs. They are simply not designed to incorporate data from other sources, such as medical devices, asset management systems, location services, wearables or any other secondary non-clinical data. These sources of data offer tremendous value for improving health outcomes but, because of the difficulty in incorporating these data sources, are not being used.

In the coming years, most healthcare providers will prioritize aggregating new data sources (and thereby exploring new use cases). They will be seeking to not only implement new business models, but also to leverage the investments they’ve already made in new capabilities and technologies centered around internet of things (IoT) platforms, artificial intelligence, big data and robotics.

Storing such data in a single clinical silo, like an EHR, is neither practical nor efficient. Rather, healthcare providers need to focus on data aggregation and orchestration platforms (DAOP) that can collect data across the ecosystem and deliver actionable insights which will impact the day-to-day delivery of care. These platforms help move healthcare providers away from a reactive approach to managing healthcare data to one that’s more proactive, automated and insights-driven.

This is because the latest generation of DAOPs is architected as a collection of open, modular and granular services, provisioned in the cloud and delivered as a service, enabled by emerging serverless designs. As these intelligent layers of software and data integrate advances in big data, analytics, artificial intelligence and automation, and root themselves in agile/DevOps practices, they create low-cost, increasingly automated, smart and agile workflows.

The EHR remains an important system — but it will be only one source of data feeding the DAOP. It will continue as a digital application focused on documentation, internal workflows and decision support within the hospital, while the DAOP will act as an engine, enabling new and innovative business models across the patient ecosystem.


Data, interoperability and APIs

Accessing and harnessing data across a variety of systems is the core of what the DAOP does. Once the aggregated data has been transformed and ingested into a centralized data lake, it can be consumed as microservices through a structured and well-managed API gateway.

The use of RESTful (Representational State Transfer) APIs, or RESTful web services, combined with the rapidly emerging FHIR (Fast Healthcare Interoperability Resources) standard, is accelerating and standardizing healthcare interoperability. FHIR-based RESTful APIs enable healthcare data-sharing across a federated and fragmented environment without necessitating data migration or locking up data in centralized solutions, such as EHRs. This is important because locking up data in EHRs and other such systems can reduce an organization’s control of the data and lead to prolonged time to value from new ideas and innovations.
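To ground this, here is a minimal sketch of what a FHIR-style RESTful search looks like in practice. The server URL is a placeholder, and the Bundle below is a hand-written sample in the shape FHIR servers return, not output from any real system.

```python
# Sketch of a FHIR RESTful search. The base URL is hypothetical, and the
# "response" is a hand-written sample Bundle, not real patient data.
import json
from urllib.parse import urlencode

base = "https://fhir.example.org/r4"   # placeholder FHIR server
query = urlencode({"family": "Jensen", "_count": 10})
request_url = f"{base}/Patient?{query}"
print(request_url)

# A FHIR search returns a Bundle resource; here we parse a canned example.
sample_bundle = json.loads("""{
  "resourceType": "Bundle", "type": "searchset",
  "entry": [{"resource": {"resourceType": "Patient", "id": "p1",
             "name": [{"family": "Jensen", "given": ["Anna"]}]}}]
}""")
names = [e["resource"]["name"][0]["family"] for e in sample_bundle.get("entry", [])]
print(names)
```

Because the request and response shapes are standardized, any conformant client can consume data from any conformant server, which is precisely what makes sharing across a federated environment feasible.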


Setting a complete healthcare API strategy

FHIR-based RESTful APIs are just the tip of the iceberg. On their own, they do not solve problems like data fragmentation, lack of standardization or sub-optimization across a complex ecosystem. To fully leverage the benefits of FHIR and APIs, more must be done than simply adding an API on top of an existing system of record, because the raw data is typically not in a state or format that lends itself to interoperability without manipulation. Most systems of record, such as the EHR, store data in a proprietary format, which needs to be manipulated before it can be shared. Maintaining a loose coupling of applications is best done outside of the EHR, enabled by a FHIR-based data lake.

To be successful, a complete healthcare API strategy should:

  1. Address the differences between the raw data from the systems of record and the data requirements of FHIR and other interoperability standards. Data transformation and translation against industry standards helps to standardize the data into a canonical form.
  2. Hide the complexity of the underlying data environment from the data consumer. APIs and the platform behind them provide the opportunity to encapsulate the systems of record so the end user doesn’t have to search through multiple systems to find the data.
  3. Consider opportunities to improve and enrich the raw data to maximize its use. Once data is aggregated and standardized behind the API, questions can be asked of the data that could not be asked before. Opportunities exist to enrich the data with provenance, security, privacy, conformance and other types of metadata. Enrichment can also come in the form of new insights gleaned from the data through analytics and evaluation of the data against knowledge bases.
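As an illustration of the first point, here is a hedged sketch of transforming a proprietary export record into a canonical FHIR Patient resource. The source-side field names (MRN, LAST_NAME and so on) are invented, and a real mapping would cover far more fields, code systems and edge cases.

```python
# Hypothetical transformation of a proprietary EHR export into a canonical
# FHIR Patient resource. Source field names are invented for illustration.

def to_fhir_patient(raw):
    """Map one proprietary record into the FHIR Patient shape."""
    return {
        "resourceType": "Patient",
        "identifier": [{"system": "urn:example:mrn", "value": raw["MRN"]}],
        "name": [{"family": raw["LAST_NAME"], "given": [raw["FIRST_NAME"]]}],
        "birthDate": raw["DOB"],  # assumes the source already uses YYYY-MM-DD
    }

raw_record = {"MRN": "000123", "LAST_NAME": "Larsen", "FIRST_NAME": "Mette",
              "DOB": "1980-04-02"}
patient = to_fhir_patient(raw_record)
print(patient["resourceType"], patient["name"][0]["family"])
```

Once every source is normalized this way, the API layer can hide which system a given record originally came from, which is exactly the encapsulation the second point asks for.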

APIs should be the gateway into a robust data processing platform capable of maximizing the usefulness of the data behind it. In this way, the challenges EHRs and other systems of record present with aggregating important sources of data can be overcome without undue burden and cost to the healthcare organization.

Digital Twin Technology in aircraft MRO



Commercial air travel is safer than ever, according to a recent study published in Transportation Science. Data compiled by MIT professor Arnold Barnett shows that in 2017, only eight of the more than 4 billion passengers who boarded flights around the world died in air accidents. The risk of death for boarding passengers fell by more than half from 2008 to 2017 compared with the prior decade.

Aerospace companies nonetheless remain under tremendous pressure to continually improve flight safety, because any fatality is a human tragedy, to say nothing of the damage accidents can do to business, brand and shareholder value. Ensuring aircraft are safe begins with design and engineering and extends through the manufacturing, maintenance and repair processes.

But airplanes aren’t like a fleet of taxis consistently housed in a common garage and maintained by workers who are familiar with the vehicles and have ready access to repair and performance records. Planes can be located almost anywhere, yet they still need daily maintenance. That’s just the beginning of the challenges. As Steve Roemerman writes in Aerospace Manufacturing and Design:

Maintenance needs of one plane can differ drastically from another identical model. No two planes are exposed to the same conditions or usage and therefore do not need the same support on the same schedule.

An aircraft’s location, for example, directly influences the time between maintenance. Other factors – such as incomplete maintenance logs, unexpected issues, fleet usage, age, and weather – also make it difficult to create accurate maintenance schedules. In many cases, unexpected issues are only evident after starting repairs, causing major delays or strain on expensive personnel.

Bottlenecks in production and repair arise when aerospace manufacturers or airline maintenance and repair organizations (MROs) can’t coordinate the availability of parts for a specific plane with the availability of the right specialists and mechanics. The inevitable result of prolonged maintenance delays is longer manufacturing lead times or reduced in-service aircraft availability.

In addition to safety and aircraft availability, proper maintenance is important to flight schedules. Passengers are familiar with the frustration of waiting onboard as a repair team tries to fix an unexpected equipment problem that is delaying takeoff. Such delays have a negative impact on an airline’s reputation, particularly in a world where disgruntled passengers can vent their dissatisfaction on social media in real time from the tarmac.


To improve aircraft safety and to increase the efficiency of manufacturing, maintenance, and repair, aircraft manufacturers and MROs are harnessing tools such as artificial intelligence (AI), digital twins, and predictive analytics. Though the aerospace industry has been using analytics and digital twins for at least two decades, the proliferation of data from connected devices combined with AI-powered analytics and high-performance computing (HPC) has allowed engine and aircraft manufacturers, along with MROs, to achieve even greater cost and time efficiencies while continuing to raise the bar on passenger safety and satisfaction.

Digital twins are virtual models of products, processes, systems, services, and devices. These digital replicas produce data for building prescriptive models that can pinpoint problems and solve them in the virtual state. Connecting this maintenance data with the initial manufacturing design phase and the volumes of data collected during operation allows aerospace manufacturers to optimize design and production processes, saving time and money and leading to better and safer aircraft.
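The core loop of a digital twin can be sketched in a few lines: compare measured values against what the virtual model predicts, and flag large residuals for inspection. The "physics" below is a toy linear relation and the threshold is arbitrary; a real engine twin is vastly more detailed.

```python
# Minimal illustration of the digital-twin idea: the virtual model predicts a
# quantity, and live measurements are checked against it. The model and the
# tolerance here are toy placeholders, not real engine physics.

def twin_predicted_egt(thrust_pct):
    """Virtual engine model: expected exhaust-gas temperature (deg C)."""
    return 400 + 3.5 * thrust_pct  # invented linear relation

def check(thrust_pct, measured_egt, tolerance=25.0):
    """Flag the engine for inspection when reality drifts from the twin."""
    residual = measured_egt - twin_predicted_egt(thrust_pct)
    return "inspect" if abs(residual) > tolerance else "ok"

print(check(80, 685))  # measured 685 vs predicted 680 -> ok
print(check(80, 730))  # measured 730 vs predicted 680 -> inspect
```

The value comes from running this comparison continuously over every flight, so drift is caught long before it becomes a failure.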

The benefits of the digital twin extend beyond the manufacturing process. Aerospace manufacturers are continually seeking ways to anticipate and address longevity requirements, which also encompass maintenance efforts. Building resilience into an aircraft benefits everyone. When an aircraft engine manufacturer uses digital twin technology, the resulting data can be used to predict exactly when to bring the aircraft in for inspection: engineers can ingest engine usage data for every flight, including the physics of the engine blades, to see and measure how the engine is operating virtually.

While MROs have been slow to implement data-driven solutions, the projected increase in the world airline fleet, along with the need to support both aging equipment and newer aircraft and systems, is forcing these companies to adopt smart technologies to take full advantage of growing volumes of sensor data, as well as data trapped in silos.

MROs can deploy AI and predictive analytics to leverage data created by connected aircraft engines and devices, allowing them to forecast accurately when parts are likely to fail. Using prescriptive analytics, they can then analyze potential responses to a part failure and determine the best solution.
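One simple flavor of such forecasting is trend extrapolation: fit a line to a degradation signal and estimate when it will cross a failure threshold. The readings and the threshold below are fabricated for illustration; production systems use far richer models and many more signals.

```python
# Sketch of predictive maintenance via trend extrapolation. The vibration
# readings and the failure threshold are fabricated for illustration only.

flight_cycles = [100, 200, 300, 400, 500]
vibration_mils = [2.0, 2.4, 2.8, 3.2, 3.6]  # degradation signal per inspection

# Ordinary least squares by hand (no external libraries needed).
n = len(flight_cycles)
mean_x = sum(flight_cycles) / n
mean_y = sum(vibration_mils) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(flight_cycles, vibration_mils))
sxx = sum((x - mean_x) ** 2 for x in flight_cycles)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

failure_threshold = 5.0  # hypothetical limit from a maintenance manual
cycles_at_failure = (failure_threshold - intercept) / slope
print(round(cycles_at_failure))  # 850: schedule the part swap before this cycle
```

Even this toy version shows the operational payoff: the part can be staged and the labor planned for a known future cycle instead of reacting to an in-service failure.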

“Robust analytics can drive streamlined material staging, more efficient labor planning, and more effective equipment check programs,” according to a white paper on how MROs can use data to drive actionable analytics. “When data and analytics streamline engine and component service, carriers can reduce AOG (aircraft on ground) times, minimizing the revenue impact of flight delays, and therefore maximizing uptime for crucial revenue-producing assets.”

By embracing AI, digital twins, and advanced, actionable analytics, players in the aerospace industry can position themselves to take full advantage of their data, technologies, workforces, and processes. This will enable an airline’s MRO to be more resilient.

Human-Centered AI


Unlocking human potential in the AI-enabled workplace

For all the hype and excitement surrounding artificial intelligence right now, the AI movement is still in its infancy. Public perceptions of its capabilities are painted as much by science fiction as by real innovation.

This youth is a good thing, because it means we can still affect the course of AI’s impact. If we pursue AI purely with the goal of automating our lives, we risk pushing people aside. We would end up marginalizing human contributions instead of optimizing them.

Instead, we should pursue AI with the goal of augmenting our lives, as a means of benefiting humanity rather than devaluing it. Think of this path as human-centered AI, which seeks to free up people for more creative and innovative work. The technology is the same, but the goals of the systems we build are different. There’s a fine line between automation and augmentation. So, how can you ensure you’re pursuing human-centered AI? Start with how AI is built.


AI development models: The factory vs. the garage

When I was a kid, my dad’s hobby was woodworking, specifically building furniture, and he did it in our garage. What I remember is how he made the most of his space. My mom insisted that she be able to park her car in the garage, and that his tools have homes when he wasn’t in the middle of a project. When he was in the middle of something, the garage could look a little chaotic, but it was never cluttered. Everything had a purpose and a home. The garage was designed to fit the needs and constraints of his environment.

Unfortunately, when creating AI we too often think of factories rather than garages. In any factory, the goal is efficiency at scale. To achieve efficiency, design is separated from production, and then production is tuned for peak performance. This performance tuning makes many humans in factories simply extensions of tools. To judge whether a factory is set up well, the key metric is production velocity.

A factory approach doesn’t make sense for something as abstract and virtual as AI development. Compared to a physical factory, software production is cheap to change over and doesn’t require capital investment to be ripped out and replaced. And turning developers into high velocity code assembly lines wastes a huge opportunity to cultivate highly trained, creative, innovative people.

An alternative is to approach AI development the way my dad approached woodworking in his garage. A developer is not an executor of code but a creator. Tools exist to realize the creator’s vision, and the vision adapts based on the experience of building. Design and production work in tandem. The goal isn’t peak performance; it is innovation. The key metric is achievement.

You can recognize this “garage model” when you see people creatively building toward a project or goal. We invest time upfront making sure we all understand and can articulate the goal of the project: the thing we are going to build. AI is more than code and technologies; it is an approach to problem solving. It’s a good approach that we think more people should use, but it’s still just a means to an end. The goal is what matters. When my dad started projects in his garage, he didn’t incrementally explore his way to a finished piece of furniture. He had a piece of furniture in mind and an initial plan for how he was going to make it.


The Applied AI Center of Excellence

When it comes to AI, a garage isn’t only a physical place. In the Applied AI CoE we run garages with teams of people sitting all over the world. A small AI garage will have a leader and a team of three to eight people; larger garages repeat that pattern, reorganizing outward to handle greater complexity. A key thing I have had to remember as an AI garage leader is that my role is not to direct work or control the ideas. That would create a factory and stymie innovation. Instead, my role is to set the initial vision or goal of a project and then prune ideas to maintain focus. In other words, my biggest contribution is to keep the garage clean. For me and other garage leaders, this can be difficult, especially if the leader is the one who originated the idea; but even when that isn’t the case, it can be hard to let go. Success belongs to the team; failure belongs to the leader. It’s natural to want to control away failure, but then the garage model would be lost.

This distinction between the factory and the garage is critical — performance vs. innovation. In a garage model, the people developing AI are centered in the process, and this creates a foundation for a system that reinforces human-centered AI. By increasing the number of people who have a personal stake in how AI is developed, we create an AI that has a stake in the people who use it.

What can a garage do for human-centered AI?

We have used AI garages to do such things as create apps that help people fight decision fatigue, recognize when someone is paying attention or is distracted, use the weaker constraints of the virtual world to reconnect people to the physical world, and create AI Starter libraries to share what we’ve learned.

These examples show that we believe effective AI capabilities don’t push people to the side. Instead, they place humans at the center, augmenting what people can do and how well they can do it. We achieve these things because our AI development model, the “garage model,” is similarly human-centered.

AI’s Possibilities in Healthcare: A Journey into the Future

Artificial intelligence in health care

Artificial intelligence (AI), machine learning and deep learning have become entrenched in the professional world. AI capabilities are being embraced and developed globally (more than 26 countries and regions have, or are working on, a national AI strategy) for many different purposes; from ethics, policies and education to security, technology and industry, the scope is broad and multifaceted. If, like many others, you are unclear about what this new terminology means, below is a diagram depicting the hierarchy of AI, machine learning and deep learning. In healthcare, the opportunities are vast and significant. From a financial point of view alone, AI has the potential to bring material cost savings to the industry.

But where should you start, and where do the opportunities lie?


Where to start with AI

First, look at where money is being invested; in other words, which start-ups are attracting investors and what their focus is. Rock Health (the first venture fund dedicated to digital health) shows that the top four areas for venture capital investment between 2011 and 2017 were research and development, population health management, clinical workflow and health benefits administration. More than $2.7 billion was invested over six years, across 206 start-ups.

Another venture capital and digital health community, StartUp Health, which also tracks global investments, found that funding is doubling every year for companies that use machine learning to enhance health solutions. Companies focused on diagnostics or screening, clinical decision support and drug discovery tools received the largest share of machine learning funding in 2018: $940 million.

Delving into AI’s opportunities

Perhaps the biggest opportunity lies in assisted robotic surgery, with a potential cost saving of US$40 billion per year. AI-enabled robots can assist surgical procedures by analyzing data from pre-op medical records and past operations to guide a surgeon’s instrument during surgery and to highlight new surgical techniques. The potential benefit to the healthcare organization and the patient is noteworthy: a 21 percent reduction in length of hospital stay, because robotic-assisted surgery enables a minimally invasive procedure and thus reduces the patient’s need to remain in the hospital.

Surgical complications were dramatically reduced, according to one study of AI-assisted robotic procedures involving 379 orthopedic patients. Robotic surgery has also been used for eye and heart surgery; for example, heart surgeons have used a miniature robot, called HeartLander, to carry out mapping and treatment across the surface of the heart.

Another valuable use of AI is in virtual nursing assistants. One example is Molly, an AI-enabled virtual nurse designed to help patients manage their chronic illnesses or deal with post-surgery requirements. According to a Harvard Business Review article, assistants like Molly could save the healthcare industry as much as US$20 billion annually.

Diagnosis is another exciting area for AI, with promising findings on the use of AI algorithms to detect skin cancers. A Stanford University report found that deep convolutional neural networks (CNNs) performed as well as dermatologists in classifying skin lesions. Other exciting breakthroughs in AI-assisted diagnosis include a deep-learning program that listens to emergency calls, analyzing what is said, the caller’s tone of voice and background noises to determine whether the patient is in cardiac arrest. Astonishingly, a study from the University of Copenhagen found the AI assistant was right 93% of the time, compared with 73% for human dispatchers.

A fourth potential use for AI lies in digital image analysis, which could help improve future radiology tools. In one example, a team of researchers from MIT developed an algorithm to rapidly register brain scans and other 3-D images. It registers scans in a fraction of the time, with accuracy comparable to that of state-of-the-art systems.

With so much to be gained from AI, healthcare organizations will need to build their skills in AI and related capabilities. Decision-makers need to understand AI’s potential and what realizing it requires, and then ensure that their teams are properly trained. A culture shift toward understanding how AI can solve current and future problems is paramount to the future of next-generation healthcare and life sciences organizations.

AI in Transportation


Why AI?

You may have heard the terms analytics, advanced analytics, machine learning and AI. Let’s clarify:

  • Analytics is the ability to record and play back information. You can record the travels of each vehicle and report the mileage of the fleet.
  • Analytics becomes advanced analytics when you write algorithms to search for hidden patterns. You can cluster vehicles by similar mileage patterns.
  • Machine learning is when the algorithm gets better with experience. The algorithm learns, from examples, to predict the mileage of each vehicle.
  • AI is when a machine performs a task that human beings find interesting, useful and difficult to do. Your system is artificially intelligent if, for example, machine-learning algorithms predict vehicle mileage and adjust routes to accomplish the same goals but reduce the total mileage of the fleet.
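The four definitions above can be sketched in a few lines of Python, using vehicle mileage as the running example. The fleet data, cluster labels and thresholds here are invented for illustration:

```python
# Illustrative sketch only: toy fleet data, not a production system.
from statistics import mean

# -- Analytics: record and play back information.
fleet = {"truck_a": [120, 130, 125], "truck_b": [300, 310, 290]}  # daily miles
total_mileage = sum(sum(days) for days in fleet.values())

# -- Advanced analytics: search for hidden patterns (cluster by mileage).
clusters = {"short_haul": [], "long_haul": []}
for vehicle, days in fleet.items():
    clusters["long_haul" if mean(days) > 200 else "short_haul"].append(vehicle)

# -- Machine learning: improve with experience (predict tomorrow's mileage
#    from the history seen so far; the estimate updates as new days arrive).
def predict(history):
    return mean(history)

# -- AI: act on the prediction (surface the route with the worst predicted
#    mileage as the first candidate for rerouting).
predicted = {v: predict(days) for v, days in fleet.items()}
worst = max(predicted, key=predicted.get)
```

Each layer builds on the previous one: the same recorded mileage feeds the clustering, the prediction and, finally, the routing decision.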


AI is often built from machine-learning algorithms, which owe their effectiveness to training data. The more high-quality data available for training, the smarter the machine will be. The amount of data available for training intelligent machines has exploded. By 2020 every human being on the planet will create about 1.7 megabytes of new information every second. According to IDC, information in enterprise data centers will grow 14-fold between 2012 and 2020.

And we are far from putting all this data to good use. Research by the McKinsey Global Institute suggests that, as of 2016, those with location-based data typically capture only 50 to 60 percent of its value. Here’s what it looks like when you use AI to put travel and transportation data to better use.


Take care of the fleet

Get as much use out of the fleet as possible. Across long-haul trucking; air, sea and rail shipping; and localized delivery services, AI can help companies squeeze inefficiencies out of these logistics-heavy industries throughout the entire supply chain. It can learn to predict vehicle and infrastructure failures and to detect fraudulent use of fleet assets. With predictive maintenance, you anticipate failure and spend time only on assets that need service; with fraud detection, you ensure that vehicles are used only for their intended purposes.
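A minimal sketch of those two ideas, with invented sensor and trip data (a real system would learn its thresholds from telematics history rather than hard-code them):

```python
from statistics import mean, stdev

# Latest engine-temperature reading per vehicle (invented data).
engine_temp = {"truck_a": 92, "truck_b": 94, "truck_c": 131,
               "truck_d": 90, "truck_e": 93}

temps = list(engine_temp.values())
mu, sigma = mean(temps), stdev(temps)

# Predictive maintenance: service only assets running unusually hot
# (more than 1.5 standard deviations above the fleet average).
needs_service = [v for v, t in engine_temp.items() if (t - mu) / sigma > 1.5]

# Fraud detection: flag trips logged outside the authorized
# 06:00-20:00 operating window.
trips = [("truck_a", 7), ("truck_b", 23), ("truck_d", 14)]  # (vehicle, hour)
suspicious = [v for v, hour in trips if not 6 <= hour < 20]
```

Only the anomalous vehicle is pulled in for service and only the out-of-hours trip is flagged, so maintenance and audit time go where they are needed.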

AI combined with fleet telematics can decrease fleet maintenance costs by up to 20 percent. The right AI solution could also decrease fuel costs (due to better fraud detection) by 5 to 10 percent. You spend less on maintenance and fraud, and extend the life and productivity of the fleet.

Take care of disruption

There will be bad days. The key is to recover quickly. AI provides the insights you need to predict and manage service disruption. AI can monitor streams of enterprise data and learn to forecast passenger demand, operations performance and route performance. The McKinsey Global Institute found that using AI to predict service disruption has the potential to increase fleet productivity (by reducing congestion) by up to 20 percent. If you can predict problems, you can handle them early and minimize disruption.

Take care of business

Good operations planning makes for effective fleets. AI can augment operations decisions by narrowing choices to only those options that will optimize pricing, load planning, schedule planning, crew planning and route planning. AI combined with fleet telematics has the potential to decrease overtime expenses by 30 percent and decrease total fleet mileage by 10 percent. You cut fleet costs by eliminating wasteful practices from consideration.

Take care of the passenger

The passenger experience extends to cargo: the cargo itself may not have an experience, but the people shipping it do. Disruptions happen, but the best passenger experiences come from companies that respond quickly. AI can learn to automate both logistics and disruption recovery. It can provide real-time supply and demand matching, pricing and routing. According to the McKinsey Global Institute, AI’s improvement of the supply chain can increase operating margins by 5 to 35 percent. AI’s dynamic pricing can potentially increase profit margins by 17 percent. Whether it’s rebooking tickets or making sure products reach customers, AI can help you deliver a richer, more satisfying travel experience.
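As one hedged illustration of the dynamic-pricing idea, a fare could be scaled by the ratio of predicted demand to available capacity. The function, numbers and caps below are invented for the example:

```python
def dynamic_price(base_fare, seats_available, seats_demanded):
    """Scale the fare by the demand/supply ratio, capped to a sane band."""
    ratio = seats_demanded / max(seats_available, 1)
    multiplier = min(max(ratio, 0.8), 1.5)  # never below 80% or above 150%
    return round(base_fare * multiplier, 2)

# High demand on a popular route raises the fare...
peak = dynamic_price(100.0, seats_available=50, seats_demanded=90)   # 150.0
# ...while a half-empty service is discounted to fill seats.
quiet = dynamic_price(100.0, seats_available=50, seats_demanded=20)  # 80.0
```

The cap keeps pricing inside a defensible band; a production system would learn the multiplier from booking history instead of fixing it by hand.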

Applied AI is a differentiator

If we see AI as just technology, it makes sense to adopt it according to standard systems engineering practices: Build an enterprise data infrastructure; ingest, clean, and integrate all available data; implement basic analytics; build advanced analytics and AI solutions. This approach takes a while to get to ROI.

But AI can mean competitive advantage. When AI is seen as a differentiator, the attitude toward AI changes: Run if you can, walk if you must, crawl if you have to. Find an area of the business that you can make as smart as possible as quickly as possible. Identify the data stories (like predictive maintenance or real-time routing) that you think might make a real difference. Test your ideas using utilities and small experiments. Learn and adjust as you go.

It helps immensely to have a strong Analytics IQ — a sense for how to put smart machine technology to good use. We’ve built a short assessment designed to show where you are and suggest practical steps for improving. If you’re interested in applying AI in travel and transportation and are looking for a place to start, take the Analytics IQ assessment.

MLOps principles for AI development


Many companies are eager to use artificial intelligence (AI) in production, but struggle to achieve real value from the technology.

What’s the key to success? Creating new services that learn from data and can scale across the enterprise involves three domains: software development, machine learning (ML) and, of course, data. These three domains must be balanced and integrated into a seamless development process.

Most companies have focused on building machine learning muscle – hiring data scientists to create and apply algorithms capable of extracting insights from data. This makes sense, but it’s a rather limited approach. Think of it this way: They’ve built up the spectacular biceps but haven’t paid as much attention to the underlying connective tissues that support the muscle.

Why the disconnect?

Focusing mostly on ML algorithms won’t drive strong AI solutions. It might be good for getting one-off insights, but it isn’t enough to create a foundation for AI apps that consistently generate ongoing insights leading to new ideas for products and services.

AI services have to be integrated into a production environment without risking deterioration in performance. Unfortunately, performance can decline without proper data management, as ML models will degrade quickly unless they’re repeatedly trained with new data (either time-based or event-triggered).
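That degradation can be watched for directly. The sketch below, with invented names and thresholds, flags a model for retraining once its recent accuracy falls meaningfully below its accuracy at deployment:

```python
def needs_retraining(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """recent_outcomes holds 1 for each correct prediction, 0 for each miss."""
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return (baseline_accuracy - recent_accuracy) > tolerance

# A model that scored 92% at deployment but is now right 8 times in 10
# has drifted past the 5-point tolerance and should be retrained.
drifted = needs_retraining(0.92, [1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
```

The same check can run on a schedule (time-based) or on every new batch of labeled outcomes (event-triggered), matching the two retraining styles mentioned above.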

Professionalizing the AI development process

The best approach to getting real and continuous value from AI applications is to professionalize AI development. This approach conforms to machine learning operations (MLOps), a method that integrates the three domains behind AI apps in such a way that solutions can be quickly, easily and intelligently moved from prototype to production.


AI professionalization elevates the role of data scientists and strengthens their development methods. Like all scientists, these professionals bring with them a keen appreciation for experimentation. But their dependence on static data for creating machine learning algorithms, often developed on local laptops using preferred tools and libraries, impedes production AI solutions from continuously producing value. Data communication and library dependency problems will take their toll.

With MLOps in place, data scientists can continue to use the tools and methods they prefer, their output accommodated by loosely coupled DevOps and DataOps interfaces. Their ML algorithm development work becomes the centerpiece of a highly professional factory system, so to speak.

Smooth pilot-to-production workflow

Pilot AI solutions become stable production apps in short order. We use DevOps techniques such as continuous integration and continuous delivery (CI/CD) and have standard templates for automatically deploying model pipelines into production. With model pipelines, training and evaluation can happen automatically when needed (when new data arrives, for instance) without human involvement.
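A toy version of such an event-triggered pipeline might look like this. The class, threshold and "model" (a simple mean predictor) are invented for illustration:

```python
from statistics import mean

class ModelPipeline:
    """Retrains when enough new data arrives; promotes only improvements."""
    RETRAIN_THRESHOLD = 100  # new records that trigger a retraining run

    def __init__(self):
        self.production_model = None
        self.production_score = float("-inf")
        self.new_records = []

    def ingest(self, records):
        # The arrival of new data is the event that may trigger retraining.
        self.new_records.extend(records)
        if len(self.new_records) >= self.RETRAIN_THRESHOLD:
            self._retrain_and_evaluate()

    def _retrain_and_evaluate(self):
        holdout, train = self.new_records[-20:], self.new_records[:-20]
        candidate = mean(train)  # toy "model": predict the training mean
        score = -mean(abs(x - candidate) for x in holdout)
        if score > self.production_score:  # promote only if it improves
            self.production_model, self.production_score = candidate, score
        self.new_records = []
```

Keeping each promoted model alongside its evaluation score is what makes retraining runs reproducible and comparable.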

Our versioning and tracking ensure that everything can be reused, reproduced and compared if necessary. Our advanced monitoring provides end-to-end transparency into production AI use cases, including data and model pipelines, data quality, model quality and model usage.

Using our innovative MLOps approach, we were able to bring the pilot-to-production timeline for one U.S. company’s AI app down from six months to less than one week. For a UK company, the window for delivering a stable AI production app shrank from five weeks to one day.

The transparency of AI solutions, and confidence in their agility and stability, is critical. After all, the value lies in the ability to use AI to discover new business models and market opportunities, deliver industry-disrupting products and creatively respond to customer needs.

Significance of data ethics in healthcare


Over the past few years, Facebook has been in several media storms concerning the way user data is processed. The problem is not that Facebook has stored and aggregated huge amounts of data. The problem is how the company has used and, especially, shared that data in its ecosystem — sometimes without formal consent, or under long and difficult-to-understand user agreements.

Having secure access to large amounts of data is crucial if we are to leverage the opportunities of new technologies like artificial intelligence and machine learning. This is particularly true in healthcare, where the ability to leverage real-world data from multiple sources — claims, electronic health records and other patient-specific information — can revolutionize decision-making processes across the healthcare ecosystem.

Healthcare organizations are eager to tap into patient healthcare data to get actionable insights that can help track compliance, determine outcomes with greater certainty and personalize patient care. Life sciences companies can use anonymized patient data to improve drug development — real-world evidence is advancing opportunities to improve outcomes and expand on research into new therapies. But with this ability comes an even greater need to ensure that patients’ data is safeguarded.

Trust — a crucial commodity

The data economy of the future is based on one crucial premise: trust. I, as a citizen or consumer, need to trust that you will handle my data safely and protect my privacy. I need to trust that you will not gather more data than I have authorized. And finally, I have to trust that you will use the data only for the agreed-upon purposes. If you consciously or even inadvertently break our mutual understanding, you will lose my loyalty and perhaps even the most valuable commodity — access to all my personal data.

Unfortunately, the Facebook case is not unique. Breaches of the European Union’s General Data Protection Regulation (GDPR) leading to huge fines are reported almost daily. What’s more, the continual breaches and noncompliance are affecting the credibility of and trust in software vendors. It’s not surprising that citizens don’t trust companies and public institutions to handle their personal data properly.

The challenge is to embrace new technology while at the same time acting as a digitally responsible society. Evangelizing new technology and preaching only its positive elements is not the way forward. As a society we must make sure that privacy, security, and ethical and moral considerations go hand in hand with technology adoption. This social maturity curve might not follow Moore’s law about the extremely rapid growth of computing power, which means that — regardless of whether society has adapted — digital advancement will prevail. But we can’t simply have conversations that preach the value of new technology without addressing how it will impact us as a community or as citizens.

Trust is a crucial commodity, and ensuring that trust means demonstrating an ethical approach to the collection, storage and handling of data. If users don’t trust that their data will be processed in keeping with current privacy legislation, the opportunities to leverage large amounts of data to advance important goals — such as real-world data to improve healthcare outcomes or to advance research in drug development — will not be realized. Consumers will quickly turn their backs on vendors and solutions they do not trust — and for good reason!

Rigorous approach to privacy

Ethics and trust have become new prerequisites for technology providers trying to create a competitive advantage in the digital industry, and only the most ethical companies will succeed. Governments, vendors and others in the data industry must take a rigorous approach to security and privacy to ensure that trust. And healthcare and other organizations looking to work with software vendors and service providers must consider their choices carefully. Key considerations when acquiring digital solutions include:

  • How should I evaluate future vendors when it comes to security and data ethics?
  • How can I use existing data in new contexts, and what will a roadmap toward new data-based solutions look like? How will my legacy applications fit into this new strategy?
  • How will data ethics and security be reflected in my digital products, and how should access to data be managed?
  • How can I ensure I am engaging with a vendor that not only understands its products but can also handle managed security services or other cybersecurity and privacy requirements before any breach occurs?

Using technology to create an advantage is no longer about collecting and storing data; it’s about how to handle the data and understand the impact that data solutions will have on our society. In healthcare — where consumers expect their data to be used to help them in their journey to good health and wellness — that’s especially true. Healthcare organizations need to demonstrate that they have consumers’ safety, security and well-being at the heart of everything they do.
