From machine intelligence to security and storage, AWS re:Invent opens up new options.


Technology as an enabler for innovation and process improvement has become the catchword for most companies. Whether it’s artificial intelligence and machine learning, gaining insights from data through better analytics capabilities, or the ability to transfer data and knowledge to the cloud, life sciences companies are looking to achieve greater efficiencies and business effectiveness.

Indeed, that was the theme of my presentation at the AWS re:Invent conference: the ability to innovate faster to bring new therapies to market, and how this is enabled by an as-a-service digital platform. For example, one company that saw an increase in global activity needed help to accommodate the growth without compromising its operating standards. Rapid migration to an as-a-service digital platform led to a 23 percent reduction in its on-premises footprint.

This was my first re:Invent, and it was a real eye opener to attend such a large conference. The week-long AWS re:Invent conference, which took place in November 2018, brought together nearly 55,000 people in several venues in Las Vegas to share the latest developments, trends, and experiences of Amazon Web Services (AWS), its partners and clients.

The conference is intended to be educational, giving attendees insights into technology breakthroughs and developments, and how these are being put into use. Many different industries take part, including life sciences and healthcare, which is where my expertise lies.


This slickly organized, high-energy conference offered a massive amount of information shared across numerous sessions, but with a number of overarching themes. These included artificial intelligence, machine learning and analytics; serverless environments; and security, to mention just a few. The main objective of the meeting was to help companies get the right tool for the job and to highlight several new features.

During the week, AWS also rolled out new functionalities designed to help organizations manage their technology, information and businesses more seamlessly in an increasingly data-rich world. For the life sciences and healthcare industry — providers, payers and life sciences companies — a priority is being able to gain insights based on actual data so as to make decisions quickly.


That has been difficult to do in the past because data has existed in silos across the organization. But when you start to connect all the data, it’s clear that a massive amount of knowledge can be leveraged. And that’s critical in an age where precision medicine and specialist drugs have replaced blockbusters.

A growing number of life sciences companies recognize that to connect all this data — across the organization, with partners, and with clients — they need to move to the cloud. As such, cloud, and in particular major services such as AWS, are becoming more mainstream. There’s a growing need for platforms that allow companies to move to cloud services efficiently and effectively without disrupting the business, while at the same time making use of the deeper functionality a cloud service can provide.

Putting tools in the hands of users


One such functionality that AWS launched this year is Amazon Textract, which automatically extracts text and data from documents and forms. Companies can use that information in a variety of ways, such as doing smart searches or maintaining compliance in document archives. Because many documents have data in them that can’t easily be extracted without manual intervention, many companies don’t bother, given the massive amount of work involved. Amazon Textract goes beyond simple optical character recognition (OCR) to also identify the contents of fields in forms and information stored in tables.
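To make this concrete, here is a minimal sketch of how a document stored in S3 could be passed to Textract using the AWS SDK for Python (boto3). The bucket, file name and region are hypothetical placeholders, and the snippet only prints detected lines rather than parsing the full key-value output.

```python
import boto3

# Minimal sketch: analyze a form image stored in S3 with Amazon Textract.
# Bucket, key and region are hypothetical placeholders.
textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "example-docs", "Name": "intake-form.png"}},
    FeatureTypes=["FORMS", "TABLES"],  # go beyond plain OCR: forms and tables
)

# LINE blocks carry the raw text; KEY_VALUE_SET blocks carry form fields.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```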

Another key capability with advanced cloud platforms is the ability to carry out advanced analytics using machine learning. While many large pharma companies have probably been doing this for a while, the resources needed to invest in that level of analytics have been beyond the scope of most smaller companies. However, leveraging an observational platform and using AWS to provide that as a service puts these capabilities within the reach of life sciences companies of all sizes.

Having access to large amounts of data and advanced analytics enabled by machine learning allows companies to gain better insights across a wide network. For example, sponsors working with multiple contract research organizations (CROs) want a single view of performance at the various sites and by the different CROs. At the moment, that can be disjointed, but by leveraging a portal through an observational platform, it’s possible to see how sites and CROs are performing: Are they hitting the cohort requirements set? Are they on track to meet objectives? Or is there an issue that needs to be managed?

Security was another important theme at the conference and one that raised many questions. Most companies know theoretically that cloud is secure, but they’re less certain whether what they have in place gives them the right level of security for their business. That can differ depending on what you put in the cloud. In life sciences, if you are putting research and development systems into the cloud, it’s vital that your IT is secure. But with the right combination of cloud capabilities and security functionality, companies can achieve a more secure environment in the cloud than they would have on-premises.

The conference highlighted multiple new functions and services that help enterprises gain better value from moving to the cloud. These include AWS Control Tower, which allows you to automate the setup of a well-architected, multi-account AWS environment across an organization. Storage was also on the agenda, with discussions about getting the right options for the business. Historically, companies bought storage and kept it on-site. But these storage solutions are expensive to replace, and it’s questionable whether they are the best way forward for companies. During the re:Invent conference, AWS launched its new S3 Glacier Deep Archive storage class, which allows companies to store seldom-used data much more cost effectively than legacy tape systems, at just $1.01/TB per month. Consider the large amount of historical data that a legacy product will have. In all likelihood, that data won’t be needed very often, but for companies selling or acquiring a product or company, it may be important to have access to that data.
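As an illustration of how simple the archival workflow can be, here is a minimal sketch, again using boto3, of writing an object directly to the Glacier Deep Archive storage class. The bucket and file names are hypothetical, and a real archive would normally be automated through lifecycle rules rather than ad-hoc uploads.

```python
import boto3

# Minimal sketch: store a rarely accessed archive directly in the
# S3 Glacier Deep Archive storage class. Names are placeholders.
s3 = boto3.client("s3")

with open("2012-study-archive.zip", "rb") as data:
    s3.put_object(
        Bucket="example-archive-bucket",
        Key="legacy/2012-study-archive.zip",
        Body=data,
        StorageClass="DEEP_ARCHIVE",
    )

# Retrieval later requires an explicit restore request, for example:
# s3.restore_object(Bucket="example-archive-bucket",
#                   Key="legacy/2012-study-archive.zip",
#                   RestoreRequest={"Days": 7})
```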


One of the interesting things I took from the week away, apart from a Fitbit that nearly exploded with the number of steps I took in a day, was how the focus on cloud has shifted. Now the discussion has turned to: “How do I get more from the cloud, and who can help me get there faster?” rather than: “Is the cloud the right thing for my business?” Conversations held when standing in queues waiting to get into events or onto shuttle buses were largely about what each organization is doing and what the next step in its digital journey would be. This was echoed in the Anteelo booth, where many people wanted more information on how to accelerate their journey. One of the greatest concerns was the lack of internal expertise many companies have, which is why having a partner allows them to get real value and innovation into the business faster.

Why Are These Big-Name Brands Moving to Cloud Technology?


The economic turmoil caused by the pandemic has kickstarted the rapid adoption of cloud technology. Across the globe, companies in their thousands are expanding the number of services they operate in the cloud in a bid to speed up digital transformation and put themselves in a better position to withstand the volatility of today’s marketplace. In this post, we’ll look at some major brands to discover why they have decided to migrate to the cloud over the last few months.

Coca-Cola


Arguably the most recognisable brand in the world, Coca-Cola may have been making the same product for 128 years but its operations are strictly 21st century. Its manufacturing processes have long been massively automated and now, it has adopted a cloud-first policy with regard to IT.

As part of its digital transformation, the company has migrated to a hybrid cloud technology setup in a bid to reduce operational costs and increase IT resilience. This will enable it to deploy data analytics and artificial intelligence to provide it with insights that it can use to improve its services and operations.

Coca-Cola will use the migration to streamline its existing IT infrastructure and develop a company-wide platform for standardised business processes, technology and data. In order to integrate the public and private elements of its hybrid cloud, together with existing technology it plans to keep, it will deploy a single-dashboard, multi-cloud management system.

Finastra


UK-based fintech company, Finastra, is migrating to the cloud to accelerate not only its own digital transformation but those of its 8,000 global customers. The objective is to revolutionise the use of technology in the financial services sector by developing a platform that financial companies can use to speed up innovation and improve collaboration.

To achieve this, Finastra will migrate its entire customer base to the new cloud platform. From here, they will be able to create digital-first workplaces and provide their own clients with financial services and solutions, such as electronic notary and electronic signature services, which are better suited to today’s digital world.

Major bank migrations: Deutsche Bank and HSBC


Two of the world’s major banks, Deutsche Bank and HSBC, have both announced plans for migrations over the last few weeks. Deutsche Bank sees the cloud as a key element of its digital transformation and as crucial for increasing revenue and minimising costs. It aims to make use of data science, artificial intelligence and machine learning to improve risk analysis and cash flow forecasting, as well as to develop digital communications that are easier for customers to interact with and which enhance the customer experience.

The German bank is also using the move to improve security, seeing it as a way to help it comply with data protection and privacy regulations and to ensure the integrity of customer data.

HSBC Holdings, the parent company of HSBC Bank, is adopting the cloud to benefit from its storage, compute, data analytics, AI, machine learning, database and container services, as well as for the cloud’s advanced security.

Its major goal is to provide more personalised and customer-centric banking services for its customers, for which it will develop customer-facing applications. It also intends to use the move to update its Global Wealth & Personal Banking division, develop new digital products and improve compliance.

Car manufacturer migrations: Daimler and Nissan


Two leading car manufacturers, Mercedes-Benz parent company, Daimler AG, and Nissan have also announced plans to adopt cloud technology. Daimler will migrate its after-sales portal to the public cloud to help it innovate and accelerate the development of new products and services for its global customer base, as well as to provide it with scalability. Like many other companies, it also sees cloud as being a secure platform and will use it to encrypt and store data to protect it from ransomware and hacking.

Nissan, meanwhile, is using the cloud primarily to help cut costs during the post-pandemic downturn. With poor sales throughout 2020, it views digital transformation as essential to remain agile and resilient.

The move will allow the car maker to store its vast quantities of data far less expensively than in-house and provide it with cost-effective, scalable processing resources. It will use these to undertake application-based computational fluid dynamics and structural simulations, which are needed to design its cars and test them for aerodynamic and structural issues. The cloud will also enable it to carry out performance and engineering simulations, helping it improve its vehicles’ fuel efficiency, reliability and safety.

UK public sector cloud initiative


The UK government has implemented a cloud-first policy in a bid to make the UK the world’s most digitally transformed nation. As part of the project, government departments, local authorities, the NHS, police and educational institutions will be encouraged to initiate cloud-based projects and take advantage of the speed, scalability and security of the public cloud.

To help bring this about, the government has established a digital marketplace on its website where public sector organisations can find approved service providers. Listed under the G-Cloud (Government Cloud) framework, these providers, which include eukhost, offer the advanced, secure and compliant cloud services, together with the technical expertise needed to make public sector digital transformation a reality.

Conclusion

As these use cases exemplify, cloud adoption and digital transformation are key to helping organisations cope with the impact of the current economic crisis and put them in a stronger position to innovate and prosper in the future. However, it is not just major brands that are making the move; businesses across the globe are moving quickly to take advantage of what cloud has to offer.

Cloud Necessary for Digital Transformation? – Here’s Why!


Across the globe, organisations are acknowledging the need for digital transformation as new technologies, like data analytics, AI, ML and the IoT make traditional processes redundant and force unprogressive companies out of business. At the same time, shifting customer needs and behaviours demand companies undertake digital transformation in order to evolve. Without the adoption of cloud technology, however, much of this would not be possible. Here, we’ll explain why.

Organisations which have migrated to the cloud and undergone digital transformation experience both significant growth and improved efficiency. It has enabled them to develop new business models that keep them relevant and thriving in today’s dynamic and volatile marketplace. Thanks to cloud technology, they can innovate at pace, make informed, data-driven decisions and speed up the launch of products and services. What’s more, this is achieved more cost-effectively and efficiently.

1. Cost-effective IT solution


The cloud provides organisations with the opportunity to develop a much more cost-effective business model in which heavy investment in IT infrastructure is no longer required. By hosting their services and carrying out workloads on the infrastructure of their service provider, not only do they replace significant capital expenditure with less expensive service packages; they also forego many of the associated costs of operating a datacentre, including machine maintenance and server management.

2. Agility


The speed at which servers and software can be deployed in the cloud and the rapidity with which applications can be developed, tested and launched helps drive business growth. Additionally, this agility enables organisations to concentrate on more business-focused issues, such as security and compliance, product development or monitoring and analysis, instead of using up precious time and effort provisioning and maintaining IT resources. Together, these cloud attributes give companies a competitive advantage in the marketplace.

3. Scalability


Another key advantage that cloud brings to digital transformation is instant scalability. It provides businesses with a cost-effective, pay-per-use way of scaling up, on demand, to ensure they always have the resources they need to cope with spikes or to carry out large workloads. This means the expensive practice of purchasing additional servers that cater for busy periods but sit redundant for much of the time is no longer necessary.

4. High availability


Today’s customers demand uninterrupted, 24/7 access to products and services, and putting this in place is a key aim of many companies’ digital transformation. Similarly, some businesses rely on critical apps for processes, such as manufacturing, that also need to be operational at all times. What the cloud brings here is high availability, often backed by 100% uptime guarantees. As cloud servers are virtual, instances can be moved between hardware, and this means that downtime due to server failure becomes a thing of the past for cloud users. Indeed, even if an entire datacentre goes offline because of a natural disaster, service can be maintained by moving the instances to a datacentre in another geographical location.

5. Security and compliance


Security and compliance are a high priority for all companies and are often a major challenge to those with in-house systems that lack both the budget and expertise to put effective measures into place.

The cloud can play a significant role in improving both security and compliance. Service providers employ highly skilled security experts and deploy advanced tools to protect their customers’ systems and data and to comply with stringent regulations. This ensures cloud users operate in highly secure environments, protected by next-gen firewalls with intrusion prevention systems and in-flow virus protection that detect and isolate threats before they reach a client’s server.

6. Built-in technology upgrades


Keeping up with the Joneses as far as technology is concerned is always a challenge for organisations, not simply for the cost of regularly purchasing newer hardware, but also the effort of migrating applications and data during the process.

By adopting cloud technology, companies no longer have this issue. Service providers regularly update their hardware in order to remain competitive themselves, and this ensures that their customers benefit from always having the latest technology, such as Xeon processors and SSD storage, at their disposal. What’s more, virtualisation means any migration to new hardware takes place unnoticed.

7. Collaboration and remote working


Digital transformation involves replacing outdated working practices and legacy systems with those that support innovation and agility. The cloud is the ideal environment for this, providing both the ability for remote working and improved collaboration. Many cloud-based platforms have been developed with collaboration in mind, offering video conferencing, file sharing, syncing and project management tools for teams to use in and out of the office. Files are instantly updated and are available anywhere with a connection; privileges and authentication can be determined for every employee, and projects, people and progress can be monitored and tracked.

Conclusion

Digital transformation is fast becoming a necessity for organisations, providing the means to help them be more agile, innovative, cost-effective and competitive while being better able to meet the needs of their customers. Cloud technology is instrumental in bringing this about as it offers the ideal environment in which to deploy the technologies and undertake the workloads on which digital transformation depends.

The platform to focus on the most valuable asset: Data-Centric Architecture.


The value proposition of global systems integrators (GSIs) has changed remarkably in the last 10 years. By 2010, the so-called “your mess for less” (YMFL) business model was in its waning days. GSIs would essentially purchase and run a company’s IT shop and deliver value through right-shoring (moving labor to low-cost locations), leveraging supply chain economies of scale and, to a lesser degree, automation.

This model had been delivering value to the industry since the ‘90s but was nearing its asymptotic conclusion. To continue achieving the cost savings and value improvements that customers were demanding, GSIs had to add to their repertoire. They had to define, understand, engage and deliver in the digital transformation business. Today, I am focusing on the value GSIs offer by concentrating on their client’s data, rather than being fixated on the boxes or cloud where data resides.

In the YMFL business, the GSIs could zero in on the cheapest, performance compliant disk or cloud to house sets of applications, logs, analytics and backup data. The data sets were created and used by and for their corresponding purpose. Often, they were tenuously managed by sophisticated middleware and applications for other purposes, like decision support or analytics.

Getting a centralized view of the customer was difficult, if not impossible. First, this was due to the stovepiping of the relevant data in an application-centric architecture. In tandem, data islands were created as analytics repositories.


Now enters the “Data-Centric Architecture.” Transformation to a data-centric view is a new opportunity for GSIs to remain relevant and add value to customers’ infrastructures. It is a layer deeper than moving to cloud or migrating to the latest, faster, smaller boxes.

A great way to help jump start this transformation is by rolling out Data as a Service offerings. Rather than taking the more traditional Storage as a Service or Backup as a Service approach, Data as a Service anticipates and provides the underlying architecture to support a data-centric strategy.

It is first and foremost a repository for collected and aggregated data that is independent of application sources. From this repository, you can draw correlations, statistics, visualizations and advanced analytical insights that are impossible when dealing with islands of data managed independently.

It is more than the repository of the algorithmically derived data lake. A Data as a Service approach provides cost effective accessibility, performance, security and resilience – aimed at addressing the largest source of both complexity and cost in the landscape.


Data as a Service helps achieve these goals by minimizing, simplifying and reducing the data and its movement within and outside of the enterprise and cloud environments. This is achieved around four primary use cases, which range from enterprise storage to backup and long-term retention:

 

Each of the cases illustrates the underlying capabilities necessary to cost effectively support the move to a data-centric architecture. Combined with a “never migrate or refresh again” evergreen approach, GSIs can focus on maximizing value in the stack of offerings. This approach is revolutionary. In the past, there was merely a focus on the refresh of aging boxes, or the specifications of a particular cloud service, or the infrastructure supporting a particular application. Today, GSIs can focus on the treasured asset in their customer’s IT — their data.

How to Turn Challenges into Opportunities with Integrated Sales and Operations Planning (IS&OP)


Sales and operations planning (S&OP) is a critical supply chain planning process through which various teams agree on a fundamental governing plan for the next weeks and months, which then forms the basis of all the detailed planning and execution.

It is a cross-functional responsibility in which various departments, such as sales, marketing, logistics, manufacturing, finance, and operations, contribute to the critical decision-making process. Often, there are conflicts between the preferences and priorities of different business units.

So, how do you meet the differing expectations of supply and demand?

Through a clearly defined S&OP process, you can improve overall service levels while adjusting your company’s goals and plans. But what’s stopping you from sketching out your S&OP process? Is there no comprehensive and systematic involvement between your departments?

Integrated Sales and Operations Planning: How to Convert Challenges into Opportunities with IS&OP?


When marketing a new product, you can make assumptions about revenue or profit. One of its prerequisites is to provide the right products to the right customers at the right time, which can be achieved through correct predictions.

But what if it is incorrect?

Costs will soar, sales and profits will decrease. It’s that simple.

Over-forecasting will lead to excess inventory and lower profits. Under-forecasting will lead to lost sales and customer dissatisfaction.

How do you holistically integrate all the supply chain activities (supply planning, demand planning & forecasting, operations, logistics) while addressing the complex ecosystem of suppliers, markets, and investors?

Road to Success – Integrated Sales and Operations Planning (IS&OP)

“IS&OP is a platform to drive consensus between demand & supply and create & monitor the execution plans.”


Uncertainty in demand, supply, or both leads to insufficient service levels, increased inventory & logistics costs, and dissatisfaction among stakeholders and customers. But, measurable management of this uncertainty through correct planning decisions can bring significant benefits.

Post-COVID, the market is volatile, and companies worldwide suffer disruptions in maintaining the demand-supply equilibrium. The macro-environment challenges and evolving trends (raw material scarcity, changes in customer behavior, etc.) have increased the need for supply chain agility. The supply chain analytics market is projected to grow by 17% over the next five years. Therefore, as a demand planner, it is time to set up a broader framework and adopt advanced solutions to address the two key challenges in today’s supply chain: reducing costs and improving service levels.

If you place your bets correctly by implementing a reliable S&OP solution, you can:

  1. Speed up the operations & logistics process
  2. Address the issues related to downstream inventory & production planning, sales loss, stock-outs, inaccurate resourcing, low service levels, higher logistics cost, and more.

The key to a productive sales and operations planning process is understanding the impact of every decision in real time.

With advanced supply chain analytics solutions, you can reach a consensus between various demand plans and demand & supply factors. Integrated Sales and Operations Planning (IS&OP) does precisely that. Check out this IS&OP video where Shashikiran discusses how IS&OP balances supply, demand, finance, and procurement while ensuring that the plan is always consistent.

After years of observing the S&OP process at close quarters, we have created an Integrated Sales and Operations Planning solution to bridge the gaps that many supply-chain leaders face.

This solution works in three different modules.

1.) Demand Consensus


“Demand consensus is a multi-stage process to arrive at one planning number that every stakeholder agrees on.”

Often demand planners spend half of their time (or more) accessing data, communicating with other teams, and reconciling each other’s planning bases. With the value created through S&OP, you can make up for that lost time: integrate future baseline demand with sales & marketing activities and achieve the desired top-line & bottom-line objectives.

Forecasts that rely on hunches or legacy systems can have a profound negative impact on demand realization and supply chain costs. Therefore, it makes sense to start the demand planning journey by establishing base forecasting capabilities to build confidence in the quality of data-based forecasts and of the demand & supply plans based on those forecasts. There are two ways to do this. One, you can hire a statistician to build a good baseline forecast. The other is to replace the individual with a solution that comes with an embedded demand consensus module.

Let’s see the difference between the two.

1.) Manual consensus (based on baseline forecasts created by a statistician)

  1. The statistician will prepare a mathematical model that approximately mirrors the trend, testing various baselines and drilling down to the one that most closely represents reality
  2. Next, you must tune the model to incorporate seasonality – time-of-month effects, day-of-week effects, etc.
  3. Then, use the available historical data to test the model and improve it until it provides a reliable result
  4. Add data and use the model to predict future trends
  5. Finally, share it with the concerned stakeholders (sales, marketing, logistics, finance, and operations). A minimal sketch of such a baseline follows this list.
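As promised above, here is a minimal sketch of what such a statistician-built baseline might look like in Python, using Holt-Winters exponential smoothing with weekly seasonality. The file name, column names and 28-day holdout are hypothetical choices for illustration, not a prescribed method.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical daily demand history with columns "date" and "units_ordered".
history = pd.read_csv("daily_orders.csv", parse_dates=["date"], index_col="date")
series = history["units_ordered"].asfreq("D").fillna(0)

# Hold out the last 28 days to test the model before trusting it.
train, test = series[:-28], series[-28:]

# Additive trend and weekly (7-day) seasonality as a first baseline.
model = ExponentialSmoothing(
    train, trend="add", seasonal="add", seasonal_periods=7
).fit()

forecast = model.forecast(28)
mape = ((forecast - test).abs() / test.clip(lower=1)).mean()
print(f"Holdout MAPE: {mape:.1%}")
```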

However, there is one caveat in this model.

When all the functional units gather to discuss forecasts, share plans, report and consider changes, and agree on the final demand plan, a lack of collaboration can be damaging. Besides, organizations with multiple SKUs, distribution centres, etc. may require dozens of such baselines.

Only a smart collaboration process can address these concerns in a scalable way, as explained in the second method below.

2.) Automated demand consensus module (built in the IS&OP solution)

Here is how the demand consensus module facilitates the business units to arrive at a consensus and collaboration:

  1. Using the module, you can combine data from numerous supply chain activities and arrive at a forecast that every stakeholder can accept. The module gives you access to various top-down (demographics & targets) and bottom-up (operating expense minus depreciation, capital expenditure) forecasts, taking into account the concerns of the merchandising, sales & marketing, and operations teams. You can then analyze the deviations between the various forecasts and smooth and integrate them; a minimal consensus sketch follows this list. And in case you need a baseline for new products, you can use comparable data from other products.
  2. You can introduce pricing interventions and promotion strategies to arrive at a demand plan. The key is to make all stakeholders involved in the S&OP process reach a consensus on demand.
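Here is the sketch referred to above: a toy illustration of how an automated consensus step might blend team forecasts into one agreed number and flag the SKUs where teams disagree most. The team names, weights, threshold and input file are all hypothetical.

```python
import pandas as pd

# Hypothetical input: one forecast row per sku, month and team.
forecasts = pd.read_csv("team_forecasts.csv")  # columns: sku, month, team, units

# Hypothetical consensus weights agreed in the S&OP meeting (sum to 1.0).
weights = {"sales": 0.4, "marketing": 0.2, "operations": 0.4}

forecasts["weighted_units"] = forecasts["units"] * forecasts["team"].map(weights)
consensus = (
    forecasts.groupby(["sku", "month"])["weighted_units"]
    .sum()
    .rename("consensus_units")
)

# Flag SKUs where teams disagree strongly so the meeting can focus on them.
spread = forecasts.groupby(["sku", "month"])["units"].agg(lambda x: x.max() - x.min())
needs_review = spread[spread > 0.25 * consensus]
print(needs_review.head())
```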

2.) Demand-Supply Consensus


“One of the supply chain’s main pain points – misalignment between the demand-side dynamics and supply-side dynamics.”

This module can divide the demand plan proposed in the first module into various supply-side requirements. The requirements can come from multiple resources, e.g., personnel and operators, materials & inventory, warehouses & other operating infrastructure, or transportation assets such as trucks. Study what kind of supply is needed to meet the demand. Then analyze gaps and arrive at an alignment.

The alignment takes one of the following three steps.

  1. Smoothing the demand to meet the supply
  2. Augmenting/pruning the supply (if different from the demand)
  3. Or, in a few cases, pruning the demand to meet the constraints

The idea is to drive consensus. Once it happens, you can freeze the plan and proceed towards its execution.
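A toy sketch of that gap check, with made-up numbers, might look like this: compare the frozen demand plan against weekly capacity and suggest which of the three alignment steps applies. The figures and the 200-unit cut-off are hypothetical.

```python
import pandas as pd

# Hypothetical demand plan and supply capacity per week.
demand = pd.DataFrame({"week": [1, 2, 3], "units": [900, 1200, 1500]})
supply = pd.DataFrame({"week": [1, 2, 3], "capacity": [1000, 1000, 1000]})

plan = demand.merge(supply, on="week")
plan["gap"] = plan["units"] - plan["capacity"]


def alignment_action(gap: int) -> str:
    """Map the demand-supply gap to one of the three alignment steps."""
    if gap <= 0:
        return "smooth demand to meet supply / keep plan"
    if gap <= 200:  # hypothetical cut-off for feasible supply augmentation
        return "augment supply (overtime, extra shift)"
    return "prune demand to meet constraints"


plan["action"] = plan["gap"].apply(alignment_action)
print(plan)
```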

3.) Execution Monitoring


“Reliance on the supply side leads to prosperity on the demand side.”

You can make precise predictions based on the first module (demand consensus) and create a scalable infrastructure using the second module (demand-supply consensus). With the execution monitoring module, you can add and execute functions using automated processes.

Creating a single source of truth

If you or your stakeholders are currently not able to take advantage of supply and demand decisions, or cannot rely on the baseline, run this module to incorporate advanced analytics that keep up with supply and demand scenarios. The module will help build trust and improve collaboration between stakeholders. This way, you will be able to align your organization in one direction.

If executed correctly, demand will reflect sales potential and lead to optimal inventory levels and logistics support.

There are two equally critical functions in this module.

  • You can monitor and compare the deviation between real-time demand and planned demand. If the difference is significant, you can shape the demand back to the plan or take pre-emptive measures during its execution to control the costs (a minimal sketch of this check follows the list).
  • You can also determine whether the execution has deviated from the plan because of the nonfulfillment of standard operating procedures or some unforeseen factors.
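As mentioned in the first point above, here is a minimal sketch of that deviation check: compare realised demand with the frozen plan and raise an early alert when the gap crosses a threshold. The 15% threshold, file names and column names are hypothetical.

```python
import pandas as pd

ALERT_THRESHOLD = 0.15  # hypothetical: a 15% deviation triggers a review

actuals = pd.read_csv("daily_actuals.csv")  # columns: sku, date, actual_units
plan = pd.read_csv("frozen_plan.csv")       # columns: sku, date, planned_units

merged = actuals.merge(plan, on=["sku", "date"])
merged["deviation"] = (
    (merged["actual_units"] - merged["planned_units"]).abs()
    / merged["planned_units"].clip(lower=1)
)

# Early alerts: rows where execution has drifted too far from the plan.
alerts = merged[merged["deviation"] > ALERT_THRESHOLD]
for row in alerts.itertuples():
    print(f"ALERT: {row.sku} on {row.date} deviates {row.deviation:.0%} from plan")
```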

The idea here is to generate early alerts to bring execution back to the plan. Through the three modules elaborated above, you can address your supply chain and operation domain’s long-standing pain points.

The IS&OP solution that Anteelo offers can help you boost your customers’ experience, deliver the highest quality products, build advanced forecasting capabilities, and mitigate the concerns of all your business units by fine-tuning each link in the supply chain.

Order Cancellation Prediction: How a Machine Learning Solution Saved Thousands of Driver Hours


‘Efficiency’ is rooted in processes, solutions, and people. It was one of the main driving forces behind significant changes in the way companies worked in the first decade of the 21st century. The following decade further accelerated this dynamic. Now, post-COVID, it is vital for us to become efficient, productive, and environmentally friendly.

One of our clients manufactures and sells precast concrete solutions that improve their customers’ building efficiency, reduce costs, increase productivity on construction sites, and reduce carbon footprints. They provide higher quality, consistency, and reliability while maintaining excellent mechanical properties to meet customers’ most stringent requirements. The customers rely on their quality service and punctual delivery to receive products. This is possible because their supply chain model is simple. They prepare the order by date, call the driver the day before, and load the concrete the next morning. The driver delivers the exact specific product to the specified address.

However, a large percentage of customers cancel orders. One of the main reasons for the cancellation is the weather.

The client turned to Anteelo to provide an analytical solution for flagging such orders so that their employees do not have to prepare for such deliveries.

I’ll summarize the journey so far and how it led to the creation of a promising solution.

How it all started

One of the client’s business units suffered huge operational losses due to the cancellation of orders. Although the causes were (and are) beyond their control, they always had (and have) to compensate truck drivers and concrete workers. To improve the efficiency of the demand and supply planning process, they had to confront order cancellation risks. Though they might have increased their resource capacity by adding more people or working in shifts, this option may not have panned out well in the long run. Apart from this, the risks may not have been mitigated as anticipated, which might have further reduced the RoI.

Although they put forward various innovative ideas, the results did not reflect the expectations, resulting in the loss of thousands of driver hours. Before deciding to use an analytical solution, they discovered that their existing system had two main shortcomings.

  • Extensive reliance on conventional methods for dispatch
  • Absence of a data-driven approach

Thus, they wanted to leverage a powerful ML-enabled solution to empower ‘order dispatching’ to effectively get ahead of order cancellation and minimize high labor costs.

Roadmap that led to the solution’s development


The analytics team from Anteelo pitched the idea of developing a pilot solution and executing it in the decided test market and then creating a full-blown working solution.

We used retrospective data for a sterile proof of concept (POC), with the idea of solving as many challenges as possible offline. Later, once the field team gave positive feedback, we planned to deploy a cloud-based working model with a real-time front end and then measure its benefits in terms of hours saved over the following 12 to 24 months.

Proof of Concept (POC)


To reap the maximum benefits and minimize risks on the analytical initiative, we opted to start with a proof of concept (POC) and execute a lightweight version of the ML tool. We developed a predictive model to flag orders at risk of cancellation and simulated operational savings based on the weather and previous years’ data; a minimal sketch of such a model follows the findings below. We found that:

  1. 50% of orders were canceled each year
  2. A staggering percentage of orders were canceled after a specific time the day before the scheduled delivery – ‘Last-minute cancellations.’
  3. Because of these last-minute cancellations, hundreds of thousands of driving hours were lost
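The sketch promised above shows the general shape of such a flagging model, not the client's actual implementation: a gradient-boosted classifier trained on weather and order attributes, with a hypothetical risk cut-off used to exclude orders from next-day dispatch preparation. The file and feature names are placeholders.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical historical order data with a binary "was_cancelled" label.
orders = pd.read_csv("historical_orders.csv")
features = [
    "order_volume", "day_of_week", "forecast_temp_c", "forecast_rain_mm",
    "customer_cancel_rate", "prior_cancellations_same_day",
]

X_train, X_test, y_train, y_test = train_test_split(
    orders[features], orders["was_cancelled"], test_size=0.2, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("Holdout AUC:", roc_auc_score(y_test, risk))

# Orders above a chosen risk cut-off would be excluded from next-day prep.
flagged = X_test[risk > 0.7]
```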

Creating the Minimum Viable Product (MVP)


Before we could go any further or zero in on the solution deployment, we had to understand the levers behind the cancellations. Once the POC was ready, we decided to evaluate the results against the baselines and expectations and compare them with the original goals. Next, we decided to proceed with the pilot test and modify the solution based on its results. Therefore, we selected a location, deployed some field representatives to provide real-time feedback, and relied on our research for this purpose. The results (savings potential) were as follows:

  1. Fewer large orders canceled
  2. More orders canceled on Monday
  3. When the temperature dropped to certain degrees, the number of cancellations increased
  4. More than half of the last-minute cancellations were from the same customers
  5. If a certain proportion of the orders were canceled at least one day in advance, the remaining orders were canceled at the last minute
  6. On days with rain, the number of cancellations increased

Overall, order quantity, project, and customer behavior were the essential variables.

The MVP stage provided a staggering number, representing the associated monetary loss (in millions) due to the last-minute cancellations. The reasons behind such a grim figure were the lack of a data-oriented approach and prioritization method.

The deployed MVP helped reduce the idle hours. It helped flag cancellations that we usually would have missed with our heuristic model. It also quantified the market-wise savings potential, on the basis of which we ultimately decided to roll the solution out.

Significant findings (and refinements) in the ML model based on pilot test

Labor planning is a holistic process

An effective labor plan must deliberate factors other than the quantity (orders), such as the distribution of orders throughout the day, the value of the relationship with customers, and so on.

Therefore, the model output was modified to predict the quantity based on the hourly forecast.

Order quantity may vary with resource plan

‘Order quantity’ shows a considerable variation between the forward order book and the tickets, making it impossible to use it as a predictor variable.

Resources are reasonably fixed during the day

This contradicts one of the POC’s assumptions that resources will be concentrated in the market on a given day. This has led to corresponding changes in forecast reports, accuracy calculations, etc.

Building and Deploying a Full-blown ML-model at Scale


At this stage, we had the cancellation metrics, the levers that worked, and the exact variables to use in the solution. Now, the team had enough data to build an end-to-end solution comprising intuitive UI screens & functions, automated data flows, and model runs. And finally, we could measure the impact in monetary terms.

Benefits (Impact) Measurement

To turn the wheel and get it on track, we have to extract the model’s maximum value and evaluate it over time. We decided on two evaluation time metrics for measuring the impact.

  1. Year-on-Year
  2. Month-on-Month

The following is a summary table of improvements to key operational KPIs. The estimated savings are calculated from the change in TPD and the annual business volume.

| Metric | TPD | Location-specific | US |
| --- | --- | --- | --- |
| Metric value (YoY) | 30% (up) | >$350k | >$3M |
| Metric value (MoM) | 12% (up) | >$150k | >$3M |

*data is speculative and based on the pilot run.

Predictive Model’s Key Features

  1. Visual Insights
  2. Weekly Model Refresh
  3. Modular Architecture for seamless maintenance

Results

  1. Reduced Deadheading
  2. Streamlined dispatch planning
  3. Higher Labor Utilization
  4. Greater Revenue Capture

Why should you consider Anteelo’s ML/AI solutions?

We have successfully tested the pilot solution, and the model has shown annual savings of more than $3 million. Now, we will build and deploy the full version of the model.

Anteelo is one of the top analytics and data engineering companies in the US and APAC regions. If you need to make multi-faceted changes in your business operations, let us understand your top-of-mind concerns and help you with our unique analytics services. Reach out to us at https://anteelo.com/contact/.

Don’t let your data backup services go bankrupt like a wheel of fortune.


Data backup is one of those daily tasks that resembles Wheel of Fortune. If a backup fails occasionally or you forget to swap media once in a while, the odds are good that the spinner on your wheel of fortune won’t cost you anything. Until the day it settles on “bankrupt,” and all those occasional backup glitches will come back to haunt you.

Piecing together transactional data is a major hassle. But the value of lost data goes way beyond that now. Analytics are making fast inroads into every part of the value chain. As they do, the value of a company’s non-transactional data grows. All that info you’ve been using to serve customers more effectively, operate more efficiently and develop innovative new products—gone. Losing that kind of data is like burning stacks of cash. When it’s gone, you can’t get it back. That can seriously complicate your day.

Trying to decide how much backup capacity you need isn’t completely straightforward either. It’s a wasted effort if you keep too little and miss something important, so many companies tend to err on the side of caution. And they err more than they realize. When we ask clients about their backup capacity, many estimate they’re using 80% or more of their capacity. When we survey their actual consumption, utilization rates average around 54% of their storage footprint. The other half sits idle.

There’s a better way to do this. Instead of guessing at what you need, spending more than you should, and having to maintain a vigil to ensure it’s working, take a look at the compelling Backup as a Service (BUaaS) offerings that are becoming more prevalent. When you harness the power of virtual infrastructure, you subtract many of the issues that make backup a hassle and you get a more reliable service that you don’t need to think about. Here are four benefits of BUaaS that deserve consideration:

* BUaaS always offers the right capacity. Companies routinely overestimate their backup capacity needs because budget approval happens only periodically. Procurement can take six months or more so, when you budget for backup capacity, you make sure you have more than enough. With BUaaS, you don’t need to sweat that. Capacity can be added or subtracted as needed, so you never have too much or too little.


* It’s always up to date. The problem with dedicated backup infrastructure isn’t just the money you have parked in a rack. Buying backup means you’ve bought into a level of performance and features for the duration of time you own the hardware. If your needs change, you’re effectively held hostage to a decision you made earlier. Because BUaaS is highly virtualized, it experiences ongoing improvement as both the infrastructure is refreshed and as new versions of the backup service code are released.


* It’s more flexible. Backup as a Service allows you to dial up compression and deduplication if you need to expand storage, or adjust for more speed if you need higher performance. You don’t need to change hardware, just settings. And, if your needs change, adjustments are just a mouse click away.


* You get more expertise as part of the bundle. While the advantages might not be readily apparent, the additional staffing and add-on services included in BUaaS offerings make the service more reliable and less expensive. The growing intelligence of BUaaS solutions helps separate minor issues from those that can truly affect the quality of your backups. Automation enables the provider to offer scalable, predictable services with fewer people, at a lower cost than maintaining the same capacity in a fixed physical environment.


Rethinking the banking value chain is a call to action.


Financial services is shifting to platforms for business functions and processes, and that’s a good thing. Moving from applications to Software as a Service (SaaS) and then to Platform as a Service (PaaS) can create new value chains. It can also dramatically reduce the number of error-prone manual processes and foster industry collaboration for superior efficiencies.

Leverage open APIs and core banking systems


But financial services organizations can move even further — and to stay competitive, they’ll need to. Open APIs can help them combine bank data with third-party data and services to create innovative capabilities, essentially “hiring” third parties to provide these services. Banks can also provide best-of-breed capabilities as services to others.

As part of this shift, core financial systems and capabilities can become “consumable” via API-driven interfaces, creating specific outcomes. These core systems, such as payments and mobile wallets, essentially become services that both a bank and its third-party providers can consume.

Conversely, services from third-party providers can be integrated into banks’ own platforms. This may sound daring, but some tech giants — Facebook and Amazon among them — already do this, building new capabilities with APIs that can integrate and interact with capabilities provided by third-party providers. Banks can do it, too.
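To give a flavour of what consuming a core capability "as a service" can look like, here is a minimal sketch of a payment-initiation call. The endpoint, payload fields and token are hypothetical placeholders; real open banking APIs vary by provider and regulatory regime.

```python
import requests

# Hypothetical open banking endpoint and OAuth token.
API_BASE = "https://api.example-bank.com/v1"
TOKEN = "replace-with-oauth-access-token"

payment = {
    "amount": {"currency": "EUR", "value": "125.00"},
    "debtor_account": "DE89370400440532013000",
    "creditor_account": "FR7630006000011234567890189",
    "reference": "Invoice 4711",
}

response = requests.post(
    f"{API_BASE}/payments",
    json=payment,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. a payment id and status returned by the bank
```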

Partner with providers


Providers can also become partners. Some banks have invested in FinTechs, adopting an attitude of “If you can’t beat them, join them.” This should facilitate the development of important new services, including “know your customer” (KYC) and new accounts. A single bank can essentially stitch together a passel of services, then present them to customers under a single bank brand.


This reassessment of the value chain can free banking and capital markets organizations from the need to provide all services end-to-end. Instead, they can add open APIs that allow trusted third parties to provide various microservices.

The right platform can help banks grow through mergers and acquisitions, making it far easier to integrate disparate systems. This same feature can make it easier for banks to integrate the systems of partners too.

At the end of the digital transformation journey, financial services providers will enjoy a new position in their reconstituted ecosystem. They’ll fully understand their position in that value chain, their competitive advantage and areas of specialization, and their need to partner with third parties.

5 Key Features to Look For in Developer Hosting Solutions


When it comes to finding a hosting solution, developers have specific needs. Whether they are developing websites or applications, they’ll need a hosting solution that provides them with all the resources, features and control required to undertake their development projects and the storage space to keep all the projects they have worked on. Here, we’ll look at five important features that developers should look for in their hosting.

1. Putting resources in place


Although each project will differ in its size, scope and complexity, developers will need hosting with the capacity to let them carry out their work unimpeded. This means finding a solution that provides all the server resources you need, including CPU, RAM, storage and bandwidth. You’ll also be looking for exceptional performance from your hosting, as it can speed up development time for you and your client, as well as improving how well the application or website performs when you show it to your client during development. Reliability is also key, so look for hosting that provides a minimum of 99.9% uptime.

Ideally, therefore, a developer needs to shy away from shared hosting and adopt a more powerful solution, such as a VPS, a dedicated server or, for those developing cloud-based applications, cloud hosting. To ensure optimal performance, look for hosting that includes the latest Intel Xeon processors and SSD storage, which can significantly boost speed.

You should also bear in mind that you may need to scale up resources beyond what your current package provides. Should there be a need to do so, you’ll want this to be as quick, simple and undisruptive as possible. While the cloud offers unrivalled and instant scalability at the click of a button, if you choose VPS or dedicated server hosting, you need to make sure that your provider allows you to upgrade easily.

2. Putting you in control


As a developer, the hosting solution you choose must give you the flexibility to work on any type of project. This starts with having control over the choice of operating system. Not only do you need a choice between Linux and Windows; you’ll also want to choose from the range of these systems to find the one which best suits the application you are developing.

Furthermore, you’ll also need hosting that supports and provides easy integration for the programming languages or frameworks that you need to work with. Solutions that provide 1-click installations for the key software and frameworks you intend to use don’t just save you the time and effort of a manual install, they increase the pace of the development too.

3. Getting to the root of things


Having root access is also vital for developers as it gives them complete control over their server. This gives you the freedom to configure the server in the most appropriate way for your projects and enables you to install and configure applications, run multiple websites and carry out various other important tasks.

4. Security built-in


Cybercrime continues to be a major headache for the IT community and developers need to ensure that the applications and websites they are developing are secure. The last thing you need is to have your client’s intellectual property and data stolen from your development server or to hand them over an application that has been stealthily infected with malware. Neither do you want the project having to go back to square one because of infection, corruption or ransomware or stalling because of a DDoS attack.

For this reason, choose a host that provides robust protection, including custom firewall rules, intrusion prevention, anti-DDoS, anti-malware, VPN and application security, to ensure your server is always protected.

For peace of mind and quick, easy restoration, a backup solution is essential for any developer.

5. Expert technical support


24/7 expert technical support is critical to developers whose projects may have them working with a range of different setups. Having an expert on tap to help you with any issues, regardless of the time of day, can provide indispensable assistance when you need it most. Make sure any hosting solution you choose has this included.

Conclusion

As a developer, you’ll want to provide your clients with a first-class service and to do this, you’ll need first-class service from your hosting provider. You’ll require a high-performance solution that provides you with all the resources to carry out your projects; you’ll want the freedom to deploy the operating system of your choice and have full control over your server; and you’ll need the ability to use the programming languages, frameworks and software that the job demands, as well as an environment that is secure, backed up and supported by 24/7 expert technical assistance.

MLOps: Is This the Only Way to Eat an Elephant?


Managing ML production requires a combination of data scientists (algorithm procrastinators) and operations (data architects, product owners? Yes, why not?).

Operationalizing ML solutions in on-prem or cloud environments is a challenge for the entire industry. Enterprise customers usually have a long and irregular software update cycle, typically once or twice a year. Therefore, it is impractical to couple the deployment of the ML model with these irregular update cycles. Besides, data scientists have to deal with:

  • Data governance
  • Model serving & deployment
  • System performance drifts
  • Picking model features
  • ML model training pipeline
  • Setting the performance threshold
  • Explainability

And data architects have enough databases and systems to develop, install, configure, analyze, test, maintain… the verbs would keep accumulating, depending on the ratio of the company’s size to the number of data architects.

This is where MLOps comes in to rescue the team, the solution, and the enterprise!

What is MLOps?


MLOps is a new coinage, and the ML community keeps adding to and perfecting its definition (as the ML life cycle continues to evolve, so does our understanding of it). In layman’s terms, it is a set of practices and disciplines to standardize and streamline ML models in production.

It all started when a data scientist shared his plight with a DevOps engineer. The engineer, too, was unhappy with how data and models were being folded into the development life cycle. In cahoots, they decided to amalgamate the practices and philosophies of DevOps and ML. Lo and behold! MLOps came into existence. This may not be entirely true, but you have to give credit to the growing community of ML & DevOps personnel.
Five years ago, in 2015, a research paper highlighted the shortcomings of traditional ML systems (the third reference on this Wikipedia page). Even then, ML implementations grew exponentially. Three years after the paper’s publication, MLOps became mainstream – 11 years after DevOps! Yes, it took that long to combine the two. The reason is simple – AI itself became mainstream only a few years back, in 2016, 2018, or 2019 (the year is debatable).

MLOps Lifecycle

MLOps brings the DevOps principles to your ML workflow. It allows continuous integration into data science workflows, automates code creation and testing, helps create repeatable training pipelines, and then provides continuous deployment workflow to automate the package, model validation, and deployment to the target server. It then monitors the pipeline, infrastructure, model performance, and new data and creates a data feedback flow to restart the pipeline.
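To make the loop concrete, here is a small, self-contained sketch of one iteration: train, validate against a deployment threshold, and retrain when live performance degrades. The threshold, metric and synthetic data are illustrative assumptions, not a prescribed MLOps stack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

DEPLOY_THRESHOLD = 0.75  # hypothetical validation gate


def train_and_validate(X, y):
    """Repeatable training step with an automated validation gate."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    if auc < DEPLOY_THRESHOLD:
        raise RuntimeError(f"Model below threshold (AUC={auc:.2f}); not deployed")
    return model, auc


def monitor_and_retrain(model, X_live, y_live, X_hist, y_hist):
    """Data feedback flow: degradation on live data restarts the pipeline."""
    live_auc = roc_auc_score(y_live, model.predict_proba(X_live)[:, 1])
    if live_auc < DEPLOY_THRESHOLD:
        return train_and_validate(np.vstack([X_hist, X_live]),
                                  np.concatenate([y_hist, y_live]))
    return model, live_auc


# Example run with synthetic data standing in for a real feature pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model, auc = train_and_validate(X, y)
print(f"Deployed with validation AUC {auc:.2f}")
```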


These practices, involving data engineers, data scientists, and ML engineers, enable the retraining of models.

All seems hunky-dory at this stage; however, in my numerous encounters with enterprise customers, and after going through several use cases, I have seen MLOps, although evolutionary and state-of-the-art, fail several times to deliver the expected result or RoI. The foremost reasons, often discovered, are:

  • A singular, unmotivated performance-monitoring approach
  • Unavailability of KPIs to set and measure performance
  • And a lack of thresholds for raising model degradation alerts (a minimal drift-monitoring sketch follows this list)
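Here is the drift-monitoring sketch referred to in the list above. It uses one common convention, the Population Stability Index (PSI) on model scores, as a concrete degradation KPI; the bin count, the 0.2 alert threshold and the synthetic score distributions are illustrative assumptions rather than a standard every team must adopt.

```python
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0] = min(cuts[0], actual.min())   # widen edges to cover live data
    cuts[-1] = max(cuts[-1], actual.max())
    e_pct = np.histogram(expected, bins=cuts)[0] / len(expected)
    a_pct = np.histogram(actual, bins=cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-4, None)     # avoid division by zero in the log
    a_pct = np.clip(a_pct, 1e-4, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


# Synthetic stand-ins for scores at training time vs. scores in production.
rng = np.random.default_rng(1)
train_scores = rng.beta(2, 5, size=5000)
live_scores = rng.beta(2.5, 4, size=2000)

value = psi(train_scores, live_scores)
if value > 0.2:  # common rule of thumb: PSI above 0.2 signals significant shift
    print(f"ALERT: score distribution has drifted, PSI={value:.3f}")
else:
    print(f"OK: PSI={value:.3f}")
```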

These technical oversights often arise from the lack of MLOps standardization; however, a few business factors, such as a lack of discipline, understanding, or resources, can also slow down or disrupt your entire ML operation.
