Techniques for making cloud application migrations simpler

Many system integrators and cloud hosting providers claim to have their own unique process for cloud migrations. The reality is everyone’s migration process is just about the same, but that doesn’t mean they’re all created equal. What really matters is in the details of how the work gets done. Application transformation and migration require skilled people, an understanding of your business and applications, deep experience in performing migrations, automated processes and specialized tools. Each of these capabilities can directly impact costs, timelines and ultimately the success of the migrations. Here are four ways to help ensure a fast and uneventful move:

1: Create a solid readiness roadmap.


The cloud readiness stage is often referred to as the business case phase and typically involves a total cost of ownership assessment, rapid discovery, advisory services and more. However, most of these approaches fall short of the up-front analysis really needed to decide which applications to move, refactor or replace. When you embark on a migration to cloud, job one is to ensure you are ready to go. Diving right into a migration without a roadmap is typically a recipe for failure. If you don’t know what the business case is to migrate, then stop right there. You should come out of the readiness phase with a high-level plan, an initial set of migration sprints and a detailed roadmap on how to address the subsequent sprints.

2: Design a detailed migration plan.


This phase, also referred to as deep discovery, will help keep migration activities on schedule. All too often, lack of proper analysis and design leads to surprises in the migration phase that can have a domino effect across multiple applications. This phase should include a detailed analysis of the scope of applications and workloads to be migrated, sprint by sprint. This agile process shortens migration time and ensures faster time to value in the cloud. All unknowns are uncovered, and changes are incorporated in the roadmap to ensure anticipated business objectives will be achieved. 

3: Set a strict cadence — and stick to it.


Often, most of the planning effort focuses on getting past that first migration sprint. This phase requires a great deal of planning and preparation across multiple stakeholders to keep the project on time and within budget. Scheduling all of the resources and tracking the tasks that lead up to D-day is no simple job. These activities must follow a standard cadence of planning steps to ensure effective and efficient migrations.

4: Automate as much of the migration as possible.


The way your organization performs the migration can also impact the project. In most cases, tools that automate the application transformation process, as well as the migration to cloud, can mean huge savings in time and money. Unfortunately, there’s no off-the-shelf automation tool to ensure success. The migration phase requires people experienced in a range of tools and in how to orchestrate the work.

One could argue that a fifth way to ease migration headaches is optimizing applications during the process. However, there are different schools of thought on whether to optimize applications and workloads before, during or after migration. I can’t say there is a definitive right or wrong answer, but most organizations prefer to optimize after migration. This allows for continuous tuning and optimization of the application moving forward.

It is important to note that steps in the migration process are not one-and-done activities. The best migration and transformation services are iterative and continuous.

You can’t determine readiness only once. The readiness plan needs to be reviewed on an ongoing basis. Don’t analyze and design migrations just once. Do it sprint by sprint, and plan each migration activity. Make sure you continuously collect and refresh data, refine and prioritize the backlog, and look for key lessons learned to ensure continuous improvement through the lifecycle of the entire migration. Experience and planning can go a long way toward easing your next application migration.

Cloud-automated IT eliminates shortcomings of the traditional IT model


For small and medium-sized companies, the traditional IT model of an on-premises network built and operated with an in-house IT staff is often fraught with problems.  IT teams are stretched to capacity simply keeping the lights on and there are always going to be issues with systems performance, network reliability, end user productivity, data recovery, security and the help desk.  Staffing is another constant challenge. And if companies decide to outsource some IT functions, they end up with multiple service providers, which creates additional headaches.

The problem is highlighted in the 2018 State of the CIO survey conducted by CIO Magazine, in which CIOs said their role is becoming more digital and innovation focused, but they don’t have time to address those pressing needs because they are bogged down with functional duties.

The common business challenges reported in the survey are often symptoms of needs that are misaligned with technologies and service providers, plus the lack of budget to address the problems.  Those challenges typically revolve around:

  • security and disaster recovery
  • lost productivity due to downtime and end users waiting for support
  • shadow IT and users not following security training
  • escalating or unpredictable costs
  • outdated infrastructure
  • shortage of talent
  • multiple tech providers and systems working at cross-purposes

Smaller businesses that may have turned to traditional managed service providers quickly discover these services lack seamlessness and result in reactive support challenges. According to the Ponemon Institute’s State of Endpoint Security Risk Report, 80% of these reactive issues can be eliminated through standardization and automation.

Moving to a cloud-based IT infrastructure


Cloud-automated IT services can address these challenges and free up IT teams so they can align their strategies with the business.  Cloud-automated IT does all this by providing seamless, proactive support, enterprise-level security and compliance, redundant systems, highly available virtual desktops, continuous upgrades, predictable costs and improved end user productivity. The cloud-based IT infrastructure is managed by a team of experienced, knowledgeable, highly trained professionals who apply industry best practices.

Aligning the technologies in the IT infrastructure and having experts managing them delivers several key benefits:

  • Cost predictability: Not only are ongoing costs for an IT infrastructure that supports the business predictable and easy to budget for, but surprise costs for disasters like outages, ransomware and security breaches are essentially eliminated.


  • Reduced Friction and Risk: By standardizing on best-in-class, constantly updated technologies, friction is reduced between equipment, vendors, integration points, and service providers. This results in stronger security, performance, and ease of use.


  • User Satisfaction and Productivity: If you’ve ever muddled through a few days of work on a loaner laptop, you understand how important the user experience is to employee satisfaction and productivity. VDI and proactive management deliver a better computing experience and increased productivity. Instances can be refreshed just as you would refresh a cell phone.


With cloud-automated IT, the entire IT infrastructure moves to the cloud in a well-planned, efficient manner. The service provider helps create a personalized upgrade plan based on the specific needs of the organization, does a health check on existing applications, makes sure everything is configured properly and updates applications to the latest releases.

And cloud-automated IT goes far beyond simply lifting and shifting existing systems to the cloud. It is a holistic approach that aligns cutting-edge technologies and fully managed services to provide smaller companies enterprise-grade IT that can make a significant impact on the security and effectiveness of the business.

When should you abandon your ‘lift and shift’ cloud migration strategy?


The easy approach to transitioning applications to the cloud is the simple “lift and shift” method, in which existing applications are simply migrated, as is, to a cloud-based infrastructure. And in some cases, this is a practical first step in a cloud journey. But in many cases, the smarter approach is to re-write and re-envision applications in order to take full advantage of the benefits of the cloud.

By rebuilding applications specifically for the cloud, companies can achieve dramatic results in terms of cost efficiency, improved performance and better availability. On top of that, re-envisioning applications enables companies to take advantage of the best technologies inherent in the cloud, like serverless architectures, and allows the company to tie application data into business intelligence systems powered by machine learning and AI.

Of course, not all applications can move to the cloud for a variety of regulatory, security and business process reasons. And not all applications that can be moved should be re-written because the process does require a cost and time commitment. The decision on which specific applications to re-platform and which to re-envision is a complex risk/benefit calculation that must be made on an application-by-application basis, but there are some general guidelines that companies should follow in their decision-making process.

What you need to consider


Before making any moves, companies need to conduct a basic inventory of their application portfolio.  This includes identifying regulatory and compliance issues, as well as downstream dependencies to map out and understand how applications tie into each other in a business process or workflow. Another important task is to assess the application code and the platform the application runs on to determine how extensive a re-write is required, and the readiness and ability of the DevOps team to accomplish the task.

The next step is to prioritize applications by their importance to the business. In order to get the most bang for the buck, companies should focus on applications that have the biggest business impact. For most companies, the priority has shifted from internal systems to customer-facing applications that might have special requirements, such as the ability to scale rapidly and accommodate seasonal demands, or the need to be ‘always available’. Many companies are finding their revenue-generating applications were not built to handle these demands, so those should rise to the top of the list.

Re-platform vs. re-envision


There are some scenarios where lift and shift makes sense:

  • Traditional data center. For many traditional, back-end data center applications, a simple lift and shift can produce distinct advantages in terms of cost savings and improved performance.
  • Newly minted SaaS solution. There are many customer bases that have newer SaaS offerings available to them, but perhaps the functionality or integrated solutions that are a core part of their operations are in the early stages of a development cycle. Moving the currently installed solution to the cloud via a lift and shift is an appropriate modernization step – and can easily be transitioned to the SaaS solution when the organization is ready.

However, there are two more scenarios where lift and shift strategies work against digital transformation progress.


  • Established SaaS solution. There is no justification, either in terms of cost or functionality, to remain on a legacy version of an application when there is a well-established SaaS solution.
  • Custom written and highly customized applications. This scenario calls for a total re-write to the cloud in order to take advantage of cloud-native capabilities.

By re-writing applications as cloud-native, companies can slash costs, embed security into those applications, and integrate multiple applications. Meanwhile, Windows Server 2008 and SQL Server 2008 end of life is fast approaching. Companies still utilizing these legacy systems will need to move applications off expiring platforms, providing the perfect impetus for modernizing now. There might be some discomfort associated with going the re-platform route, but the benefits are certainly worth the effort.

Choosing the proper implementation partner on your journey to the cloud


When you are planning a move to the cloud, choosing the right partner is critical. Even though it can be difficult to know exactly what to look for, there are things you can do in your search for an implementation partner that can help you make informed decisions and mitigate risks along the way.

Learn to spot a re-badged reseller


With the systemic shift to move IT infrastructure and applications to the cloud, there has been a dramatic increase in the demand for IT consulting services. This cloud economy has precipitated a situation where many vendors and resellers are re-inventing themselves as service providers rather than simply as technology sellers.

Organizations are setting themselves up as cloud service providers despite lacking the necessary qualifications to do so. These re-badged resellers tend to share a number of flaws, including limited experience within the team, limited knowledge of specific industries and solutions, and the lack of a service-oriented culture. These flaws can put the companies that choose to work with these new services organizations at risk.

They have neither the knowledge nor the experience to deliver specialized, high-value services to customers. They may hire some experienced staff but, without a strong strategic direction set by management and reinforced by an entrenched services culture, they are unlikely to be able to deliver the business transformation organizations seek.

Meet with the people who are actually doing the work


Organizations should beware of partners that introduce high-level consultants to the customer but get junior staff or offshore teams to execute the work. It is important to meet and speak with the team that is actually doing the work. The clarity and effectiveness of communications can suffer enormously when the team doing the work is not the same as the team speaking to the customer.

Come up with a list of demands

As an organization looking to move to the cloud, you may have a lot of questions, and having a partner with focus and experience deep enough to provide a high level of service is critical.

It is important to come up with a list of demands in your search for an implementation partner:

  • A mix of specific technology knowledge and business knowledge so the team can clearly understand the organization’s business imperatives and deliver cloud solutions accordingly
  • A strong physical presence and footprint in the industry with positive customer references, preferably from long-term customers in the same industry as your organization
  • A stable, well-qualified team with significant tenure at the organization, proving that the organization is a genuine player in the marketplace rather than a rebadged product reseller
  • Proven project control and governance methodologies that can be clearly explained
  • The ability to bring senior vendor representatives into any discussion to drive results

Ask questions


Organizations should ask the following questions to determine whether a potential partner is capable of delivering a successful cloud service:

  • What is your customer retention rate and how do you measure it?
  • Where will our data reside and what access controls are in place?
  • Is there a dedicated project manager for this implementation and what are his/her qualifications?
  • How will you ensure we have control of the system?
  • How will your team work with ours to ensure project success?

Once these questions are satisfactorily answered, the organization can move to the next stage of assessing whether the partner is suitable.

When it comes to defining a path to cloud, organizations should focus on providing increased business efficiencies, increasing user satisfaction and meeting business expectations, as well as addressing the risks identified. With the right partner in place, organizations can achieve enormous benefits and mitigate those risks.

NoOps automation: eliminating toil in the cloud


A wildlife videographer typically returns from a shoot with hundreds of gigabytes of raw video files on 512GB memory cards. It takes about 40 minutes to import the files into a desktop device, including various prompts from the computer for saving, copying or replacing files. Then the videographer must create a new project in a video-editing tool, move the files into the correct project and begin editing. Once the project is complete, the video files must be moved to an external hard drive and copied to a cloud storage service.

All of this activity can be classified as toil — manual, repetitive tasks that are devoid of enduring value and scale up as demands grow. Toil impacts productivity every day across industries, including systems hosted on cloud infrastructure. The good news is that much of it can be alleviated through automation, leveraging multiple existing cloud provider tools. However, developers and operators must configure cloud-based systems correctly, and in many cases these systems are not fully optimised and require manual intervention from time to time.

 Identifying toil

Toil is everywhere. Let’s take Amazon EC2 as an example. EC2 provides the compute capacity to build servers in the cloud, and Amazon Elastic Block Store (EBS) provides the associated storage. The storage units attached to EC2 instances are disks containing operating system and application data that grows over time; ultimately the disk and the file system must be expanded, which requires many steps to complete.

The high-level steps involved in expanding a disk are time consuming. They include:

  1. Get an alert on your favourite monitoring tool
  2. Identify the AWS account
  3. Log in to the AWS Console
  4. Locate the instance
  5. Locate the EBS volume
  6. Expand the disk (EBS)
  7. Wait for disk expansion to complete
  8. Expand the disk partition
  9. Expand the file system
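
As a rough illustration of what steps 6 through 9 look like when scripted, here is a minimal Python sketch using boto3. The volume ID, instance ID, growth percentage and device names are hypothetical assumptions, and it presumes a Linux instance managed by Systems Manager with an ext4 file system; a production version would need error handling and OS detection.

```python
# Minimal sketch (not production code): expand an EBS volume and grow the
# Linux file system on it. Volume/instance IDs and device names are
# hypothetical placeholders.
import time
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

VOLUME_ID = "vol-0123456789abcdef0"    # hypothetical
INSTANCE_ID = "i-0123456789abcdef0"    # hypothetical

# Step 6: expand the EBS volume by roughly 20%
current_size = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
new_size = int(current_size * 1.2) + 1
ec2.modify_volume(VolumeId=VOLUME_ID, Size=new_size)

# Step 7: wait until the volume modification is usable
while True:
    mods = ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])
    state = mods["VolumesModifications"][0]["ModificationState"]
    if state in ("optimizing", "completed"):
        break
    time.sleep(10)

# Steps 8-9: grow the partition and file system on the instance via Run Command
ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": [
        "sudo growpart /dev/xvda 1",
        "sudo resize2fs /dev/xvda1",
    ]},
)
```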

One way to eliminate these tasks is by allocating a large amount of disk space, but that wouldn’t be economical. Unused space drives up EBS costs, but too little space results in system failure. Thus, optimising disk usage is essential.

This example qualifies as toil because it has some of these key features:

  1. The disk expansion process is managed manually. Plus, these manual steps have no enduring value and grow linearly with user traffic.
  2. The process will need to be repeated on other servers as well in the future.
  3. The process can be automated, as we will soon learn.

The move to NoOps

Traditionally, this work is performed by IT operations, known as the Ops team. Ops teams come in a variety of forms, but their primary objective remains the same – to ensure that systems are operating smoothly. When they are not, the Ops team responds to the event and resolves the problem.

NoOps is a concept in which operational tasks are automated, and there is no need for a dedicated team to manage the systems. NoOps does not mean operators would slowly disappear from the organisation, but they would now focus on identifying toil, finding ways to automate the task and, finally, eliminating it. Some of the tasks driven by NoOps require additional tools to achieve automation. The choice of tool is not important as long as it eliminates toil.

Figure 1 – NoOps approach in responding to an alert in the system

In our disk expansion example, the Ops team typically would receive an alert that the system is running out of space. A monitoring tool would raise a ticket in the IT Service Management (ITSM) tool, and that would be the end of the cycle.

Under NoOps, the monitoring tool would send a webhook callback to the API gateway with the details of the alert, including the disk and the server identifier. The API gateway then forwards this information and triggers Simple Systems Manager (SSM) automation commands, which would increase the disk size. Finally, a member of the Ops team is automatically notified that the problem has been addressed.
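
As a hedged sketch of that hand-off, the following Lambda-style Python handler, sitting behind an API Gateway proxy integration, parses a hypothetical alert payload and starts an SSM automation document. The payload fields, parameter names and the document name "ExpandDiskAutomation" are illustrative assumptions, not a specific vendor implementation.

```python
# Sketch of a webhook receiver behind API Gateway that kicks off an SSM
# automation runbook. Payload fields ("volume_id", "instance_id") and the
# document name "ExpandDiskAutomation" are hypothetical.
import json
import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    alert = json.loads(event["body"])        # webhook callback body from the monitoring tool

    execution = ssm.start_automation_execution(
        DocumentName="ExpandDiskAutomation",  # hypothetical automation document
        Parameters={
            "VolumeId": [alert["volume_id"]],
            "InstanceId": [alert["instance_id"]],
        },
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"executionId": execution["AutomationExecutionId"]}),
    }
```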

 AWS Systems Manager automation

Resetting SSH key access to your EC2 Instance through Systems Manager Automation - BlueChipTek

The monitoring tool and the API gateway play an important role in detecting and forwarding the alert, but the brains of NoOps is AWS Systems Manager automation.

This service builds automation workflows for the nine manual steps needed for disk expansion through an SSM document, a system-readable instruction written by an operator. Some tasks may even involve invoking other systems, such as AWS Lambda and other AWS services, but the orchestration of the workflow is achieved by SSM automation, as shown in the following steps (task name, SSM automation action and comments):

  1. Get trigger details and expand volume (aws:invokeLambdaFunction): Using Lambda, determine the exact volume and expand it based on a pre-defined percentage or value.
  2. Wait for the disk expansion (aws:waitUntilVolumeIsOkOnAws): The expansion would fail if the workflow moved to the next steps without waiting for it to complete.
  3. Get OS information (aws:executeAwsApi): Windows and Linux distros have different commands to expand partitions and file systems.
  4. Branch the workflow depending on the OS (aws:branch): The automation task is branched based on the OS.
  5. Expand the disk (aws:runCommand): The branched workflow runs commands on the OS that expand the disk gracefully.
  6. Send notification to the ITSM tool (aws:invokeLambdaFunction): Send a report on the success or failure of the NoOps task for documentation.
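
To make the orchestration a little more concrete, here is a hedged, abbreviated sketch of what such an automation document's skeleton might look like, expressed as a Python dictionary registered with create_document. Step names, the Lambda function name, selectors and branch values are placeholders, and the per-OS run-command and notification steps are elided.

```python
# Hedged sketch of an SSM automation document skeleton (placeholders only).
# The Lambda name "expand-volume-fn" and document name are hypothetical.
import json
import boto3

doc = {
    "schemaVersion": "0.3",
    "parameters": {
        "VolumeId": {"type": "String"},
        "InstanceId": {"type": "String"},
    },
    "mainSteps": [
        {"name": "ExpandVolume", "action": "aws:invokeLambdaFunction",
         "inputs": {"FunctionName": "expand-volume-fn",
                    "Payload": "{\"volumeId\": \"{{ VolumeId }}\"}"}},
        {"name": "GetOsInfo", "action": "aws:executeAwsApi",
         "inputs": {"Service": "ec2", "Api": "DescribeInstances",
                    "InstanceIds": ["{{ InstanceId }}"]},
         "outputs": [{"Name": "Platform",
                      "Selector": "$.Reservations[0].Instances[0].PlatformDetails",
                      "Type": "String"}]},
        {"name": "BranchOnOs", "action": "aws:branch",
         "inputs": {"Choices": [{"NextStep": "ExpandLinuxFileSystem",
                                 "Variable": "{{ GetOsInfo.Platform }}",
                                 "StringEquals": "Linux/UNIX"}],
                    "Default": "ExpandWindowsFileSystem"}},
        # ... aws:runCommand steps per OS, plus a final aws:invokeLambdaFunction
        # step that reports success or failure back to the ITSM tool ...
    ],
}

ssm = boto3.client("ssm")
ssm.create_document(Name="ExpandDiskAutomation", DocumentType="Automation",
                    DocumentFormat="JSON", Content=json.dumps(doc))
```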

Applying NoOps across IT operations


This example shows the potential for improving operator productivity through automation, a key benefit of AWS cloud services. This level of NoOps can also be achieved through tools and services from other cloud providers to efficiently operate and secure hybrid environments at scale. For AWS deployments, Amazon EventBridge and AWS Systems Manager OpsCenter can assist in building event-driven application architectures, resolving issues quickly and, ultimately, eliminating toil.

Other NoOps use cases include:

  • Automatically determine the cause of system failures by extracting the appropriate sections of the logs and appending these into the alerting workflow.
  • Perform disruptive tasks in bulk, such as scripted restart of EC2 instances with approval on multiple AWS accounts.
  • Automatically amend the IPs in the allowlist/denylist of a security group when a security alert is triggered on the monitoring tool.
  • Automatically restore data/databases using service requests.
  • Automatically identify high CPU/memory processes and kill or restart them if required.
  • Automatically clear temporary files when disk utilization is high.
  • Automatically execute EC2 rescue when an EC2 instance is dead.
  • Automatically take snapshots/Amazon Machine Images (AMIs) before any scheduled or planned change.

In the case of the wildlife videographer, NoOps principles could be applied to eliminate repetitive work. A script can automate the processes of copying, loading, creating projects and archiving files, saving countless hours of work and allowing the videographer to focus on core aspects of production.

For cloud architectures, NoOps should be seen as the next logical iteration of the Ops team. Eliminating toil is essential to help operators focus on site reliability and improving services.

Here’s how a cloud-based platform can help banks leverage data!


Banks are finding exciting new ways to turn their data into valuable insights. To succeed in this new data-driven world, banks of all sizes are turning to the cloud. Cloud-based solutions provide the optimal storage and tools needed to manage vast data requirements while making data and insights easily accessible for analytics and the business.

Enterprise data and the insights extracted from that data are the fuel for business transformation, and banks are increasingly looking for a platform with robust analytics capabilities, self-service tools, centralized data governance and the ability to meet regulatory requirements.

Banks are in an era where they must modernize or risk failure. Legacy environments carry high support costs, create siloed IT environments and face challenges in finding skilled resources to support them.

Often, disparate technology stacks prevent the enterprise from being truly connected through data and insights, underscoring the need for an enterprise-wide, integrated solution. To address these challenges, Anteelo and Google are focusing on ways to create an optimal data-centric ecosystem, combining Google Cloud’s native tools with managed services, thus simplifying the management of cloud resources and optimizing analytics programs.

Power of the cloud


Much of the power of the cloud comes from creating a single platform that combines management, monitoring and automation features with security and compliance capabilities at the core of banking operations. This foundation, along with access to analytics products that underpin some of the world’s most widely used services, gives banks the ability to extract valuable business insights from large data sets.

Actionable insights can be gained as enterprise data, such as transaction information and customer behavior trends, is processed using Google Cloud’s automation capabilities, driving operational efficiency, new revenues and the ability to compete more effectively.

Google analytics tools such as Dataflow, BigQuery, Bigtable and Looker enable organizations to advise, implement and operationalize artificial intelligence (AI) to yield competitive business intelligence. Much of this can be automated and integrated into actively managed business processes that ultimately produce tangible, repeatable business outcomes.
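
As a small, hedged illustration of the kind of analysis these tools enable, the snippet below uses the google-cloud-bigquery Python client to aggregate recent transactions by customer segment. The project, dataset, table and column names are hypothetical placeholders.

```python
# Minimal sketch: aggregate card transactions per customer segment in BigQuery.
# The table "my-bank-project.analytics.transactions" and its columns are
# hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-bank-project")

query = """
    SELECT customer_segment,
           COUNT(*)    AS txn_count,
           SUM(amount) AS total_spend
    FROM `my-bank-project.analytics.transactions`
    WHERE txn_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY customer_segment
    ORDER BY total_spend DESC
"""

for row in client.query(query).result():
    print(row.customer_segment, row.txn_count, row.total_spend)
```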

Why move to the cloud?


Analytics from a cloud-based platform can deliver benefits on many different levels. For a chief marketing officer, for example, real-time information on marketing campaign ROI is a valuable metric that is not as readily available or actionable with today’s on-premises systems. Or a chief information security officer could get access to data that improves a bank’s governance capabilities, which delivers benefits in areas such as security and compliance.

Banks are also using open APIs enabled by Google Cloud to authenticate and secure financial communications to customers, enabling a transition from face-to-face transactions to digital channels.

Banks also can use a wide range of cloud-native services for data warehousing and data management. Plus, a wealth of AI and machine learning tools that are integrated effectively can help automate tasks and improve personalized experiences. For example, cloud computing’s elastic capabilities have been shown to cut the training time of data models by more than 50 percent over traditional on-premises systems. Data models converge faster, enabling more rapid delivery of services to markets and enhanced services to consumers.

Another key benefit of moving to the cloud and engaging managed services is that applications can be managed consistently across different environments. Operating in a more efficient development environment means banks can rapidly introduce new products and services to the market and update them more easily.

Banks also have access to high-performance computing (HPC) resources for large processing needs. Cloud computing provides banks with modern alternatives for HPC in the form of containers and purpose-built hardware such as graphics processing units and tensor processing units.

Traditional and legacy on-premises data architectures simply cannot support the variety of data or real-time data streams necessary for advanced enterprise analytics and AI. A modern data architecture is needed. Plus, a cloud environment enables banks to dynamically scale infrastructure up or down to meet demand, which is essential for digital success.

Trust, transform and thrive


Progressing data into actionable insights can be visualized as a process following these three keywords: trust, transform and thrive (see figure). Underpinning that process is a cloud platform tuned to improve visibility, security, scalability, speed and agility.

Analytics: Progressing data to actionable insights


In this model, data from many different sources, including unstructured and siloed data, is collected and ingested. The built-in security features of Google Cloud Platform keep data secure and help build trust in the enterprise. In a Google Cloud Platform deployment managed by Anteelo, for example, security and regulatory compliance are given the highest priority. Google Cloud Platform’s capabilities in areas such as network monitoring and identity access management help banks maintain a high level of data security and compliance.

A trusted partner


Choosing the right partner is crucial when moving to a modern cloud banking platform. It is preferable to have a trusted advisor with extensive experience and IT expertise in banking, capital markets and financial services. Anteelo has been providing banking services at an enterprise level for more than 40 years.  Through our unmatched industry expertise we offer a robust set of data services using simplified tools and automation for rapid data acquisition and insights. We help banks identify patterns that can be used to drive business insights in a way that fits their needs. The key is to then prepackage these patterns so organizations can reuse them.

The potential is endless for a cloud platform and the analytics capabilities it delivers to help banks build trust, transform and thrive. A managed cloud platform, purpose-built for banking, provides a consistent surface for accessing the most accurate, up-to-date version of your enterprise’s data. And the technology now exists for analytics to be integrated into daily workflows so banks can extract maximum value from that data.

Developing for Azure autoscaling


The public cloud (i.e. AWS, Azure, etc.) is often portrayed as a panacea for all that ails on-premises solutions. And along with this “cure-all” impression are a few misconceptions about the benefits of using the public cloud.

One common misconception pertains to autoscaling, the ability to automatically scale up or down the number of compute resources being allocated to an application based on its needs at any given time.  While Azure makes autoscaling much easier in certain configurations, parts of Azure don’t as easily support autoscaling.

For example, if you look at the different App Service plans, you will see that the lower three tiers (Free, Shared and Basic) do not include support for autoscaling, while the higher tiers (Standard and above) do. There are also ways to design and architect your solution to make better use of autoscaling. The point being, just because your application is running in Azure does not necessarily mean you automatically get autoscaling.

Scale out or scale in


In Azure, you can scale up vertically by changing the size of a VM, but the more popular way Azure scales is to scale out horizontally by adding more instances. Azure provides horizontal autoscaling via numerous technologies. For example, Azure Cloud Services, the legacy technology, provides autoscaling automatically at the role level. Azure Service Fabric and virtual machines implement autoscaling via virtual machine scale sets. And, as mentioned, Azure App Service has built-in autoscaling for certain tiers.

When it is known ahead of time that a certain date or time period (such as Black Friday) will warrant scaling out horizontally to meet anticipated peak demands, you can create a static, scheduled scaling rule. This is not “auto” scaling in the true sense. Rather, the ability to dynamically and reactively autoscale is typically based upon runtime metrics that reflect a sudden increase in demand. Monitoring metrics and taking compensatory instance-adjustment actions when a metric reaches a certain value is the traditional way to dynamically autoscale.

Tools for autoscaling


Azure Monitor provides that metric monitoring with autoscale capabilities. Azure Cloud Services, VMs, Service Fabric and VM scale sets can all leverage Azure Monitor to trigger and manage autoscaling via rules. Typically, these scaling rules are based on memory, disk and CPU metrics.

For applications that require custom autoscaling, it can be done using metrics from Application Insights. When you create an Azure application and you want to scale it, you should make sure to enable Application Insights for proper scaling. You can create a custom metric in code and then set up an autoscale rule using that custom metric via the Application Insights metric source in the portal.
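
As one way to emit such a custom metric, here is a minimal Python sketch using the (older) applicationinsights package's TelemetryClient; the instrumentation key, metric name and sampled value are placeholders, and the autoscale rule targeting this metric would still be configured separately in the portal.

```python
# Sketch: publish a custom metric that an autoscale rule can later target.
# The instrumentation key and metric name are placeholders.
from applicationinsights import TelemetryClient

tc = TelemetryClient("<your-instrumentation-key>")

# e.g. report the current depth of an internal work queue each time it is sampled
pending_jobs = 42  # in practice, read this from your queue or job store
tc.track_metric("PendingJobs", pending_jobs)
tc.flush()  # ensure the telemetry is sent before the process exits
```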

Design considerations for autoscaling


When writing an application that you know will be auto-scaled at some point, there are a few base implementation concepts you might want to consider:

  • Use durable storage to store your shared data across instances. That way any instance can access the storage location and you don’t have instance affinity to a storage entity.
  • Seek to use only stateless services. That way you don’t have to make any assumptions on which service instance will access data or handle a message.
  • Realize that different parts of the system have different scaling requirements (which is one of the main motivators behind microservices). You should separate them into smaller discrete and independent units so they can be scaled independently.
  • Avoid any operations or tasks that are long-running. This can be facilitated by decomposing a long-running task into a group of smaller units that can be scaled as needed. You can use what’s called a Pipes and Filters pattern to convert a complex process into units that can be scaled independently (a minimal sketch follows this list).
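
Here is a minimal, generic sketch of the Pipes and Filters idea in Python: each filter is an independent stage connected by queues, so each stage could later become its own service fed by a message queue and be scaled on its own. The stage transforms and sample data are placeholders.

```python
# Minimal Pipes and Filters sketch: independent stages connected by queues.
# In a cloud deployment each stage would typically be its own service or
# function fed by a message queue, so it can scale independently.
import queue
import threading

def filter_stage(work, transform, out):
    """Consume items from `work`, apply `transform`, push results to `out`."""
    while True:
        item = work.get()
        if item is None:          # sentinel: shut this stage down
            out.put(None)
            break
        out.put(transform(item))

raw, parsed, done = queue.Queue(), queue.Queue(), queue.Queue()

# Placeholder transforms standing in for real processing steps
stages = [
    threading.Thread(target=filter_stage, args=(raw, str.strip, parsed)),
    threading.Thread(target=filter_stage, args=(parsed, str.upper, done)),
]
for s in stages:
    s.start()

for record in ["  order-1 ", " order-2  "]:
    raw.put(record)
raw.put(None)                      # signal end of input

for s in stages:
    s.join()

while not done.empty():
    item = done.get()
    if item is not None:
        print(item)                # ORDER-1, ORDER-2
```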

Scaling/throttling considerations

Autoscaling can be used to keep the provisioned resources matched to user needs at any given time. But while autoscaling can trigger the provisioning of additional resources as needs dictate, this provisioning isn’t immediate. If demand unexpectedly increases quickly, there can be a window where there’s a resource deficit because they cannot be provisioned fast enough.

An alternative strategy to auto-scaling is to allow applications to use resources only up to a limit and then “throttle” them when this limit is reached. Throttling may need to occur when scaling up or down since that’s the period when resources are being allocated (scale up) and released (scale down).

The system should monitor how it’s using resources so that, when usage exceeds the threshold, it can throttle requests from one or more users. This will enable the system to continue functioning and meet any service level agreements (SLAs). You need to consider throttling and scaling together when figuring out your auto-scaling architecture.
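
As a generic illustration of the throttling side of that trade-off, here is a small token-bucket sketch in Python; the rate and capacity values are arbitrary placeholders, and a real system would typically meter per user or per tenant and return an HTTP 429 or queue the request when throttled.

```python
# Minimal token-bucket throttle: allow bursts up to `capacity`, refill at
# `rate` tokens per second, and throttle requests once the bucket is empty.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill based on elapsed time, never exceeding capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False               # caller should throttle: reject or queue the request

bucket = TokenBucket(rate=5, capacity=10)   # placeholder numbers
for i in range(15):
    print(i, "accepted" if bucket.allow() else "throttled")
```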

Singleton instances


Of course, auto-scaling won’t do you much good if the problem you are trying to address stems from the fact that your application is based on a single cloud instance. Since there is only one shared instance, a traditional singleton object goes against the positives of the multi-instance high scalability approach of the cloud. Every client uses that same single shared instance and a bottleneck will typically occur. Scalability is thus not good in this case so try to avoid a traditional singleton instance if possible.

But if you do need to have a singleton object, instead create a stateful object using Service Fabric with its state shared across all the different instances.  A singleton object is defined by its single state. So, we can have many instances of the object sharing state between them. Service Fabric maintains the state automatically, so we don’t have to worry about it.

The Service Fabric object type to create is either a stateless web service or a worker service. This works like a worker role in an Azure Cloud Service.

Five major cloud developments to watch in 2021


We rang in 2020 with all the expectations that cloud computing would continue its progression as a massive catalyst for digital transformation throughout the enterprise. What we didn’t expect was a worldwide health crisis that led to a huge jump in cloud usage.

Cloud megadeals have heralded a new era where cloud development is a key driver in how organizations deploy operating models and platforms. In just the past 6 months, we saw 6 years’ worth of changes in the way businesses are run.

There has been a drastic shift to remote work – with the percentage of workers using desktop services in the cloud skyrocketing. Gallup estimates that as of September, nearly 40 percent of full-time employees were working entirely from home, compared to 4 percent before the crisis.

We are seeing renewed interest in workload migration to public cloud or hybrid environments. Gartner forecasts that annual spending on cloud development system infrastructure services will grow from $44 billion in 2019 to $81 billion by 2022.

In light of these underlying changes in the business landscape, here are some key cloud trends that will reshape IT in 2021:

Remote working continues to drive cloud and security services


The year 2020 saw a huge expansion of services available in the public cloud to support remote workers. Big improvements in available CPUs mean remote workers can have access to high-end graphics capabilities to perform processing-intense tasks such as visual renderings or complex engineering work. And as worker access to corporate networks increases, the cloud serves as an optimal platform for a Zero Trust approach to security, which requires verification of all identities trying to connect to systems before access can be granted.

Latest system-on-a-chip technology will support cloud adoption


The introduction of the Apple M1 in November marked the beginning of an era of computing where essentially the whole computer system resides on a single chip, which delivers incredible cost savings. Apple’s Arm-based system on a chip represents a radical transformation away from traditional Intel x86 architectures. In 2020, Amazon Web Services also launched Graviton2, based on 64-bit Arm architecture, a processor that Amazon says provides up to a 40 percent improvement in price performance for a variety of cloud workloads, compared to current x86-based instances.

These advancements will help enterprises migrate to the cloud more easily with a compelling price-to-performance story to justify the move.

Move to serverless computing will expand cloud skills


More than ever, companies are retooling their operating models to adopt serverless computing and letting cloud providers run the servers, prompting enormous changes in the way they operate, provide security, develop, test and deploy.

As serverless computing grows, IT personnel can quickly learn and apply serverless development techniques, which will help expand cloud skills across the industry. Today’s developers often prefer a serverless environment so they can spend less time provisioning hardware and more time coding. While many enterprises are challenged with the financial management aspects of predicting consumption in a serverless environment, cost optimization services can play a key role in overcoming these risks.

Machine learning and AI workloads will expand in the public cloud


As a broad range of useful tools becomes available, the ability to operate machine learning and AI in a public cloud environment – and integrate them into application development at a low cost – is moving forward very quickly.

For example, highly specialized processors, such as Google’s TPU and AWS Trainium, can manage the unique characteristics of AI and machine learning workloads in the cloud. These chips can dramatically decrease the cost of computing while delivering better performance. Adoption will grow as organizations figure out how to leverage them effectively.

More data will move to the cloud


Data gravity is the idea that large masses of data exert a form of gravitational pull within IT systems and attract applications, services and even other data. Public cloud providers invite free data import, but data export carries a charge.

This is prompting enterprises to build architectures that optimize for not paying that egress charge, which means pushing workloads and their data to reside in a single cloud, rather than multicloud environments. Data usage in the cloud can eventually amass enough gravity to increase the cost and consumption of cloud services.

However, as cloud technology continues to mature, organizations should not be afraid to duplicate data in the cloud – that is, it is perfectly fine to have data in different formats in the cloud. A key goal is to have your data optimized for the way you access it, and the cloud allows you to do that.

The journey continues


So as the world around us changed in unprecedented ways in 2020, cloud computing continued its march to enterprise ubiquity and is a key layer of the Enterprise Technology Stack. Yet, by most measures, cloud is still only something like 5 to 10 percent of IT, so there is still a lot of room to grow. Getting the most out of cloud technology going forward is not a 5-year journey; it may take many years to truly tap into its full potential.

The shift in operational models and the organizational change needed to fully embrace cloud computing is significant. It is not a minor task to undergo that transformation journey. In early 2022, we will most certainly look back and be amazed again at how far cloud has progressed in one year.

Knowing how to use Azure Databricks and its resource groups


Azure Databricks, an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud, is a highly effective open-source tool, but it automatically creates resource groups and workspaces and protects them with a system-level lock, all of which can be confusing and frustrating unless you understand how and why.

The Databricks platform provides an interactive workspace that streamlines collaboration between data scientists, data engineers and business analysts. The Spark analytics engine supports machine learning and large-scale distributed data processing, combining many aspects of big data analysis all in one process.

Spark works on large volumes of data in either batch (data at rest) or streaming (live) processing mode. The live processing capability is how Databricks/Spark differs from Hadoop (which uses MapReduce algorithms to process only batch data).

Resource groups are key to managing the resources bound to Databricks. Typically, you specify the resource group in which your resources are created. This changes slightly when you create an Azure Databricks service instance and specify a new or existing resource group. Say, for example, we are creating a new resource group: Azure will create the group and place a workspace within it. That workspace is an instance of the Azure Databricks service.

Along with the directly specified resource group, it will also create a second resource group. This is called a “Managed resource group” and it starts with the word “databricks.” This Azure-managed group of resources allows Azure to provide Databricks as a managed service. Initially this managed resource group will contain only a few workspace resources (a virtual network, a security group and a storage account). Later, when you create a cluster, the associated resources for that cluster will be linked to this managed resource group.

The “databricks-xxx” resource group is locked when it is created since the resources in this group provide the Databricks service to the user. You are not able to directly delete the locked group nor directly delete the system-owned lock for that group. The only option is to delete the service, which in turn deletes the infrastructure lock.
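
If you want to see this from code, the following hedged Python sketch uses the azure-identity and azure-mgmt-resource packages to list the resources and the system-owned lock in a managed resource group; the subscription ID is a placeholder, and the group name is taken from the example deployment later in this article.

```python
# Sketch: inspect a Databricks managed resource group and its lock.
# Subscription ID is a placeholder; the group name comes from the example below.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient, ManagementLockClient

SUBSCRIPTION_ID = "<subscription-id>"
MANAGED_RG = "databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km"

credential = DefaultAzureCredential()
resources = ResourceManagementClient(credential, SUBSCRIPTION_ID)
locks = ManagementLockClient(credential, SUBSCRIPTION_ID)

# Resources Azure created on your behalf (VNet, NSG, storage account, cluster VMs...)
for res in resources.resources.list_by_resource_group(MANAGED_RG):
    print(res.type, res.name)

# The system-owned lock that prevents deletes and tag changes
for lock in locks.management_locks.list_at_resource_group_level(MANAGED_RG):
    print(lock.name, lock.level)
```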


With respect to Azure tagging, the lock placed upon that Databricks managed resource group prevents you from adding any custom tags, from deleting any of the resources or doing any write operations on a managed resource group resource.

Example Deployment

Let’s take a look at what happens when you create an instance of the Azure Databricks service with respect to resources and resource groups:

Steps

  1. Create an instance of the Azure Databricks service
  2. Specify the name of the workspace (here we used nwoekcmdbworkspace)
  3. Specify to create a new resource group (here we used nwoekcmdbrg) or choose an existing one
  4. Hit Create

Results

  1. Creates nwoekcmdbrg resource group
  2. Automatically creates nwoekcmdbworkspace, which is the Azure Databricks Service. This is contained within the nwoekcmdbrg resource group.
  3. Automatically creates databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km resource group. This contains a single storage account, a network security group and a virtual network.


Click on the workspace (Azure Databricks service), and it brings up the workspace with a “Launch Workspace” button.


Launching the workspace uses AAD to sign you into the Azure Databricks service. This is where you can create a Databricks cluster or run queries, import data, create a table, or create a notebook to start querying, visualizing and modifying your data. Here, we create a new cluster to demonstrate where the resources for the appliance are stored.
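
For those who prefer automation over the portal UI, here is a hedged sketch that creates a comparable cluster through the Databricks REST API 2.0 clusters/create endpoint using the requests library; the workspace URL, personal access token, runtime version and node type are placeholders.

```python
# Sketch: create a small Databricks cluster via the REST API.
# Workspace URL, token, Spark version and node type are placeholders.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_name": "demo-cluster",
        "spark_version": "7.3.x-scala2.12",   # placeholder runtime version
        "node_type_id": "Standard_DS3_v2",    # placeholder Azure VM size
        "num_workers": 2,
        "autotermination_minutes": 30,
    },
)
resp.raise_for_status()
print("cluster_id:", resp.json()["cluster_id"])
```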


After the cluster is created, a number of new resources appear in the Azure Databricks managed resource group databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km. Instead of merely containing a single VNet, NSG and storage account as it did initially, it now contains multiple VMs, disks, network interfaces, and public IP addresses.


The workspace nwoekcmdbworkspace and the original resource group nwoekcmdbrg both remain unchanged, as all changes are made in the managed resource group databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km. If you click on “Locks,” you can see there is a read-only lock placed on it to prevent deletion. Clicking on the “Delete” button yields an error saying the lock could not be deleted. If you change the tags on the original resource group, the changes will be reflected in the “databricks-xxx” resource group. But you cannot change tag values in the databricks-xxx resource group directly.


Summary

When using Azure Databricks, it can be confusing when a new workspace and managed resource group just appear. Azure automatically creates a Databricks workspace, as well as a managed resource group containing all the resources needed to run the cluster. This is protected by a system-level lock to prevent deletions and modifications. The only way to directly remove the lock is to delete the service. This can be a tremendous limitation if changes need to be made to tags in the managed resource group.  However, by making changes to the parent resource group, those changes will be correspondingly updated in the managed resource group.

Want to reap the full benefits of cloud computing? Reconsider your journey.


There’s no denying that companies have realized many benefits from using public clouds – hyperscalability, faster deployment and, perhaps most importantly, flexible operating costs. Cloud has helped organizations gain access to modern applications and new technologies without many upfront costs, and it has transformed software development processes.

But when it comes to public cloud migration, many organizations are acting with greater discretion than it might at first appear. Enterprise IT spending on public cloud services is forecast to grow 18.4 percent in 2021 to total $304.9 billion, according to Gartner. This is an impressive number, but it’s just under 10 percent of the entire worldwide IT spending projected at $3.8 trillion over the same period. While cloud growth is striking, it pays to heed the context.

The data center still reigns


In 2021, spending on data center systems will become the second-largest area of growth in IT spending, just under enterprise software spending. And while much of that growth is attributed to hyperscalers, a significant increase also comes from renewed enterprise data center expansion plans. Based on Anteelo Technology’s internal survey of its global enterprise customers, nearly all of them plan to operate in a hybrid cloud environment, with nearly two-thirds of their technology footprint remaining on-premises over the next five years or longer. Uptime Institute’s 2020 Data Center Industry Survey also shows that a majority of workloads are operating in enterprise data centers.

Adopting cloud is a new way of life


Deciding what should move to the public cloud takes careful planning followed by solid engineering work. We are seeing that some enterprises, in rushing to the public cloud, don’t have an exit strategy for their current environments and data centers. We have all come across companies that started deploying multiple environments in the cloud but did not plan for changes in the way they develop, deploy and maintain applications and infrastructure. As a result, their on-premises costs stayed the same, while their monthly cloud bill kept rising.

Not everything should move to the public cloud. For example, many enterprises have been running key mission-critical business applications that require high transaction processing, high resiliency and high throughput without significant variation in demand due to seasonality. In these cases, protecting and supporting existing IT infrastructure investments and an on-premises data center or a mainframe modernization is more practical as moving such environments to the public cloud is complex and costly.

To achieve the full benefits, including cost benefits, let’s not forget the operational changes that using the public cloud requires — new testing paradigms, different development models, site reliability, security engineering and regulatory compliance — all of which require flexible teams and alternative ways of working and collaborating.

The key point: Enterprises are not moving everything to the public cloud because many critical applications are better suited for private data centers, while potentially availing themselves of private cloud capabilities.

How can Anteelo help?


With ample evidence that hybrid cloud is the best answer for large enterprise customers to successfully adopt a cloud strategy, employing Anteelo as your managed service provider, with our deep engineering, infrastructure and application management experience, is a good bet. We hold a leading position in providing pure mainframe services globally and have the skills on hand to help customers with complex, enterprise-scale transformations.

Our purpose-built technology solutions, throughout the Enterprise Technology Stack, can reduce IT operating costs up to 30 percent. In running and maintaining mission-critical IT systems for our customers, we manage hundreds of data centers, hundreds of thousands of servers and have migrated nearly 200,000 workloads to the hybrid cloud, including businesses that use mainframe systems for their core, critical solutions. A hybrid cloud solution is the ideal, fit-for-purpose answer to meet many unique business demands.


Customers want to migrate or modernize applications for many reasons. Croda International is a good example, with its phased approach for cloud migration. Whether moving to the public cloud, implementing a hybrid approach or enhancing non-cloud systems, Anteelo’s proven, integrated approach enables customers to achieve their goals in the quickest, most cost-effective way.

The lesson here: Be careful about drinking the public cloud-only Kool-Aid. With many cloud migrations falling short of their full, intended benefits, you need to assess the risks and rewards. More importantly, a qualified, experienced engineering team will not only help design the right plan, but will ensure that complications are quickly resolved — making for a smoother journey.

And most importantly, every enterprise should look at public cloud as part of its overall technology footprint, knowing that not everything is right for the cloud. Modernizing the technology in your environment should not be overlooked, since it may bring more timely results and better business outcomes, including improving your security posture.
